Tuesday, October 15, 2013

[how to] how do i get hotel email addresses

how do i get hotel email addresses

Posted: 15 Oct 2013 09:07 PM PDT

I'm looking for hotel email addresses in Texas. I can't get the email addresses; I have looked them up, but I cannot get them to match up with the hotels' physical addresses.

How to make SQL Server Agent job history less verbose?

Posted: 15 Oct 2013 08:04 PM PDT

So, I've created a SQL Server Agent job according to the answer to my previous question:

(screenshots: the job step's General and Advanced pages)

running nightly (at 2:11) according to the schedule:

(screenshot: the nightly schedule)

There are 9 .bak files in 3 subdirectories of the d:\backup\ source folder; 3 old backup files are being deleted and 3 new ones are being created by another, preceding SQL Server Agent job.

The job described here copies and purges files, but the history of this Copy&Purge BAKs SQL Server Agent job shows 96 items:

(screenshot of the job history)

How to make it less verbose?

Eventvwr.msc doesn't contain any errors for the corresponding time period (when the job ran).

Distributed Database using 3rd party services

Posted: 15 Oct 2013 07:33 PM PDT

Many websites offer ways to save information in the form of plain text. Some websites allow retrieval in structured formats such as JSONP, which is easily parsed and available to the client without hitting your own server (well, it hits the third-party server).

My question is: why hasn't this been exploited yet (or has it)? It seems to me that one could easily wire up something that uses a third party's public-facing JSONP API to store their information. Then whenever the client needs that information, it can be fetched easily through that third-party service.

After writing this, I feel like the limited benefits make this moot. The only thing I can come up with is saving disk space, but disk space is so cheap it almost doesn't matter.

What filter to use in SQL Profiler for socket write error

Posted: 15 Oct 2013 07:07 PM PDT

Can someone please help with the initial SQL Profiler filter setup to narrow down the settings and reduce the noise when I'm trying to find out the reason behind a socket write error?

Basically, the problem is that certain integration software is dropping XML files on the way from one application to the end application, though some get through. The errors appear in the log. After we restarted the SQL and app servers, no messages were dropped for the first two days.

I need to figure out the possible reason. We will probably have to run the Profiler for 24 hours because the errors appear only occasionally, so if there is too much data it becomes useless. Any help appreciated. Thank you.

jdbc:jtds:sqlserver://ASSET-DB-02;instance=test;DatabaseName=MonitorEngine'
I/O Error: Connection reset by peer: socket write error
java.sql.SQLException: I/O Error: Connection reset by peer: socket write error

Subscription uploading is not happening every time

Posted: 15 Oct 2013 04:31 PM PDT

I have two SQL Servers (2008 R2) with merge replication between them, one being the publisher and the other the subscriber. I noticed that downloads are happening every minute, but uploads are happening only every 10-15 minutes, and I wonder why. There is very little activity on the subscriber, on average one change per minute. Does the subscriber batch uploads and only send them when the batch reaches a certain size? What can I do to determine the cause of this behavior?

filling a table by triggering multiple tables on insert

Posted: 15 Oct 2013 04:39 PM PDT

I have a table m contains the following fields:

id int(11) not null auto_increment, //PK
name varchar(100) null,
something_else varchar(100) default 'not important'

I need to fill table m using this trigger,

CREATE TRIGGER some_name AFTER INSERT ON `x` FOR EACH ROW INSERT INTO m (name) VALUES (NEW.name);  

which I got from this dba question

but the problem is that each row in m should be filled by AFTER INSERT triggers on multiple different tables, such as x and y. So, how can this be done?

I hope that my question is clear enough as it is in my mind.

Update 1: table m has the following row:

id - name - something_else  1  - null - not important  

after an insert on table x, the row in table m should become:

id - name     - something_else  1  - newthing - not important  

then, after an insert on table y, the row in table m should become:

id - name     - something_else  1  - newthing - new value  

Hence, the row in table m is filled by inserts on different tables and updated accordingly.
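
For what it's worth, a minimal sketch of one way to wire this up, assuming x has a name column (as in the trigger above), y has a something_else column, and y carries a hypothetical m_id column identifying which row of m its insert should update:

CREATE TRIGGER x_after_insert AFTER INSERT ON x
FOR EACH ROW
  INSERT INTO m (name) VALUES (NEW.name);

CREATE TRIGGER y_after_insert AFTER INSERT ON y
FOR EACH ROW
  UPDATE m
     SET something_else = NEW.something_else
   WHERE m.id = NEW.m_id;  -- m_id is a hypothetical link from y back to the row in m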

How can Sql Server have an Index Allocation Map error on object ID 0, index ID -1

Posted: 15 Oct 2013 02:45 PM PDT

I received a fatal error 823 due to an incorrect checksum in tempdb.mdf. This can be a hard drive error (and create corruption), so I ran DBCC CHECKDB on all my databases (on restores of last night's backups, on a different server at least), and one of the databases returned the following issue:

Msg 2576, Level 16, State 1, Line 1
The Index Allocation Map (IAM) page (0:0) is pointed to by the previous pointer of IAM page (1:330) in object ID 0, index ID -1, partition ID 0, alloc unit ID 72057594044612608 (type Unknown), but it was not detected in the scan.

I have looked online about the error, and normally there is a non-zero object ID for the object with the issue. This is a very small database (60 MB), and since the app that uses it has been chugging along for at least a week (I went back to my oldest backup and the issue was there), the corruption is not a big issue in itself (it will still get addressed and will hopefully be ammo for better maintenance procedures).

On my backup copy, running DBCC CHECKDB('DBName', REPAIR_ALLOW_DATA_LOSS) fixes the issue with no apparent data loss, but since this requires single-user mode and downtime, I want to be sure it's the right thing and that I understand what the error message is saying before I risk possible data loss (and a late night).

The 823 read error is being addressed by a different team.
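
For reference, a minimal sketch of the repair sequence under consideration, run against a restored copy first ('DBName' stands in for the affected database):

ALTER DATABASE [DBName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('DBName', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [DBName] SET MULTI_USER;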

Join dynamically on tables in Postgres

Posted: 15 Oct 2013 12:22 PM PDT

SELECT id, audited_row, name
FROM audit_log_change AS C
JOIN audit_log_action AS A ON A.id = C.action_id
JOIN audit_log_table  AS T ON A.audited_table_id = T.id

This produces a result like this

id   audited_row   name
------------------------------
41   6108          poolsystem
42   1108          pool
43   342           user

Values in the name column above are table names, and the corresponding value in audited_row is the id in that table. I am wondering if there is a way to join on those tables dynamically in Postgres? For example, JOIN poolsystem ON poolsystem.id = 6108, JOIN pool ON pool.id = 1108, JOIN user ON user.id = 342.

Thanks
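
One possible approach, sketched with PL/pgSQL dynamic SQL; it assumes every audited table has an integer id column, and the function name and json return type are illustrative only:

CREATE OR REPLACE FUNCTION fetch_audited_row(p_table text, p_id bigint)
RETURNS json AS $$
DECLARE
  result json;
BEGIN
  -- %I quotes the table name as an identifier; the id is bound as a parameter
  EXECUTE format('SELECT row_to_json(t) FROM %I AS t WHERE t.id = $1', p_table)
    INTO result
    USING p_id;
  RETURN result;
END;
$$ LANGUAGE plpgsql;

-- Usage against the query above:
-- SELECT c.id, fetch_audited_row(t.name, c.audited_row)
-- FROM audit_log_change c
-- JOIN audit_log_action a ON a.id = c.action_id
-- JOIN audit_log_table  t ON a.audited_table_id = t.id;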

Query gets slow when bind peeking is turned on

Posted: 15 Oct 2013 11:49 AM PDT

There is an old database that was upgraded from Oracle 10 to 11G R1. In Oracle 10 we had disabled bind peeking. Since Oracle 11 has adaptive cursor sharing, we want to turn bind peeking back on. When we did this we found that many queries went much much faster. Unfortunately, one critical query got very slow. As soon as we turn off bind peeking, the one query gets fast again but everything else goes back to being sluggish.

The question is: In Oracle 11, what would cause bind peeking to make a query slow? I thought adaptive cursor sharing was supposed to take care of the bad bind variable peeks problem.

When is it appropriate database design to use a separate schema?

Posted: 15 Oct 2013 03:26 PM PDT

I am developing a database which has data sourced from many different applications. In my first pass at design, I placed the staging tables each in a schema named for their source application.

As several of the source applications have similar data and similar table names, I use the schema name to differentiate the source application. The alternative I am considering would be using a single schema and including the source application in the table name.

I wanted to look into the design rules pertaining to when to use a different schema and the pros and cons of doing so and I could not find anything.

Is the schema purely for permissioning and security?

Does it make sense from an organizational point of view to create objects in separate schemas beyond what is required for application development or is this just needlessly adding complexity to queries?

Are there any other repercussions of this decision which I have neglected to consider?

I have a table 'log' that doesn't have any FKs, just standalone; is it better to use MyISAM or InnoDB?

Posted: 15 Oct 2013 01:29 PM PDT

Database: MySQL 5.1. Default engine for the database: InnoDB. A web app uses it.

MySQL - Auto-increment does not increment sequentially if the last row was deleted

Posted: 15 Oct 2013 05:34 PM PDT

I have a table which contains an auto-incremented primary key id. If I delete the last row (highest id, for example id = 6) and insert a new row, the new id starts at 7. Which parameter do I have to change so that the new primary key starts at 6?

CREATE TABLE animals (
  id MEDIUMINT NOT NULL AUTO_INCREMENT,
  name CHAR(30) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=MyISAM;

INSERT INTO animals (name) VALUES
('dog'),('cat'),('penguin'),
('lax'),('whale'),('ostrich');

Result:
id name
1 dog
2 cat
3 penguin
4 lax
5 whale
6 ostrich

DELETE FROM animals WHERE id = 6;
INSERT INTO animals (name) VALUES ('x');

Result:
id name
1 dog
2 cat
3 penguin
4 lax
5 whale
7 x

Thanks for advice.
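
One way to get the behavior described, sketched under the assumption that it is safe to rewind the counter (no concurrent inserts and no rows with a higher id):

ALTER TABLE animals AUTO_INCREMENT = 6;
INSERT INTO animals (name) VALUES ('x');  -- this row now gets id 6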

What is a Scalable Storage Mechanism for large TTL data collections

Posted: 15 Oct 2013 05:49 PM PDT

We currently have a legacy web service that stores each XML request/response in SQL Server. The data only needs to persist for 3 days before it is considered expired. SQL Server is not good at deleting rows, since every delete forms part of the transaction log. The DB currently grows at 6-10 GB per day and this is going to increase. Only around 1% of the responses that are stored are ever recalled, so this is a very write-heavy application. Each request/response XML document can be up to 14 KB in size.

What storage mechanism would you choose for up to 50-100 GB of data per day?

I understand the solution is not sustainable and I am really looking for a tactical fix, since we cannot easily change how all our clients query and re-query the data. We could look into a DB that has native support for TTL (Riak, Postgres, etc.), or maybe a file/blob S3/Azure storage solution is a better fit? The issue with a cloud blob storage solution could be lookup performance if we had to scan multiple buckets (since buckets have capacity limits), especially compared to the current SQL Server single-table lookup.

Open to ideas and suggestions?

Install Oracle 11gR2 driver for Node.js 0.10.20

Posted: 15 Oct 2013 08:57 PM PDT

I want to connect Oracle 11gR2 and Node.js 0.10.20. I use this package, but I don't understand this part of the installation process. Can you explain it to me?

# Replace /opt/instantclient_11_2/ with wherever you extracted the Basic Lite files to.
# This appends the Instant Client directory to a dynamic-linker config file...
echo '/opt/instantclient_11_2/' | sudo tee -a /etc/ld.so.conf.d/oracle_instant_client.conf
# ...and ldconfig then rebuilds the shared-library cache so the Oracle client
# libraries can be found at runtime by the Node.js driver.
sudo ldconfig

How to get performance benefits from a view vs subquery?

Posted: 15 Oct 2013 11:39 AM PDT

I'd like to know how/if anyone has gained significant performance benefits from using views instead of subqueries.

I would prefer not to define a specific case, since I'm trying to establish good practice and rules of thumb in addition to case-by-case judgment.

An example would be finding the last claim date in an insurance policy claim list, where you start the search from a sorted/filtered customer set and all claims are in their own table. I'm currently using a view and am thinking about a subquery instead (a sketch of both forms follows the list below).

Things that might affect performance across cases:

  • Can views be used somehow to avoid a full scan where a subquery would need to?
  • Are there any limitations/caveats on ensuring that the best indexes are used when joining to a view?
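
For concreteness, a minimal sketch of the two forms for the "last claim date" example; table and column names are illustrative, and most optimizers expand a plain (non-materialized) view into the query much like the subquery:

CREATE VIEW last_claim_per_policy AS
SELECT policy_id, MAX(claim_date) AS last_claim_date
FROM claims
GROUP BY policy_id;

-- Using the view:
SELECT c.customer_id, v.last_claim_date
FROM customers c
JOIN last_claim_per_policy v ON v.policy_id = c.policy_id;

-- Equivalent inline subquery:
SELECT c.customer_id, sub.last_claim_date
FROM customers c
JOIN (SELECT policy_id, MAX(claim_date) AS last_claim_date
      FROM claims
      GROUP BY policy_id) AS sub
  ON sub.policy_id = c.policy_id;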

Postgres functions vs prepared queries

Posted: 15 Oct 2013 06:45 PM PDT

I'm sure it's there but I've not been able to find a simple definitive answer to this in the docs or via Google:

In Postgres, are prepared queries and user-defined functions equivalent as a mechanism for guarding against SQL injection? Are there particular advantages of one approach over the other? Thanks
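
For comparison, a minimal sketch of both mechanisms; the users table and column names are illustrative:

-- 1. Prepared statement: the value is bound separately from the SQL text.
PREPARE get_user(text) AS
  SELECT * FROM users WHERE email = $1;
EXECUTE get_user('alice@example.com');

-- 2. User-defined function: the argument is likewise passed as a bound value,
--    not concatenated into the query (unless the function itself builds dynamic SQL).
CREATE OR REPLACE FUNCTION get_user_by_email(p_email text)
RETURNS SETOF users AS $$
  SELECT * FROM users WHERE email = p_email;
$$ LANGUAGE sql STABLE;

SELECT * FROM get_user_by_email('alice@example.com');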

Troubleshooting Mirroring Configuration

Posted: 15 Oct 2013 08:40 PM PDT

I am configuring database mirroring on SQL Server 2012. After configuration I am getting the following error when attempting to start the mirroring session:

The TCP server cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational.

An exception occurred while executing a Transact-SQL statement.
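
A few checks that often help narrow this down, sketched below; these catalog views are standard, and they should be run on both the principal and the mirror:

-- Is the mirroring endpoint created and started, and in the expected role?
SELECT name, state_desc, role_desc
FROM sys.database_mirroring_endpoints;

-- Which TCP port is the endpoint listening on?
SELECT name, port
FROM sys.tcp_endpoints
WHERE type_desc = 'DATABASE_MIRRORING';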

multiple line text values in mysqldump text export file

Posted: 15 Oct 2013 10:27 AM PDT

I'm trying to export a 100+ million record table into a txt file. My plan is to split the txt file into small pieces by size or line count and then import them.

One text field has multiple lines, like blog post text; in the txt export it comes out as multiple lines, but I want 1 line per row so I can process the file line by line.

I tried various fields-terminated-by, lines-terminated-by, and fields-escaped-by parameters for the export, but nothing turned that multi-line text into a single, quoted, comma-separated line.

It does quote properly when I export the data in SQL format, but I haven't succeeded in converting the newline characters in the text field to \r\n or \n or whatever those characters are. Even if I escape them, they are still exported as real newlines inside the quotes.
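
One workaround to sketch: encode the newlines yourself at export time so each row becomes a single physical line; table and column names are illustrative:

SELECT id,
       REPLACE(REPLACE(body, '\r', ''), '\n', '\\n') AS body
  FROM posts
  INTO OUTFILE '/tmp/posts.txt'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';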

Altering the location of Oracle-Suggested Backup

Posted: 15 Oct 2013 03:27 PM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/s01/backup/PROD11/PROD11_%U';  

If I run the script manually, the backup set is placed at the above location; when the script is run from the job scheduler, the backup set goes to the RECO disk group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?

MYSQL Timezone support

Posted: 15 Oct 2013 05:27 PM PDT

We have a shared hosting plan, and the host says they do not provide MySQL time zone support on shared hosting. I can create the time-zone-related tables in our database and populate them with the required data (data from our local MySQL time zone tables). How can I view the code/syntax behind MySQL's CONVERT_TZ function?

Thanks Arun
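
For what it's worth, a minimal sketch of CONVERT_TZ usage once the time zone tables are populated; the named-zone form only works if the mysql.time_zone% tables contain data (normally loaded with mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql), while numeric offsets work regardless:

-- Named zones (requires populated time zone tables):
SELECT CONVERT_TZ('2013-10-15 12:00:00', 'UTC', 'America/Chicago');

-- Numeric offsets (no time zone tables needed):
SELECT CONVERT_TZ('2013-10-15 12:00:00', '+00:00', '-05:00');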

replication breaks after upgrading master

Posted: 15 Oct 2013 01:27 PM PDT

I have a replication setup with a 5.1.30 master and a 5.5.16 slave, and the replication is working well.

Now I have upgraded the MySQL master to 5.1.47.

As far as I know, we have to turn off binary logging with sql_log_bin=0 before running the mysql_upgrade program, in order to upgrade the replication setup as well,

but the problem here is that binary logging was not turned off while the mysql_upgrade program was running.

The reason I found is that in 5.1 sql_log_bin is a session variable, and the mysql_upgrade program runs in another session.

So how can I upgrade the server and the replication along with it, without any breakage of the replication setup?

Any suggestions are really useful.

How can I copy from a local file to a remote DB in PostgreSQL?

Posted: 15 Oct 2013 09:27 PM PDT

I am a novice in psql and need some help. How can I load a local CSV to a remote DB?

I am using the following command

\COPY test(user_id, product_id, value)         FROM '/Users/testuser/test.tsv' WITH DELIMITER '\t' CSV HEADER;  

but this looks for the file on the remote DB server, whereas I need it to read the file on my local PC.
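
A minimal sketch of the client-side variant, assuming you run psql from your local machine; the lowercase \copy meta-command reads the file on the client and streams it to the remote server (host, user, and database names are illustrative):

psql -h remote.example.com -U testuser -d targetdb \
     -c "\copy test(user_id, product_id, value) from '/Users/testuser/test.tsv' with delimiter E'\t' csv header"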

extproc env variables oracle 11g

Posted: 15 Oct 2013 09:27 AM PDT

I have oracle 11g with extproc separately configured in listener.ora.

Users report that some environment variables that should be exported are not set.

Where does extproc get its environment from, besides the ENV entry in its definition in listener.ora? Does it come from the shell that started the listener? Why do variables included in ENV not appear?

How could I efficiently check which environment variables extproc has set?

Cascading Inserts in MySql

Posted: 15 Oct 2013 02:37 PM PDT

I have a users table that has a one-to-one relationship to a user_preferences table (primary/foreign key user_id). When a new row is added to the users table (a new id), is there a way to set up the relationship with the user_preferences table so that a row with the new user id is also added to it?
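
A minimal sketch of one way to do this with a trigger, assuming user_preferences.user_id is the only required column (the others having defaults):

CREATE TRIGGER users_after_insert
AFTER INSERT ON users
FOR EACH ROW
  INSERT INTO user_preferences (user_id) VALUES (NEW.id);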

Need to suppress rowcount headers when using \G

Posted: 15 Oct 2013 02:27 PM PDT

Is there a command to suppress the rowcount headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the -s and --skip-column-name options, but these don't suppress the rowcounts.

multivalued weak key in ER database modeling

Posted: 15 Oct 2013 04:27 PM PDT

I was wondering about this since I couldn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray, etc.); the price for each format differs, as does the quantity of each format, so I came up with this:

(ER diagram: first design)

Is this correct from a design perspective? Does it imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

(ER diagram: alternative design)

Thanks in advance.

EDIT: I am adding some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie that exists in the company, I need to store format, price, and stock quantity. I will also need to store customer information with a unique id, name, surname, address, movies that he/she has already bought, and a credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose that other items exist apart from movies) that the customer wants to buy.
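
For illustration only, a minimal sketch of how the movie/format part could be stored without redundancy; all names are assumptions, and the customer and basket tables are omitted:

CREATE TABLE movie (
  movie_id INT PRIMARY KEY,
  title    VARCHAR(200) NOT NULL
);

CREATE TABLE movie_format (
  movie_id INT NOT NULL,
  format   VARCHAR(20)  NOT NULL,  -- 'dvd', 'bluray', ...
  price    DECIMAL(8,2) NOT NULL,
  quantity INT          NOT NULL,
  PRIMARY KEY (movie_id, format),
  FOREIGN KEY (movie_id) REFERENCES movie (movie_id)
);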

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 15 Oct 2013 08:39 PM PDT

First a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past in Access I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access DB and link to this same DSN. I get as far as seeing the tables, but after making my selection, I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. The environment is Windows XP, Access 2007, Advantage DB 8.1.

MySQL 5.5 fails to start on Fedora 16

Posted: 15 Oct 2013 12:27 PM PDT

I installed mysql and mysql-server from the repos (MySQL version 5.5). Then tried to start it, but got an error.

[root@server]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
Job failed. See system logs and 'systemctl status' for details.

Here is the log:

121118  2:41:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
121118  2:41:38 [Note] Plugin 'FEDERATED' is disabled.
121118  2:41:38 InnoDB: The InnoDB memory heap is disabled
121118  2:41:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins
121118  2:41:38 InnoDB: Compressed tables use zlib 1.2.5
121118  2:41:38 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/tmp/ibhsfQfU' (Errcode: 13)
121118  2:41:38  InnoDB: Error: unable to create temporary file; errno: 13
121118  2:41:38 [ERROR] Plugin 'InnoDB' init function returned error.
121118  2:41:38 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
121118  2:41:38 [ERROR] Unknown/unsupported storage engine: InnoDB
121118  2:41:38 [ERROR] Aborting

121118  2:41:38 [Note] /usr/libexec/mysqld: Shutdown complete

121118 02:41:38 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

Fresh installation, nothing changed prior to that, just ran yum update.

Here is the systemctl status trace

[root@linyansho /]# systemctl status mysqld.service
mysqld.service - MySQL database server
  Loaded: loaded (/lib/systemd/system/mysqld.service; disabled)
  Active: failed since Sun, 18 Nov 2012 02:45:19 +0300; 5min ago
  Process: 864 ExecStartPost=/usr/libexec/mysqld-wait-ready $MAINPID (code=exited, status=1/FAILURE)
  Process: 863 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
  Process: 842 ExecStartPre=/usr/libexec/mysqld-prepare-db-dir %n (code=exited, status=0/SUCCESS)
  CGroup: name=systemd:/system/mysqld.service
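
The 'Errcode: 13' (permission denied) on /tmp/ib... in the log points at InnoDB failing to create its temporary files. A minimal sketch of checks that often apply here; paths are the Fedora defaults:

# /tmp should be world-writable with the sticky bit (drwxrwxrwt)
ls -ld /tmp
# If not, restore the usual permissions
chmod 1777 /tmp
# The data directory should be owned by the mysql user
ls -ld /var/lib/mysql
# Then try starting again
systemctl start mysqld.service
# If permissions look fine, SELinux denials are another common cause
# (check /var/log/audit/audit.log)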

Sql Anywhere 11: Restoring incremental backup failure

Posted: 15 Oct 2013 11:27 AM PDT

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify what table it can't find but this is a controlled test and no tables are being created or dropped. I insert a few rows then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it might be related to the complexity of the target database, I made a new blank database and network service. Then I added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. This also failed with the same error when attempting to apply the incremental backups of transaction logs after restoring the full backup.

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure
