Saturday, May 25, 2013

How do I deal with FK constraints when importing data using DTS Import/Export Wizard?

Posted: 25 May 2013 07:37 PM PDT

I am trying to use the SQL Server Import and Export Wizard to copy data from my production DB to my dev DB, but when I do, it fails with the error "The INSERT statement conflicted with the FOREIGN KEY constraint". I have over 40 tables with lots of FK constraints. Is there some easy way to deal with this without having to write a drop constraint/add constraint script?
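Would temporarily disabling the constraints around the import count as an easy way? A sketch of what I mean (sp_MSforeachtable is undocumented, so I'm not sure this is the recommended approach):

-- Disable all FK constraints in the dev database before the import
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ... run the Import/Export Wizard here ...

-- Re-enable and re-validate all FK constraints afterwards
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';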

What should the Performance Tuning Approach be?

Posted: 25 May 2013 04:11 PM PDT

As a DBA, when the client reports a performance slowness issue, what should the approach to solving it be? We cannot always suspect the database for performance slowness; many times it is caused by the network or by the application misbehaving.

I would appreciate it if an expert DBA could shed some light on what the performance tuning approach should be:

  1. Is it a network issue?
  2. Is it an application issue?
  3. Do we need a new index?
  4. Do we need to rewrite the query?
  5. Do we need to rebuild or reorganize the indexes? Many more causes could be possible...

In general, DBAs are blamed for performance slowness even before anyone knows whether it is a database issue at all.
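For example, on a SQL Server box, would starting from the wait statistics be a sensible first step? A sketch (the excluded wait types here are only a sample; a real exclusion list would be longer):

-- Top waits since the last service restart (SQL Server)
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;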

Is it possible to force delete of MySQL tables with FK?

Posted: 25 May 2013 07:04 PM PDT

To delete a table whose key is used as a foreign key in other tables, it is first necessary to delete all the other tables that have an FK relationship with it.

Is there a short way to delete a table together with all of its child tables (the ones that have an FK to the parent table)?

For example,

CREATE TABLE test (
  id int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY(id)
) ENGINE=InnoDB

and id is an FK in 5 child tables. Is it possible to delete TABLE test and force deletion of all 5 child tables that have the id FK, instead of deleting the child tables one by one?
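What I'm hoping for is a single statement, along the lines of this sketch (the child table names here are made up), rather than a per-table cleanup:

SET FOREIGN_KEY_CHECKS = 0;
-- DROP TABLE accepts a comma-separated list, though the child
-- tables still have to be named explicitly
DROP TABLE test, child1, child2, child3, child4, child5;
SET FOREIGN_KEY_CHECKS = 1;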

More CPU cores vs faster disks

Posted: 25 May 2013 11:21 AM PDT

I'm part of a small company so as usual covering a number of different roles. The latest of which is procuring a dedicated SQL Server box for our .NET web app. We've been quoted on a dual Xeon E5-2620 (six core) 2.00 GHz CPU configuration (12 cores in total), with 32 GB of RAM. This has left us with a limited budget for the disk array, which would essentially consist of two 2.5" SAS 300 GB drives (15k RPM) in a RAID 1 config.

I know that the disk setup is sub-optimal for SQL Server and I'd really like to push for RAID 10 so we can put the database, log files and tempdb on their own drives. In order to make this fit our budget, should I consider reducing the number of CPU cores? Or would I get better bang for the buck by keeping the cores and using fewer drives, perhaps 4 in a dual RAID 1 setup?

Here are some additional stats

  • The SQL Server database is tilted towards reads, probably 80% reads vs 20% writes. The current DB size is around 10 GB, growing at a rate of 250 MB per month.

  • Currently running SQL Server 2008 R2 Standard on a single quad-core Xeon box shared with the web server (12 GB RAM, 2 x 10k 300 GB SAS drives in RAID 1); looking to move to SQL Server 2012 Standard.

  • The database serves approx 100-150 concurrent users, with some background scheduling tasks thrown in. Reading this, I'm thinking that 12 cores is serious overkill!

Setting the MySQL slow query time to less than 1 second

Posted: 25 May 2013 08:26 PM PDT

I have a heavily loaded LAMP system serving about 40 requests per second and doing about 300 MySQL queries per second. Here is the output of mysqladmin status:

Uptime: 5051  Threads: 1  Questions: 1418500  Slow queries: 0  Opens: 456    Flush tables: 1  Open tables: 450  Queries per second avg: 280.835  

However, the global status shows about 5-10% table lock contention.

So I want to identify the queries that are taking a long time.

Is it possible to set the slow query log threshold to less than one second? For my queries, even an execution time of 100 ms is high.
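If I read the manual correctly, long_query_time accepts fractional seconds since MySQL 5.1.21, so would this sketch be all that's needed?

-- log anything slower than 100 ms
SET GLOBAL long_query_time = 0.1;
SET GLOBAL slow_query_log = 'ON';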

When should tables NOT be compressed

Posted: 25 May 2013 07:01 AM PDT

We have two servers. One we just compressed, but I took over this little project, and the guy before me says some tables are not good candidates. As with many others, performance is -critical-. Is it accurate to say that some tables shouldn't be compressed? If so, what is a general threshold to look for? I ran his script to check each object's reads and writes, but since that looks like cumulative history rather than activity during a specific time frame, it might not be an accurate way to gauge this.
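Would estimating the savings per table be a better way to decide? A sketch of what I mean, using sp_estimate_data_compression_savings (the table name is hypothetical):

EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'MyBigTable',   -- hypothetical table name
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';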

Combining data from two databases with same structure into one database

Posted: 25 May 2013 09:44 AM PDT

I have five SQL Server databases with the same schema in five different geographical locations. These locations send periodic backups to the central server, which I restore into the five respective databases.

The requirement now is that data from these five databases MUST be combined into one database for consolidation.
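Would adding a site column to the consolidated tables and loading each restored database with INSERT ... SELECT be a reasonable approach? Roughly like this sketch (database, table and column names are made up):

INSERT INTO Consolidated.dbo.Orders (SiteId, OrderNumber, OrderDate)
SELECT 1, OrderNumber, OrderDate
FROM Site1.dbo.Orders;
-- repeat for Site2 through Site5 with their own SiteId values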

Any suggestion for the solution is most welcome.

Label data according to percentile

Posted: 25 May 2013 10:16 AM PDT

I have a table named temp with columns customer_id and sum, and I need to add an extra column customer_label (possibly in a view) that is 1 if the customer lies in the top 10% by value of sum, 2 if within 10%-20%, 3 if within 20%-60%, and 4 if within 60%-100%. How can I do this?

Some sample data is included at sqlfiddle
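Would window functions be the way to go? A sketch of what I mean (assuming the DBMS supports PERCENT_RANK; "sum" is quoted because it is a reserved word, so the quoting may need adjusting):

SELECT customer_id,
       CASE
           WHEN pct <= 0.10 THEN 1
           WHEN pct <= 0.20 THEN 2
           WHEN pct <= 0.60 THEN 3
           ELSE 4
       END AS customer_label
FROM (
    SELECT customer_id,
           PERCENT_RANK() OVER (ORDER BY "sum" DESC) AS pct
    FROM temp
) ranked;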

All users missing after a detach/reattach

Posted: 25 May 2013 08:43 PM PDT

After detaching and reattaching a database to do a copy operation, all of the user objects are now missing from the database (I should add that the copy failed; according to the logs, there was an issue with a number of views. I didn't get a chance to really look it over before I discovered my much bigger problem).

At a glance, the data itself seems to be intact - our most recent entries are present, etc., but the security->users folder looks like a virgin database - it contains DBO, guest, INFORMATION_SCHEMA, and Sys users, and that's it.

All of the logins still exist at the service level, but they're no longer associated with the database itself.

I've also noticed that the performance of the database seems to be hindered.

I do weekly full backups, and nightly differential backups, and I have the transaction log from today, so my current plan is to simply restore from backup and roll forward to just before the error occurred.

That said, for my own edification, I would still like to know if there is anything else that could be done to actually repair this. It seems like some system table must have been corrupted. Is there any sort of repair function (other than restoring from backup) that could remedy this?
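For reference: if the users were merely orphaned rather than missing entirely, would remapping them like this sketch be enough (the user and login names are hypothetical)? In my case, though, the users themselves are gone:

-- Detect orphaned users in the current database
EXEC sp_change_users_login 'Report';

-- Remap one user to its server-level login
ALTER USER [SomeUser] WITH LOGIN = [SomeLogin];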

How to design indexes for columns with NULL values in MySQL?

Posted: 25 May 2013 07:32 PM PDT

I have a database with 40 million entries and want to run queries with the following WHERE clause

...
WHERE
    `POP1` IS NOT NULL
    && `VT` = 'ABC'
    && (`SOURCE` = 'HOME')
    && (`alt` RLIKE '^[AaCcGgTt]$')
    && (`ref` RLIKE '^[AaCcGgTt]$')
    && (`AA` RLIKE '^[AaCcGgTt]$')
    && (`ref` = `AA` || `alt` = `AA`)
LIMIT 10;

POP1 is a float column that can also be NULL. POP1 IS NOT NULL should exclude about 50% of the entries; that's why I put it at the beginning. All the other terms reduce the number only marginally.

Amongst others, I designed an index pop1_vt_source, which does not seem to be used, while an index with vt as the first column is used. EXPLAIN output:

| id | select_type | table | type | possible_keys                          | key                 | key_len | ref         | rows     | Extra       |
|  1 | SIMPLE      | myTab | ref  | vt_source_pop1_pop2,pop1_vt_source,... | vt_source_pop1_pop2 | 206     | const,const | 20040021 | Using where |

Why is the index with pop1 as the first column not used? Is it because of the NOT, or because of NULL in general? How can I improve the design of my indexes and WHERE clauses? Even when limiting to 10 entries, the query takes more than 30 seconds, although the first 100 entries in the table should already contain the 10 matches.
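Would moving the equality columns first and POP1 last help? A sketch of what I mean (IS NOT NULL behaves like a range condition, so I'm guessing it belongs at the end of the index):

ALTER TABLE myTab ADD INDEX vt_source_pop1 (`VT`, `SOURCE`, `POP1`);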

Postgres 9.2 select multiple specific rows in one query

Posted: 25 May 2013 04:03 PM PDT

I have a table with three columns:

  • Id Serial.
  • Value real.
  • timein Timestamp.

I want to select the values based on a range of days, i.e. from two days ago until now. The table may contain one or two rows per day; if a day contains two rows, I want to select the second row plus all the rows following it in the period of time.

To clarify:

id | value | timein
1  | 20.5  | 2013-04-25 11:25:42
2  |  4.2  | 2013-04-26 09:10:42
3  |  3.1  | 2013-04-26 15:45:42
4  |  100  | 2013-04-27 14:52:42
5  | 15.5  | 2013-04-28 17:41:42
6  | 23.3  | 2013-04-29 07:32:42
7  | 21.4  | 2013-04-29 14:32:42

If I wanted to select the values from day 26 (only the second row) plus all the values until day 29 and so on, can it be done in one query?
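Would window functions be the right approach? A sketch of what I mean (assuming the table is called samples, and reading the requirement as "skip the first day's first row when that day has more than one row"):

SELECT id, value, timein
FROM (
    SELECT id, value, timein,
           ROW_NUMBER() OVER (PARTITION BY timein::date
                              ORDER BY timein) AS rn
    FROM samples
    WHERE timein >= now() - interval '2 days'
) t
WHERE rn >= 2                                            -- skip each day's first row...
   OR timein::date > (now() - interval '2 days')::date   -- ...except on the later days
ORDER BY timein;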

Default values for a table, how to refactor?

Posted: 25 May 2013 01:10 PM PDT

I've created two tables: one listing the default price of work for a lesson of a given length (minutes) and topic, individual or not individual; and the other listing special prices for "privileged" teachers who have their own price of work.

Note that the salary in effect is the one given by the newest entry whose startdate is before the date of the lesson.

CREATE TABLE `commonrates` (
  `minutes` SMALLINT(5) UNSIGNED NOT NULL ,
  `topic` SMALLINT(5) UNSIGNED NOT NULL ,
  `individual` ENUM('false','true') CHARACTER SET 'utf8' COLLATE 'utf8_bin' NOT NULL ,
  `price` DECIMAL(7,2) UNSIGNED NOT NULL ,
  `startdate` DATE NOT NULL ,
  UNIQUE INDEX `minutes` (`minutes` ASC, `topic` ASC, `individual` ASC, `startdate` ASC) ,
  INDEX `topic` (`topic` ASC) )

and

CREATE TABLE IF NOT EXISTS `employeerates` (
  `userid` INT(10) UNSIGNED NOT NULL ,
  `minutes` SMALLINT(5) UNSIGNED NOT NULL ,
  `topic` SMALLINT(5) UNSIGNED NOT NULL ,
  `individual` ENUM('false','true') CHARACTER SET 'utf8' COLLATE 'utf8_bin' NOT NULL ,
  `price` DECIMAL(7,2) UNSIGNED NOT NULL ,
  `startdate` DATE NOT NULL ,
  UNIQUE INDEX `userid` (`userid` ASC, `minutes` ASC, `topic` ASC, `individual` ASC, `startdate` ASC) ,
  INDEX `user` (`userid` ASC) ,
  INDEX `topic` (`topic` ASC) )

Now I have two very similar tables. How should I refactor them? I think they should be refactored for better manageability, but I don't know exactly how to do this. Please help.
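Would merging them into a single table where a NULL userid means "default rate" be a sound direction? Roughly like this sketch (note that MySQL UNIQUE indexes allow multiple NULLs, which worries me a bit):

CREATE TABLE `rates` (
  `userid` INT(10) UNSIGNED NULL ,    -- NULL = common/default rate
  `minutes` SMALLINT(5) UNSIGNED NOT NULL ,
  `topic` SMALLINT(5) UNSIGNED NOT NULL ,
  `individual` ENUM('false','true') CHARACTER SET 'utf8' COLLATE 'utf8_bin' NOT NULL ,
  `price` DECIMAL(7,2) UNSIGNED NOT NULL ,
  `startdate` DATE NOT NULL ,
  UNIQUE INDEX `rate` (`userid` ASC, `minutes` ASC, `topic` ASC, `individual` ASC, `startdate` ASC) ,
  INDEX `topic` (`topic` ASC) )

with a lookup that prefers the teacher-specific rate over the default (the values here are hypothetical):

SELECT `price`
FROM `rates`
WHERE (`userid` = 42 OR `userid` IS NULL)
  AND `minutes` = 45 AND `topic` = 3 AND `individual` = 'true'
  AND `startdate` <= '2013-05-25'
ORDER BY `userid` IS NULL, `startdate` DESC   -- specific rate first, then newest
LIMIT 1;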

WITH ROLLUP WHERE NULL = x

Posted: 25 May 2013 12:10 PM PDT

I tried to find this question somewhere else on here, and found a couple that were similar to it -- but all the answers seemed pretty convoluted, and I think it was surprising to me because I just imagined that SQL would have a way easier way to do this.

Essentially, I'm using a GROUP BY to group two values on top of each other. The coding is simple and works well; I get these results using WITH ROLLUP at the end:

Type    Person     Count
Type A  Person 1   10
Type A  Person 2   91
Type A  Person 3   13
Type A  Person 4   10
Type A  Person 5   2
Type A  NULL       126
Type B  Person 6   16
Type B  Person 7   63
Type B  Person 8   15
Type B  Person 9   22
Type B  Person 10  16
Type B  Person 11  3
Type B  Person 12  20
Type B  Person 13  44
Type B  NULL       198
NULL    NULL       360

Is there an easy way to say: if NULL is in the Person column, display it as "Subtotal", and if NULL is in the Type column, display it as "Grand Total"?

And if that's weirdly complicated, I'd be totally fine with just calling all of them "Total."
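Is GROUPING() meant for exactly this? A sketch of what I imagine, assuming SQL Server (untested; the table name is hypothetical):

SELECT
    CASE WHEN GROUPING([Type]) = 1 THEN 'Grand Total' ELSE [Type] END AS [Type],
    CASE WHEN GROUPING([Type]) = 1 THEN ''
         WHEN GROUPING([Person]) = 1 THEN 'Subtotal'
         ELSE [Person] END AS [Person],
    COUNT(*) AS [Count]
FROM MyTable           -- hypothetical table name
GROUP BY [Type], [Person] WITH ROLLUP;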

Thanks!

Database migration, how to

Posted: 25 May 2013 03:10 PM PDT

I have two databases, DB1 and DB2. Both databases contain the two tables mo_sms and mt_sms. This is the structure of those tables:

CREATE TABLE IF NOT EXISTS `mo_sms` (
  `id_MO` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `precio` float(11,2) unsigned DEFAULT '0.00' COMMENT 'Precio del MO',
  `id_user` int(10) unsigned NOT NULL,
  `state` char(1) NOT NULL DEFAULT '0' COMMENT '0=Por procesar 1=Procesado',
  `tipo_sms` tinyint(4) NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MO`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_sms_proceso_state` (`sms_proceso`,`state`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`),
  KEY `ix_varios` (`anio_sms`,`mes_sms`,`comando_id`,`shortcode_id`,`brand_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MO' AUTO_INCREMENT=82972 ;

CREATE TABLE IF NOT EXISTS `mt_sms` (
  `id_MT` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL DEFAULT 'c' COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `id_user` int(10) unsigned NOT NULL,
  `tipo_sms` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `id_MO` int(11) unsigned NOT NULL,
  `state` char(1) DEFAULT '0' COMMENT '0=Por Procesar 1=Procesado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MT`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MT' AUTO_INCREMENT=93853;

I have some values in DB2 that should be migrated (inserted) into DB1. My problem is that I don't know how to take the records from the DB2.mo_sms table, together with the related records from DB2.mt_sms, and insert them into DB1.mo_sms and DB1.mt_sms. For example, I'll fetch all the records from mo_sms with fecha_sms LIKE '%2013-04-19%'; if I then insert those records into DB1.mo_sms, new IDs will be generated and the referential integrity will be lost. Should I do this programmatically, or is there a way to do it using just SQL queries? I'm using MySQL as the DBMS.
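Could a pure-SQL approach work by shifting the migrated ids by a fixed offset, so the mo/mt relationship survives? A sketch (only a few columns are shown; all of them would have to be listed):

-- Offset = highest existing id in the target, so copied ids cannot collide
SET @offset = (SELECT MAX(id_MO) FROM DB1.mo_sms);

INSERT INTO DB1.mo_sms (id_MO, sms_proceso, msisdn, fecha_sms /* , ... */)
SELECT id_MO + @offset, sms_proceso, msisdn, fecha_sms /* , ... */
FROM DB2.mo_sms
WHERE fecha_sms >= '2013-04-19' AND fecha_sms < '2013-04-20';

-- Child rows keep pointing at the right parents via the same offset;
-- omitting id_MT lets AUTO_INCREMENT assign new MT ids
INSERT INTO DB1.mt_sms (sms_proceso, msisdn, fecha_sms, id_MO /* , ... */)
SELECT sms_proceso, msisdn, fecha_sms, id_MO + @offset /* , ... */
FROM DB2.mt_sms
WHERE fecha_sms >= '2013-04-19' AND fecha_sms < '2013-04-20';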

Booking system structure

Posted: 25 May 2013 10:10 AM PDT

I am making a room booking system for a friend's business.

Background: 3 rooms, multiple clients, bookings available 9-5, bookings last 1 hour.

For the database, is this too simple?

Booking record table

Reference | Client ID | Room ID | Timestamp

Client table

Client ID | Name | Phone | Email

Room table

Room ID | Name | Sink | Window | .....
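In SQL, would something like this sketch work? (MySQL syntax; it assumes client and room tables with those primary keys, and the unique key on room + hour is meant to stop double bookings.)

CREATE TABLE booking (
    reference  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    client_id  INT UNSIGNED NOT NULL,
    room_id    INT UNSIGNED NOT NULL,
    slot_start DATETIME     NOT NULL,   -- on the hour, 09:00 to 16:00
    PRIMARY KEY (reference),
    UNIQUE KEY no_double_booking (room_id, slot_start),
    FOREIGN KEY (client_id) REFERENCES client (client_id),
    FOREIGN KEY (room_id)   REFERENCES room (room_id)
) ENGINE=InnoDB;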

Thanks for your help

Nathan.

Why would command text not execute at all?

Posted: 25 May 2013 05:10 PM PDT

In my report I have:

<CommandText>
SELECT
    column1,
    column2,
    'poop'
from mytable
</CommandText>

I am using SQL Server Profiler to see exactly what statement is being sent.

I have set only two filters:

  1. DatabaseName
  2. [screenshot of the second filter]

Yet after running the report, no statement gets intercepted.

I suspect that, because I am a beginner at SSRS, I am missing something crucial here.

For what reason would the command text not be executed at all?

I did follow this question to make sure that I am using SQL Profiler correctly, and indeed I am: http://stackoverflow.com/questions/9107383/sql-server-profiler-capture-calls-to-your-databases-stored-procs-during-ssrs

Another bit of important information: although the chart shows no data:

[screenshot of the empty chart]

I do see data when I run the command text from SSMS!

Loading a CSV file on the local system into a MySQL DB on a remote server

Posted: 25 May 2013 02:10 PM PDT

Can we directly load a CSV file that is on the local system into a MySQL DB that is installed on a remote server?

As far as I know, the LOAD DATA INFILE ... INTO TABLE command can only load a file that sits on the same system as the server.
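The manual seems to say the LOCAL keyword makes the client read the file from the local machine, something like this sketch (local_infile has to be enabled on both client and server; the table and path are hypothetical). Is that right, and does it work against a remote server?

-- run from the local machine, connected to the remote server
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE mydb.mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip a header row, if there is one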

Avoiding performance hit from GROUP BY during FULLTEXT search?

Posted: 25 May 2013 09:10 AM PDT

Is there any clever way to avoid the performance hit from using GROUP BY during a fulltext search?

SELECT p.topic_id, min(p.post_id)
FROM forum_posts AS p
WHERE MATCH (p.post_text) AGAINST ('baby shoes' IN BOOLEAN MODE)
GROUP BY p.topic_id
LIMIT 20;

In this example it's fetching the lowest post_id for unique topic_ids that match the text.

With the GROUP BY to find the MIN, it takes 600 ms in a million-row database, with about 50K rows examined.

If I remove the MIN but leave the GROUP BY, it's just as slow, so the GROUP BY is the hit.

I suspect this is because it can only use one index, the fulltext one?

key: post_text | Using where; Using temporary; Using filesort

Query_time: 0.584685  Lock_time: 0.000137  Rows_sent: 20  Rows_examined: 57751
Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0

Without the GROUP BY it takes 1 ms, so this has to be filesort speed?

(I've removed ORDER BY and everything else to isolate where the hit is)

Thanks for any insight and ideas.

(using MyISAM under MariaDB, if it matters)

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 25 May 2013 11:10 AM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of comparisons of regular RDS vs EC2, but nothing covering the new provisioned IOPS feature in RDS.

sp_startpublication_snapshot Parameter(s)

Posted: 25 May 2013 06:10 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf new names (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help doing) Generates the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall... On this DB server, I have a 'shell' DB that users will run the SP from; it has a history table so I can keep track of who created/deleted databases using my SPs... The only parameter for sp_startpublication_snapshot is @publication. I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e.: the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog]

Any ideas about how to accomplish this?
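Would a three-part name or dynamic SQL run it in the publishing database's context? A sketch, though I don't know whether system procedures resolve that way:

-- Option 1: three-part name
EXEC [WC48_Database1].dbo.sp_startpublication_snapshot
    @publication = N'upb_Inspection_PrimaryArticles';

-- Option 2: dynamic SQL
EXEC (N'USE [WC48_Database1];
       EXEC sp_startpublication_snapshot @publication = N''upb_Inspection_PrimaryArticles'';');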

Thank you, Wes

Input requested on localization setup for DB

Posted: 25 May 2013 07:42 PM PDT

I am building an ASP.Net site that will pull dynamic data from SQL Server. The site will be built on localization/globalization foundations, in anticipation that in a year or so it will start holding translations for non-English-speaking users. I already know how to accomplish this in ASP.Net for the static content, but I would like some suggestions on a DB design for the dynamic data. I have toyed with the following three ideas and would like some input on them, plus any thoughts on other approaches.

  1. Create a separate database for each language set and use ASP.Net to dynamically change which DB the data comes from.
  2. Create copies of each table with the locale code appended, to hold the translated data (i.e. Table-en, Table-fr, Table1-en, Table1-fr).
  3. Leave the DB and table design intact (only adding a single column to hold the locale code) and insert translated data as new rows.
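A possible variation on the third idea would be a separate translation table keyed on the row plus a locale code; a sketch (all names are made up):

-- Base table holds the locale-independent data
CREATE TABLE Product (
    ProductId INT           NOT NULL PRIMARY KEY,
    Price     DECIMAL(10,2) NOT NULL
);

-- One row per product per locale for the translatable text
CREATE TABLE ProductTranslation (
    ProductId   INT            NOT NULL,
    LocaleCode  VARCHAR(10)    NOT NULL,   -- e.g. 'en-US', 'fr-FR'
    Name        NVARCHAR(100)  NOT NULL,
    Description NVARCHAR(1000) NULL,
    PRIMARY KEY (ProductId, LocaleCode),
    FOREIGN KEY (ProductId) REFERENCES Product (ProductId)
);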

query processor ran out of internal resources and could not produce a query plan

Posted: 25 May 2013 07:01 PM PDT

This is showing up in the logs several times a night. How do I find the query causing the issue? SQL Server 2008 R2 SP1.
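Would an Extended Events session on error_reported be the way to catch the offending statement? A sketch of what I mean; I may well have the details wrong for 2008 R2 (I believe the predicate field is named error there, and the file target is asynchronous_file_target rather than the newer event_file):

CREATE EVENT SESSION [catch_8623] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE ([error] = 8623)
)
ADD TARGET package0.asynchronous_file_target (
    SET filename = N'C:\Temp\catch_8623.xel'
);

ALTER EVENT SESSION [catch_8623] ON SERVER STATE = START;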

Thank you

TRUNCATE TABLE statement sometimes hangs

Posted: 25 May 2013 02:34 PM PDT

Why does the TRUNCATE TABLE statement hang sometimes? What are the reasons for this type of issue?

I am migrating from MySQL to MariaDB. This problem doesn't happen with MySQL, only with MariaDB.

The hanging statement is simply:

TRUNCATE TABLE sampledb.datatable;  

What can cause this to happen, and how could I fix it?

One more observation: if the table has only a little data, maybe one or two rows, then the TRUNCATE query works successfully. But if the table has a lot of data, the query hangs.
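For diagnosing it, would looking at what the statement is waiting on, from another session while it hangs, be a reasonable start? E.g.:

-- What state is the TRUNCATE in?
SHOW FULL PROCESSLIST;

-- Any lock waits or pending I/O?
SHOW ENGINE INNODB STATUS\G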

Binlog has bad magic number

Posted: 25 May 2013 09:10 PM PDT

I keep getting this error whenever I start MySQL.

121028  1:38:55 [Note] Plugin 'FEDERATED' is disabled.
121028  1:38:55 InnoDB: The InnoDB memory heap is disabled
121028  1:38:55 InnoDB: Mutexes and rw_locks use Windows interlocked functions
121028  1:38:56 InnoDB: Compressed tables use zlib 1.2.3
121028  1:38:56 InnoDB: Initializing buffer pool, size = 16.0M
121028  1:38:56 InnoDB: Completed initialization of buffer pool
121028  1:38:56 InnoDB: highest supported file format is Barracuda.
121028  1:38:57  InnoDB: Waiting for the background threads to start
121028  1:38:58 InnoDB: 1.1.8 started; log sequence number 3137114
121028  1:38:58 [ERROR] Binlog has bad magic number;  It's not a binary log file that can be used by this version of MySQL
121028  1:38:58 [ERROR] Can't init tc log
121028  1:38:58 [ERROR] Aborting

121028  1:38:58  InnoDB: Starting shutdown...
121028  1:38:58  InnoDB: Shutdown completed; log sequence number 3137114
121028  1:38:58 [Note] C:\PROGRA~2\EASYPH~1.1\MySql\bin\mysqld.exe: Shutdown complete

I have already tried this.

I have an EasyPHP 12.1 setup on a Windows 7 x64 PC.

Passing a path from a batch file to mysql

Posted: 25 May 2013 08:10 AM PDT

My batch file will contain something like this:

set MUSER=root
set MPASS=xprd123
set MHOST=172.33.49.55
set DB=DBName1
set MBIN=C:\Program Files\MySQL\MySQL Server 5.5\bin
set scriptpath=D:\STATS_COLLECTOR

"%MBIN%\mysql.exe" -u"%MUSER%" -p"%MPASS%" -h "%MHOST%" "%DB%" -e "set @2:='%scriptpath%'; source DBQuery1.sql;"

The content of DBQuery1.sql will be:

SELECT 'INNODB_OS_LOG_WRITTEN','QUESTIONS'
UNION
SELECT
  MAX(IF(variable_name = 'INNODB_OS_LOG_WRITTEN', VARIABLE_VALUE, NULL)) AS INNODB_OS_LOG_WRITTEN,
  MAX(IF(variable_name = 'QUESTIONS', VARIABLE_VALUE, NULL)) AS QUESTIONS
FROM information_schema.GLOBAL_STATUS
INTO OUTFILE '@2/stats.csv'
    FIELDS TERMINATED BY ','
    ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
;

I am not sure this is the right way to pass a path to the SELECT query inside DBQuery1.sql.

This code is not working.

I looked at other posts on passing a path to a batch file, but whatever I tried, I was not able to solve this issue.

When I execute the batch file, I am not sure whether the path is getting passed into the SQL file, and if it is, why @2 is not receiving it.

Could anybody kindly help with this?
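Would building the statement with CONCAT and running it as a prepared statement be a way around this, since user variables apparently cannot be used directly as the OUTFILE name? A sketch (I don't know whether INTO OUTFILE is allowed in a prepared statement):

-- inside DBQuery1.sql: build the OUTFILE path at runtime
SET @sql = CONCAT(
    "SELECT 'INNODB_OS_LOG_WRITTEN','QUESTIONS' ",
    "UNION ",
    "SELECT MAX(IF(variable_name = 'INNODB_OS_LOG_WRITTEN', VARIABLE_VALUE, NULL)), ",
    "       MAX(IF(variable_name = 'QUESTIONS', VARIABLE_VALUE, NULL)) ",
    "FROM information_schema.GLOBAL_STATUS ",
    "INTO OUTFILE '", @2, "/stats.csv' ",
    "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\\n'");
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;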
