Tuesday, April 16, 2013

[SQL Server 2008 issues] Changing Execution Plans in Cache



Changing Execution Plans in Cache

Posted: 15 Apr 2013 07:47 PM PDT

Hello, I've experienced something I've not noticed before. We have a heavily used stored procedure in one of our databases, and we've seen intermittent performance issues over the last week. It looks like the execution plan changes even though the stored procedure remains in cache. We've fixed the issue by using the FORCESEEK index hint. What would cause a plan to change? In our case an index seek on a very large table changed to a scan and caused the proc's performance to dive big time. Dave

Is there a better way to insert data than Access?

Posted: 14 Apr 2013 10:41 PM PDT

Is there a better tool than Access to insert data into SQL Server? For managers. Small number of people. Very limited amount of data. For small administrations. To replace spreadsheets used by the managers (insert, read and modify data). Ben Brugman. Last week I tried to ask the same question but bumbled it.

Reinitialize log shipping in SQL Server 2008 R2 after activating the secondary server DB

Posted: 15 Apr 2013 02:24 AM PDT

Hi All, I have set up log shipping between server A (primary) and server B (secondary), and tested it by activating the database on server B (stopping the log shipping job and restoring the database WITH RECOVERY on server B). Now I want to continue the log shipping from this break point. Can I continue without restoring a full backup from server A? Please help. Regards, Suresh

Give access to SQL Server Agent ONLY?

Posted: 15 Apr 2013 05:15 PM PDT

Just wondering whether it is possible to give someone SSMS access but only allow them to run jobs within SQL Server Agent and nothing else?
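For reference, SQL Server Agent permissions are usually granted through msdb's fixed database roles (SQLAgentUserRole, SQLAgentReaderRole, SQLAgentOperatorRole) rather than server-level rights. A minimal sketch, assuming a hypothetical Windows login:

    USE msdb;
    CREATE USER [DOMAIN\JobRunner] FOR LOGIN [DOMAIN\JobRunner];        -- hypothetical login
    EXEC sp_addrolemember N'SQLAgentOperatorRole', N'DOMAIN\JobRunner'; -- can run and view all jobs

SQLAgentUserRole limits the person to jobs they own, while SQLAgentOperatorRole lets them start any job; none of these roles grant access to user databases.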

Ranking Query

Posted: 15 Apr 2013 09:53 AM PDT

Hi All, I have a requirement to find out which employees passed three exams consecutively. If you observe the dataset below, 'a' passed three exams consecutively, 'b' also passed, 'c' and 'd' failed, and 'e' passed, so it should finally display the a, b and e rows only.

declare @table table(id int, name varchar(20), row int identity(1,1))
insert into @table
select 1,'a' union all
select 2,'a' union all
select 3,'a' union all
select 1,'b' union all
select 1,'b' union all
select 2,'b' union all
select 3,'b' union all
select 3,'c' union all
select 3,'c' union all
select 3,'c' union all
select 6,'d' union all
select 6,'d' union all
select 7,'d' union all
select 9,'e' union all
select 10,'e' union all
select 11,'e'

select * from @table

Here I am trying to fetch this using ranking functions but am not able to work out the query. Please help, anyone. Regards, Jagadeesh
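One way to express "three in a row" is the gaps-and-islands pattern with ROW_NUMBER. A minimal sketch against the sample table above (run in the same batch as the table variable), assuming "continuously" means three consecutive exam ids per name:

    WITH d AS (
        SELECT DISTINCT name, id FROM @table        -- collapse duplicate attempts at the same exam
    ), g AS (
        SELECT name, id,
               id - ROW_NUMBER() OVER (PARTITION BY name ORDER BY id) AS grp
        FROM d                                      -- consecutive ids share the same grp value
    )
    SELECT name
    FROM g
    GROUP BY name, grp
    HAVING COUNT(*) >= 3;                           -- at least three consecutive exam ids

For the sample data this returns a, b and e.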

Comparing Two Different Tables, Showing Values In Table 1 Not in Table 2

Posted: 15 Apr 2013 12:23 PM PDT

Hello, I'd like to compare two tables and show values found in table 1 which are [b]NOT[/b] in table 2, then the same for table 2: show values in table 2 which are [b]NOT[/b] in table 1. Here's what I came up with, does it look OK?

select T1.[Name], T2.[Address]
from [dbo].[My Table 1] as T1
left join [dbo].[My Table 2] as T2 on T1.[Name]=T2.[Name]
where T2.[Address] is not null

Thanks
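For reference, the usual anti-join pattern tests the join key of the unmatched side for NULL (IS NULL rather than IS NOT NULL). A sketch using the table names above; swap the two tables to get the reverse direction:

    SELECT T1.[Name]
    FROM [dbo].[My Table 1] AS T1
    LEFT JOIN [dbo].[My Table 2] AS T2
           ON T1.[Name] = T2.[Name]
    WHERE T2.[Name] IS NULL;    -- rows in table 1 with no match in table 2

NOT EXISTS with a correlated subquery is an equivalent alternative.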

No of rows in Temporary Table

Posted: 15 Apr 2013 04:54 PM PDT

What is the maximum number of rows that can be stored in a temporary table?

Fixing Divide by Zero Error

Posted: 15 Apr 2013 03:52 PM PDT

Hello, I have select t1.A/t1.B in a SELECT statement but get a divide-by-zero error. When I wrap it with ISNULL, as in isnull(t1.A/t1.B,'-'), I get the error "Error converting data type varchar to float". How do I fix this? Thanks
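A common workaround is NULLIF on the divisor, which makes the division return NULL instead of raising an error; showing '-' instead of NULL then requires casting the numeric result to a character type. A sketch with a hypothetical table name:

    SELECT ISNULL(CAST(t1.A / NULLIF(t1.B, 0) AS varchar(30)), '-') AS ratio
    FROM dbo.SomeTable AS t1;   -- table name is hypothetical

If the value should stay numeric for further calculations, drop the ISNULL/CAST and just keep A / NULLIF(B, 0).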

Large dataset in one table or several small tables

Posted: 15 Apr 2013 10:48 AM PDT

Hello. Here is my question: Is it better to store a "large" set of data in one table or break it across smaller tables, each with the same schema?

Here is the background: In my client's words there are currently "millions-soon-to-be-billions of records", and they are expecting "thousands of read and write hits per day". The data table is flat, has 33 columns total, and an expected row size of 510 bytes (based on the directions at http://msdn.microsoft.com/en-us/library/ms189124.aspx). 5 of the 33 columns are used to identify a particular 'real-world entity' in the table (this is where my vocabulary may trip me up; hopefully I am using 'entity' correctly). A single real-world entity will likely have more than one row of data associated with it (imagine a single garden plot with many 'crops' planted in it; the 'entity' is the garden plot, the data are the various crops planted in the garden). The schema for the 5 columns used to identify an 'entity' is listed below (I left out the remaining 28 columns for brevity; I can include them if necessary). The remaining fields are predominantly varchar with a couple of ints and dates in the mix as well.

CREATE TABLE [dbo].[TestTable](
    [field1] [varchar](10) NOT NULL,
    [field2] [varchar](7) NOT NULL,
    [field3] [varchar](7) NOT NULL,
    [field4] [varchar](10) NOT NULL,
    [field5] [date] NOT NULL
)

I have recommended that the client keep the data in a single table, indexed on the 5 identifying columns. The data are specific to a US state/county combination, so the client is leaning towards a "per state" approach, i.e. one table for each of the 50 states, or (really crazy I think) a "per-county" approach with 3000+ tables, one for each US county. The business logic that has me concerned, or in doubt, about my recommendation is that when a single row for a given real-world entity is updated, all rows associated with that entity are deleted and the complete set of data for the given real-world entity is inserted. My concern is that potentially 'thousands of writes' per day would negatively impact the table's index, but I am figuring that regular index rebuilds done as part of a maintenance plan would take care of the fragmentation issues (if that is even an issue).

What do you all think: one 'large' table or several/many smaller tables? Thank you

Turn off Predictive Typing

Posted: 11 Nov 2010 07:09 PM PST

Hi, does anyone know how to turn off the predictive typing (IntelliSense) feature in SQL Server 2008 and 2008 R2 when typing in a query window? This has to be the worst feature I have ever seen; it now takes longer to type a piece of SQL as I keep having to correct what it assumed I wanted to type. Thanks in advance for any help. Cheers, Brunzire

Copy a table from one db to another including all configurations

Posted: 11 Apr 2013 01:22 AM PDT

How do I copy a table from one database to another including all its configuration, such as indexes and the primary key? I use select * into newtable from originaltable, but that misses the indexes.

is there a way to skip or ignore unwanted files?

Posted: 15 Apr 2013 06:30 AM PDT

I want to skip or ignore files that have an "unmatched" pattern in the name (e.g. a_unmatched_b.xlsx, b_c_unmatched.xlsx, and so on). Is it possible to do this in SSIS?

Procedure Execution Plans

Posted: 14 Apr 2013 11:07 PM PDT

Hi all, I have a question regarding the best way to approach a situation where an application user can query a set of tables in two different ways. Say there is a "View Mode" that indicates whether the user wants to search by Location or by Property ID range. The underlying SELECT statements would differ only in the WHERE clause; the table joins would be the same. If two procs (B and C) are used they would have different parameter lists and WHERE clauses. It seems there are these choices:

1. Proc A would include View Mode and some optional parameters and would say: IF ViewMode = 1 EXEC Proc B ELSE EXEC Proc C.
2. Proc A would include View Mode and some optional parameters and then branch to one of two different SELECT statements within the proc. It would not use procs B and C at all.

Am I correct in thinking that the execution plan for choice 2 would be inefficient at least part of the time? Would choice 1 be any better? We'll probably end up just letting the app code decide which proc (B or C) to call, but I'm still curious about the general scenario. Thanks

Viewing SSRS pie chart report on mobile devices

Posted: 15 Apr 2013 06:52 AM PDT

I have an SSRS report that includes tables with data and also a pie chart with percentages. This report is currently emailed as a PDF attachment. Due to a request, the data needs to be viewable when not at a computer, so I changed the report to email the entire report in the body of the email rather than as an attachment. When viewing this emailed report on a mobile device, the pie chart is not viewable or readable, depending on the device. Does anyone know 1) whether SSRS reports containing a pie chart can only be viewed on a computer, and 2) if a report does contain a pie chart, how to make it presentable in any mobile device's email?

RFI: what do you do during production incidents?

Posted: 15 Apr 2013 05:25 AM PDT

The CEO of our company wants to know what all the different departments within IT (server operations, network operations, DBAs, etc.) do during a "production incident" to help diagnose the problem. As one of the DBAs, I've been asked to work on the database part of this request. Realizing that this is a very wide and broad subject: what tools do you use, what actions do you perform, etc. when "production incidents" occur in your environments? Some things I've identified, in no particular order, include:

  • Check disk space and database file space
  • Execute sp_who2 or a derivative thereof to see what other SQL processes/jobs may be running, causing blocking, etc.
  • Check Task Manager to see what other applications, services, etc. are running on the server and their consumption of CPU and memory resources
  • Check the SQL and Windows error logs

My plan is to develop/plagiarize a script or series of scripts that I can "pull the trigger on" at the beginning of an incident to gather all of the above information and anything else that I'm overlooking, giving me a one-stop place to evaluate the results and make a decision on how to proceed. The majority of the SQL Server instances in our environment are SQL Server 2008 R2, of various editions, with a very few SQL 2005 instances hanging on, and we're just getting started with SQL Server 2012. Do any of you have suggestions on additional areas to focus on? Scripts that you use? Blogs that have been read/written that I can refer to? Thanks in advance for any suggestions, advice, etc. Mike
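To make this concrete, a minimal first-look script might bundle a few of these checks; a sketch assuming SQL Server 2005+ (for the DMVs) and sysadmin rights, to be extended with whatever else proves useful:

    EXEC sp_who2;                                              -- sessions, blocking, active commands
    SELECT session_id, blocking_session_id, wait_type, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;                            -- who is currently blocked, and by whom
    EXEC xp_readerrorlog 0, 1;                                 -- current SQL Server error log
    EXEC master.dbo.xp_fixeddrives;                            -- free space per drive
    SELECT DB_NAME(database_id) AS database_name, name, size / 128 AS size_mb
    FROM sys.master_files;                                     -- database file sizes

sp_whoisactive (a community procedure) and a pass over the Windows event log are common additions to the same script.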

Update query help

Posted: 15 Apr 2013 12:36 AM PDT

Hi, I need an update query which will get the 'scenario' and 'actType' values from all the rows in column strComment and insert the values of 'scenario' into column keyScenario and the values of 'actType' into column act_Act. Please look at the attached picture. Thank you in advance.

Exceeding 25 replication agents on an instance

Posted: 15 Apr 2013 02:37 AM PDT

Using SQL Standard Edition (2005, 2008), the server is not able to start up more than 25 replication agents. We ran into this using SQL 2000 and had to place a call to MS; there is an undocumented registry setting that allows more than 25 replication agents to be configured per instance. Unfortunately it's been ten years since we ran into this problem and we are unable to find any documentation on the setting. Is anyone here familiar with the problem and has a solution?

Using linked server or create another connection string?

Posted: 15 Apr 2013 02:09 AM PDT

In one of my ASP.NET apps, I created a connection string to ServerA and then created a linked server from ServerA to ServerB. It works fine. But my new boss said that a linked server is not good practice: the app should avoid using the linked server and should create another connection string to ServerB. Is that true?

Sqls2k8 R2 maximum database size

Posted: 15 Apr 2013 01:01 AM PDT

I have about 250 GB to 400 GB of data, and now I am in doubt about which database I should use to store this data.

Cross Server Dictionary

Posted: 14 Apr 2013 10:55 PM PDT

I guess this post is not a 'question' per se, but more just me rambling on about my approach so far, to see what feedback comes in return. Maybe I've overlooked some points, maybe (likely) there's a better solution/approach out there? I'm just seeking the opportunity to bounce my world against people and see what they think.

Background

So, I am tasked to document our databases. The idea being that whether you are a developer, reporter or occasional user, you are not just a victim to interpreting an ambiguous table/field name, but can access a wide range of supporting information. The new auxiliary tables would in simplest terms contain textual descriptions and links to supporting documentation outside the database. In addition, there would be a hierarchy to group fields of equal and similar significance (departure location, arrival location, manufacturing location would all roll up to an entity called 'location'). Phase 1 was just to prototype a dictionary with one database, but now Phase 2 is to create one single dictionary to contain the metadata of multiple databases residing on different servers. In an ideal world, I'd like to make the best use of existing system tables as the foundation level; after all, why duplicate something that already exists and is maintained for you automatically? However, I have struggled with some points of merging and utilizing system data from multiple databases/servers, and I am beginning to re-invent a new system table structure. Thus I'm baring my progress so far in the hope that someone can help refocus how I can make all of this more streamlined!

Approach (so far!)

As we know, we can find all sorts of information about tables, fields and relationships from various system tables and information_schema views. Yet my first challenge was how to link the SQL Server system data to my auxiliary data. Linking on the full textual table name/field name would be inefficient; ID fields would be needed. In an ideal world, I could just use sys.objects object_id, with an additional field to differentiate servers. However, just utilizing the system object_ids might cause problems down the line, as these may change if objects are modified/dropped/recreated. This would cause my subsequent hierarchies to fall apart. In the end I decided that I would require both a permanent non-system ID that identifies a unique field/table for my aux data, and also a procedure to assign to that row the current system object_id (a weekly job to synchronize based on catalog/schema/table/field name). Hence I will maintain a permanent auxiliary structure with my own identities, and be able to reference system tables with the normal object_id. My new table now looks something like this:

[img]http://www.netshed.co.uk/temp/T1.JPG[/img]

For Phase 2, we have to accommodate objects from multiple databases.

[img]http://www.netshed.co.uk/temp/T2.JPG[/img]

So that's where I am at right now. Feel free to rip it all apart and show me a simpler way to create a dictionary that will bridge databases and servers!


Need Link

Posted: 14 Apr 2013 09:28 PM PDT

Can anyone provide a link for Visual Studio 2005? I am unable to find it.

Views + OUTER APPLY

Posted: 14 Apr 2013 09:13 PM PDT

Hey, SQL views don't seem to like OUTER APPLY, so is there an alternative? (I know they work, just not in diagrams etc.) This is what I need to achieve:

[code="sql"]OUTER APPLY (SELECT TOP 1 acr.ACR_ACT_Code
FROM [Activity Records] acr
WHERE acr.ACR_Clock_No = cr.CLK_Clock_No
AND CONVERT(date, acr.ACR_Shift_Date) = CONVERT(date, cr.CLK_Shift_Date)
ORDER BY acr.ACR_Start_Date + acr.ACR_Start_Time DESC) acr[/code]

Thanks

[SQL Server] Efficiently Reuse Gaps in an Identity Column




Monday, April 15, 2013

[SQL Server] Using REPLACE in an UPDATE statement




[SQL Server] Handling SQL Server Errors




[how to] impdp failing with ORA-01031: insufficient privileges



impdp failing with ORA-01031: insufficient privileges

Posted: 15 Apr 2013 06:38 PM PDT

I'm trying to import a dmp file provided by our dba to my dev database. I've been able to export the contents to a SQLFILE and the contents look ok. I tried importing the schema and it kept on failing (same errors as shown below); so I tried running the SQL directly from the sqlfile - this created all the schema objects but didn't import the data.

The export was from the expenses schema from our test database. I already have another schema by that name in my dev database that I don't want to touch; so I'm remapping expenses to expenses1.

This schema (expenses1) has READ and WRITE privilege on the data_pump_dir directory. It has, as far as I can tell, all the privileges it needs (create table, view, procedure, sequence, etc) as well as IMP_FULL_DATABASE.

[oradev@DIAS45 dpdump]$ impdp expenses1/expenses1 logfile=expenses1.log remap_schema=EXPENSES:EXPENSES1 directory=data_pump_dir dumpfile=ossarct_expenses_expdp.dmp content=DATA_ONLY schemas=EXPENSES include=TABLE

Import: Release 11.2.0.3.0 - Production on Tue Apr 16 09:28:21 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

UDI-31626: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1

[oradev@DIAS45 dpdump]$ cat expenses1.log
;;;
Import: Release 11.2.0.3.0 - Production on Tue Apr 16 09:28:21 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "EXPENSES1"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "EXPENSES1"."SYS_IMPORT_SCHEMA_01":  expenses1/******** logfile=expenses1.log remap_schema=EXPENSES:EXPENSES1 directory=data_pump_dir dumpfile=ossarct_expenses_expdp.dmp content=DATA_ONLY schemas=EXPENSES include=TABLE
ORA-39097: Data Pump job encountered unexpected error -1031
ORA-39065: unexpected master process exception in DISPATCH
ORA-01031: insufficient privileges

I can't tell from the above what privilege is missing, or if my command is telling impdp to do more than I'm expecting it to. I've tried skipping the schemas and include parameters with no difference.

Avoiding Multiple Queries when Searching for Records Associated with a Set of Records

Posted: 15 Apr 2013 03:45 PM PDT

So, I am sure I have done something really stupid while designing this, and I'm open to schema changes if they'll really help me out.

On to the problem:

I have a custom shopping cart system (backed by a MySQL database) that includes a products table and a price_rules table that's used for computing discounts and applying promo code discounts. Some price rules don't have promo codes attached to them; some are simply "10% off of product X from March 1st through April 1st" or similar. Because a single price rule can apply to many individual products, I also have a join table called price_rule_product.

When showing a set of products (for example, on the main shop page or listing all products in a category) I'm currently running a separate query for each product to look for price rules that apply to that product. Here's what one of those queries looks like:

SELECT DISTINCT price_rule.* FROM price_rule
    INNER JOIN price_rule_product ON
        price_rule.id = price_rule_product.price_rule_id
    INNER JOIN product ON
        product.id = price_rule_product.product_id
    WHERE product.id = 10
        AND price_rule.apply_to = 'line_items'
        AND ( price_rule.start_date IS NULL
            OR price_rule.start_date = 0
            OR price_rule.start_date <= 1366063902 )
        AND ( price_rule.end_date IS NULL
            OR price_rule.end_date = 0
            OR price_rule.end_date >= 1366063902 )
        AND ( price_rule.promo_code IS NULL
            OR price_rule.promo_code = '' )
    ORDER BY sort ASC

Oh SQL Gods, I pray you have some suggestions/solutions for this. It is causing some significant performance issues, and I'm just not experienced enough with SQL to figure out where to go from here.
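One way to cut the per-product round trips is to fetch the rules for the whole set of products being displayed in a single query and group them in application code. A sketch against the schema described above; the id list 10, 11, 12 stands in for whatever products are on the page:

    SELECT DISTINCT prp.product_id, price_rule.*
    FROM price_rule
    INNER JOIN price_rule_product AS prp
            ON price_rule.id = prp.price_rule_id
    WHERE prp.product_id IN (10, 11, 12)
      AND price_rule.apply_to = 'line_items'
      AND (price_rule.start_date IS NULL OR price_rule.start_date = 0 OR price_rule.start_date <= UNIX_TIMESTAMP())
      AND (price_rule.end_date   IS NULL OR price_rule.end_date   = 0 OR price_rule.end_date   >= UNIX_TIMESTAMP())
      AND (price_rule.promo_code IS NULL OR price_rule.promo_code = '')
    ORDER BY sort ASC;

An index on price_rule_product (product_id, price_rule_id) would support this access path; the join to the product table in the original query is not needed once the filter is on price_rule_product.product_id.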

Can putting mysql DB into memory or having the innodb_buffer_pool_size match the size of DB increase performance?

Posted: 15 Apr 2013 02:28 PM PDT

We have a MySQL database with roughly 80 gigabytes (GB) of data using the InnoDB engine. We are looking to increase performance on the database (we looked at the slow log and went through and optimized the DB and indexes), and one avenue is to get better hardware.

Currently our mysql DB is on a 6 year old server with 16GB of mem with 2 x Xeon 2.00GHz (2 cores) with the ability to have 32 GB mem max.

We are looking to buy a new server for the mysql database with the following specs:

Dell PowerEdge R420 - Highlight specs

  • 2 Intel Xeon E5-2450 2.10GHz, 20M Cache
  • 128 GB RAM
  • 400GB Solid State Drive

So if the database is 80 GB and we have 128 GB of memory on this new server, my questions are:

Can we put the full database into memory and see performance gains?

Would setting the innodb_buffer_pool_size to 88GB effectively be the same as putting the "the DB in memory"?

Can mysql take advantage of this memory? Are there any limitations on this on the mysql side?

Would there be any pitfalls of loading the full mysql in database or having such a large innodb_buffer_pool_size?
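For context, a minimal my.cnf sketch for a host like this, assuming a dedicated MySQL server and an InnoDB-only workload; the numbers are starting points to benchmark, not universal recommendations:

    [mysqld]
    innodb_buffer_pool_size = 88G      # large enough to hold the ~80 GB of data plus growth
    innodb_log_file_size    = 1G       # bigger redo logs smooth out heavy writes
    innodb_flush_method     = O_DIRECT # avoid double-buffering through the OS page cache

With a buffer pool larger than the data set, reads are served from memory once the pool is warm, which is effectively "the DB in memory"; writes still have to reach disk for durability.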

mySQL data table seems to stuck at 16777215 rows

Posted: 15 Apr 2013 02:21 PM PDT

I am creating a MySQL table and loading data from a local file.

CREATE TABLE `patent` (
    `seq_number` mediumint(9) unsigned NOT NULL auto_increment,
    `seq_id` varchar(24) default NULL,
    `date` date default NULL,
    `patent_number` varchar(16) default NULL,
    `pat_seq_number` smallint(6) unsigned default '0',
    `organism` text,
    `assignee` text,
    `seq_length` int(8) default '0',
    `seq` text,
    PRIMARY KEY (`seq_number`),
    Index `seq_id` (`seq_id`),
    Index `patent_number` (`patent_number`),
    Index `pat_seq_number` (`pat_seq_number`),
    Index `seq_length` (`seq_length`)
) Engine = MyISAM MAX_ROWS=536870912;

load data local infile '/data/p_table.txt' into table patent ignore 0 lines
    (seq_id, date, patent_number, pat_seq_number, organism, assignee, seq, seq_length);

It seems that I am stuck at the number of rows:

mysql> select count(*) from patent;
+----------+
| count(*) |
+----------+
| 16777215 |
+----------+
1 row in set (0.00 sec)

Although my txt data table has about 20 million lines (records).

I tried Engine=InnoDB

I also tried Engine = MyISAM MAX_ROWS=200000000 (200 million)

mysql> show table status like 'patent'\G
*************************** 1. row ***************************
           Name: patent
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 16777215
 Avg_row_length: 244
    Data_length: 4098768552
Max_data_length: 281474976710655
   Index_length: 715227136
      Data_free: 0
 Auto_increment: 16777216
    Create_time: 2013-04-14 12:46:10
    Update_time: 2013-04-14 12:50:43
     Check_time: 2013-04-14 12:54:08
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: max_rows=536870912
        Comment:
1 row in set (0.00 sec)

The manuals and online advices all say I should be able to create large tables.

Here is my Linux uname output:

Linux ... 2.6.32-279.el6.x86_64 #1 SMP Wed Jun 13 18:24:36 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

Here is the MySQL version:

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 318
Server version: 5.1.67 Source distribution

How can I create a larger table so the number of rows can exceed 2^24 - 1 = 16777215?
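One detail worth checking: the primary key seq_number is declared MEDIUMINT UNSIGNED, whose maximum value is exactly 16777215, and the SHOW TABLE STATUS output above reports Auto_increment: 16777216, i.e. one past that ceiling. If that is the limiting factor (rather than MAX_ROWS), a sketch of the fix is simply to widen the column:

    ALTER TABLE patent
        MODIFY seq_number INT UNSIGNED NOT NULL AUTO_INCREMENT;  -- raises the ceiling to 4294967295

After the ALTER, re-run the LOAD DATA for the remaining rows (or reload the whole file into an empty table).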

Reclaiming space from a large partitioned database

Posted: 15 Apr 2013 02:36 PM PDT

  • We have a database where one of the business groups decided to remove several million rows as part of their upgrade process (we removed them for them). This database is partitioned on a date, and removing all these rows would gain back around 30% of the DB size. But obviously, if we run the shrink command, there is a possibility of the database getting fragmented, and running a rebuild obviously increases the space used by the database. Is there any better way of reclaiming the space other than moving to a different filegroup, as suggested by Paul Randall?
  • If we go via the traditional shrink and rebuild-index route to reclaim the space on several of these partitions, we will have to put the db in simple recovery (bulk-logged did not help much with the reindex) to avoid the transaction log getting filled. This obviously will break the log shipping, and we will have to set up log shipping on this VLDB again, which would be another tedious piece of work. For a VLDB, what would be the best way to reclaim space for a few of these tables without breaking log shipping and using the existing commands?

Thanks for your help with this

Generate script for SQL Broker objects

Posted: 15 Apr 2013 01:50 PM PDT

I'm using SQL Server Management Studio to generate scripts for all the objects in an existing database:

(screenshot of the SSMS Generate Scripts options dialog)

but the objects related to SQL Broker (messages, queues and services) are not generated.

I've fiddled with the advanced options but still can't get them out. Is there a way to do it?

import csv data stored in a blob column

Posted: 15 Apr 2013 08:03 PM PDT

Here's what i need to do:

  1. Users will upload CSV files into a SQL Server 2012 blob column
  2. Each night, I would like to take each file and import its data into a table

Is it possible to use a stored procedure to read the file inside the blob column, loop over each line and insert the data into a table?

Should I:

  1. Export the file to a temporary file on the file system
  2. Read that file and do the import

or can I:

  1. Read the file directly from the blob column and do the import? (A rough sketch of this approach follows below.)
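A minimal sketch of the third option, assuming an ASCII CSV with CRLF line endings and exactly two comma-separated columns per line; the table and column names (dbo.UploadedFiles, FileData, dbo.ImportedRows) are hypothetical:

    DECLARE @csv varchar(max), @line varchar(4000), @pos int;

    SELECT @csv = CONVERT(varchar(max), FileData)       -- reinterpret the blob bytes as text
    FROM dbo.UploadedFiles
    WHERE Id = 1;

    WHILE LEN(@csv) > 0
    BEGIN
        SET @pos  = CHARINDEX(CHAR(13) + CHAR(10), @csv);
        SET @line = CASE WHEN @pos > 0 THEN LEFT(@csv, @pos - 1) ELSE @csv END;
        SET @csv  = CASE WHEN @pos > 0 THEN STUFF(@csv, 1, @pos + 1, '') ELSE '' END;

        IF CHARINDEX(',', @line) > 0
            INSERT INTO dbo.ImportedRows (Col1, Col2)
            SELECT LEFT(@line, CHARINDEX(',', @line) - 1),
                   STUFF(@line, 1, CHARINDEX(',', @line), '');
    END

Writing the blob out to disk and using BULK INSERT (the first option) usually scales better for large files; the loop above is easiest when everything must stay inside the database.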

Order by on an alphanumerical column

Posted: 15 Apr 2013 12:52 PM PDT

I have a column that has strings of the following type:
Miller 10
Allen 20
King 10
....
Jones 100

I try to sort the column based on the numerical part of the data.
I tried the following:

SELECT * FROM the_table ORDER BY CAST(RIGHT(data,2) AS UNSIGNED);

But this does not work. It places Jones 100 as the first entry etc.

What am I doing wrong?
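Assuming MySQL (the CAST ... AS UNSIGNED suggests it), the problem is that RIGHT(data, 2) only looks at the last two characters, so 'Jones 100' becomes '00'. One sketch is to sort on everything after the last space instead:

    SELECT *
    FROM the_table
    ORDER BY CAST(SUBSTRING_INDEX(data, ' ', -1) AS UNSIGNED);  -- numeric part after the last space

Add a secondary ORDER BY on the name part if rows with equal numbers should also be ordered alphabetically.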

MySQL database drop insanely slow

Posted: 15 Apr 2013 03:04 PM PDT

I just installed MySQL 5.0.45-log Source distribution on my Debian 6.0.6 server.

I installed it under my user home directory like I'm used to doing.

But this time the queries are extremely slow to run.

Running a create table or a database drop takes ages. I can literally watch tv in the meantime.

So I did a profiling of the database drop statement.

mysql> SHOW PROFILES;
+----------+--------------+------------------------------+
| Query_ID | Duration     | Query                        |
+----------+--------------+------------------------------+
|        1 | 369.54719400 | drop database db_madeintouch |
|        2 |   0.00004600 | SELECT DATABASE()            |
+----------+--------------+------------------------------+
2 rows in set (0.00 sec)

mysql> SHOW PROFILE FOR QUERY 1;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| (initialization)     | 0.000001  |
| checking permissions | 369.54705 |
| Opening table        | 0.000103  |
| System lock          | 0.000003  |
| Table lock           | 0.000018  |
| query end            | 0.000004  |
| freeing items        | 0.000004  |
| logging slow query   | 0.000002  |
+----------------------+-----------+
8 rows in set (0.00 sec)

We can see the time it takes for the checking of permissions is of 369 seconds.

I also did a show status of the InnoDB engine.

mysql> show engine innodb status\G  *************************** 1. row ***************************  Status:   =====================================  130415 23:11:27 INNODB MONITOR OUTPUT  =====================================  Per second averages calculated from the last 9 seconds  ----------  SEMAPHORES  ----------  OS WAIT ARRAY INFO: reservation count 781, signal count 781  Mutex spin waits 0, rounds 8629, OS waits 231  RW-shared spins 379, OS waits 190; RW-excl spins 380, OS waits 342  ------------  TRANSACTIONS  ------------  Trx id counter 0 7599  Purge done for trx's n:o < 0 7430 undo n:o < 0 0  History list length 3  Total number of lock structs in row lock hash table 0  LIST OF TRANSACTIONS FOR EACH SESSION:  ---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364518656  MySQL thread id 16, query id 1305 localhost stephane checking permissions  drop database db_madeintouch  ---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364383488  MySQL thread id 13, query id 1307 localhost stephane  show engine innodb status  ---TRANSACTION 0 7597, COMMITTED IN MEMORY, process no 14133, OS thread id 140617364518656 dropping table  COMMITTING , undo log entries 16  MySQL thread id 16, query id 1305 localhost stephane checking permissions  drop database db_madeintouch  --------  FILE I/O  --------  I/O thread 0 state: waiting for i/o request (insert buffer thread)  I/O thread 1 state: waiting for i/o request (log thread)  I/O thread 2 state: waiting for i/o request (read thread)  I/O thread 3 state: waiting for i/o request (write thread)  Pending normal aio reads: 0, aio writes: 0,   ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0  Pending flushes (fsync) log: 1; buffer pool: 0  174 OS file reads, 3781 OS file writes, 2099 OS fsyncs  0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s  -------------------------------------  INSERT BUFFER AND ADAPTIVE HASH INDEX  -------------------------------------  Ibuf: size 1, free list len 0, seg size 2,  0 inserts, 0 merged recs, 0 merges  Hash table size 17393, used cells 122, node heap has 1 buffer(s)  0.00 hash searches/s, 0.00 non-hash searches/s  ---  LOG  ---  Log sequence number 0 7801057  Log flushed up to   0 7798962  Last checkpoint at  0 7798962  1 pending log writes, 0 pending chkp writes  1535 log i/o's done, 0.00 log i/o's/second  ----------------------  BUFFER POOL AND MEMORY  ----------------------  Total memory allocated 22136914; in additional pool allocated 1048576  Buffer pool size   512  Free buffers       2  Database pages     509  Modified db pages  18  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages read 183, created 1444, written 6980  0.00 reads/s, 0.00 creates/s, 0.00 writes/s  No buffer pool page gets since the last printout  --------------  ROW OPERATIONS  --------------  0 queries inside InnoDB, 0 queries in queue  1 read views open inside InnoDB  Main thread process no. 14133, id 140617334142720, state: waiting for server activity  Number of rows inserted 0, updated 0, deleted 0, read 0  0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s  ----------------------------  END OF INNODB MONITOR OUTPUT  ============================    1 row in set (0.00 sec)  

And here is my environment.

mysql> SHOW VARIABLES;

+---------------------------------+--------------------------------------------------------------------+ | Variable_name | Value
| +---------------------------------+--------------------------------------------------------------------+ | auto_increment_increment | 1
| | auto_increment_offset | 1
| | automatic_sp_privileges | ON
| | back_log | 50
| | basedir | /home/stephane/programs/mysql-5.0.45/install/ | | binlog_cache_size | 32768
| | bulk_insert_buffer_size | 8388608
| | character_set_client | latin1
| | character_set_connection | latin1
| | character_set_database | latin1
| | character_set_filesystem | binary
| | character_set_results | latin1
| | character_set_server | latin1
| | character_set_system | utf8
| | character_sets_dir | /home/stephane/programs/mysql-5.0.45/install/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci
| | collation_database | latin1_swedish_ci
| | collation_server | latin1_swedish_ci
| | completion_type | 0
| | concurrent_insert | 1
| | connect_timeout | 5
| | datadir | /home/stephane/programs/mysql/install/data/ | | date_format | %Y-%m-%d
| | datetime_format | %Y-%m-%d %H:%i:%s
| | default_week_format | 0
| | delay_key_write | ON
| | delayed_insert_limit | 100
| | delayed_insert_timeout | 300
| | delayed_queue_size | 1000
| | div_precision_increment | 4
| | engine_condition_pushdown | OFF
| | expire_logs_days | 0
| | flush | OFF
| | flush_time | 0
| | ft_boolean_syntax | + -><()~*:""&|
| | ft_max_word_len | 84
| | ft_min_word_len | 4
| | ft_query_expansion_limit | 20
| | ft_stopword_file | (built-in)
| | group_concat_max_len | 1024
| | have_archive | NO
| | have_bdb | NO
| | have_blackhole_engine | NO
| | have_compress | YES
| | have_crypt | YES
| | have_csv | NO
| | have_dynamic_loading | YES
| | have_example_engine | NO
| | have_federated_engine | NO
| | have_geometry | YES
| | have_innodb | YES
| | have_isam | NO
| | have_merge_engine | YES
| | have_ndbcluster | NO
| | have_openssl | NO
| | have_ssl | NO
| | have_query_cache | YES
| | have_raid | NO
| | have_rtree_keys | YES
| | have_symlink | YES
| | hostname | server1
| | init_connect |
| | init_file |
| | init_slave |
| | innodb_additional_mem_pool_size | 1048576
| | innodb_autoextend_increment | 8
| | innodb_buffer_pool_awe_mem_mb | 0
| | innodb_buffer_pool_size | 8388608
| | innodb_checksums | ON
| | innodb_commit_concurrency | 0
| | innodb_concurrency_tickets | 500
| | innodb_data_file_path | ibdata1:10M:autoextend
| | innodb_data_home_dir |
| | innodb_doublewrite | ON
| | innodb_fast_shutdown | 1
| | innodb_file_io_threads | 4
| | innodb_file_per_table | OFF
| | innodb_flush_log_at_trx_commit | 1
| | innodb_flush_method |
| | innodb_force_recovery | 0
| | innodb_lock_wait_timeout | 50
| | innodb_locks_unsafe_for_binlog | OFF
| | innodb_log_arch_dir |
| | innodb_log_archive | OFF
| | innodb_log_buffer_size | 1048576
| | innodb_log_file_size | 5242880
| | innodb_log_files_in_group | 2
| | innodb_log_group_home_dir | ./
| | innodb_max_dirty_pages_pct | 90
| | innodb_max_purge_lag | 0
| | innodb_mirrored_log_groups | 1
| | innodb_open_files | 300
| | innodb_rollback_on_timeout | OFF
| | innodb_support_xa | ON
| | innodb_sync_spin_loops | 20
| | innodb_table_locks | ON
| | innodb_thread_concurrency | 8
| | innodb_thread_sleep_delay | 10000
| | interactive_timeout | 28800
| | join_buffer_size | 131072
| | key_buffer_size | 16384
| | key_cache_age_threshold | 300
| | key_cache_block_size | 1024
| | key_cache_division_limit | 100
| | language | /home/stephane/programs/mysql-5.0.45/install/share/mysql/english/ | | large_files_support | ON
| | large_page_size | 0
| | large_pages | OFF
| | lc_time_names | en_US
| | license | GPL
| | local_infile | ON
| | locked_in_memory | OFF
| | log | ON
| | log_bin | OFF
| | log_bin_trust_function_creators | OFF
| | log_error | /home/stephane/programs/mysql/install/mysql.error.log | | log_queries_not_using_indexes | OFF
| | log_slave_updates | OFF
| | log_slow_queries | ON
| | log_warnings | 1
| | long_query_time | 10
| | low_priority_updates | OFF
| | lower_case_file_system | OFF
| | lower_case_table_names | 0
| | max_allowed_packet | 1047552
| | max_binlog_cache_size | 18446744073709551615
| | max_binlog_size | 1073741824
| | max_connect_errors | 10
| | max_connections | 100
| | max_delayed_threads | 20
| | max_error_count | 64
| | max_heap_table_size | 16777216
| | max_insert_delayed_threads | 20
| | max_join_size | 18446744073709551615
| | max_length_for_sort_data | 1024
| | max_prepared_stmt_count | 16382
| | max_relay_log_size | 0
| | max_seeks_for_key | 18446744073709551615
| | max_sort_length | 1024
| | max_sp_recursion_depth | 0
| | max_tmp_tables | 32
| | max_user_connections | 0
| | max_write_lock_count | 18446744073709551615
| | multi_range_count | 256
| | myisam_data_pointer_size | 6
| | myisam_max_sort_file_size | 9223372036854775807
| | myisam_recover_options | OFF
| | myisam_repair_threads | 1
| | myisam_sort_buffer_size | 8388608
| | myisam_stats_method | nulls_unequal
| | net_buffer_length | 2048
| | net_read_timeout | 30
| | net_retry_count | 10
| | net_write_timeout | 60
| | new | OFF
| | old_passwords | OFF
| | open_files_limit | 1024
| | optimizer_prune_level | 1
| | optimizer_search_depth | 62
| | pid_file | /home/stephane/programs/mysql/install/data/server1.pid | | port | 3306
| | preload_buffer_size | 32768
| | profiling | OFF
| | profiling_history_size | 15
| | protocol_version | 10
| | query_alloc_block_size | 8192
| | query_cache_limit | 1048576
| | query_cache_min_res_unit | 4096
| | query_cache_size | 0
| | query_cache_type | ON
| | query_cache_wlock_invalidate | OFF
| | query_prealloc_size | 8192
| | range_alloc_block_size | 2048
| | read_buffer_size | 258048
| | read_only | OFF
| | read_rnd_buffer_size | 258048
| | relay_log_purge | ON
| | relay_log_space_limit | 0
| | rpl_recovery_rank | 0
| | secure_auth | OFF
| | secure_file_priv |
| | server_id | 1
| | skip_external_locking | ON
| | skip_networking | OFF
| | skip_show_database | OFF
| | slave_compressed_protocol | OFF
| | slave_load_tmpdir | /tmp/
| | slave_net_timeout | 3600
| | slave_skip_errors | OFF
| | slave_transaction_retries | 10
| | slow_launch_time | 2
| | socket | /tmp/mysql.sock
| | sort_buffer_size | 65528
| | sql_big_selects | ON
| | sql_mode |
| | sql_notes | ON
| | sql_warnings | OFF
| | ssl_ca |
| | ssl_capath |
| | ssl_cert |
| | ssl_cipher |
| | ssl_key |
| | storage_engine | MyISAM
| | sync_binlog | 0
| | sync_frm | ON
| | system_time_zone | MSK
| | table_cache | 4
| | table_lock_wait_timeout | 50
| | table_type | MyISAM
| | thread_cache_size | 0
| | thread_stack | 131072
| | time_format | %H:%i:%s
| | time_zone | SYSTEM
| | timed_mutexes | OFF
| | tmp_table_size | 33554432
| | tmpdir | /tmp/
| | transaction_alloc_block_size | 8192
| | transaction_prealloc_size | 4096
| | tx_isolation | REPEATABLE-READ
| | updatable_views_with_limit | YES
| | version | 5.0.45-log
| | version_comment | Source distribution
| | version_compile_machine | x86_64
| | version_compile_os | unknown-linux-gnu
| | wait_timeout | 28800
| +---------------------------------+--------------------------------------------------------------------+ 225 rows in set (43.41 sec)

Is there a way to truncate a table that has a foreign key?

Posted: 15 Apr 2013 01:49 PM PDT

Is there a way to truncate a table that has foreign keys? DELETE and reseed can take too long. Is dropping and recreating the keys the only way? If so, is there a tool that does this?
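For reference, SQL Server will not TRUNCATE a table that is referenced by a foreign key (even a disabled one), so the usual pattern is drop, truncate, recreate. A sketch with hypothetical table and constraint names:

    ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;      -- hypothetical names
    TRUNCATE TABLE dbo.Parent;
    ALTER TABLE dbo.Child ADD CONSTRAINT FK_Child_Parent
        FOREIGN KEY (ParentId) REFERENCES dbo.Parent (ParentId);

The DROP/ADD statements can be generated from sys.foreign_keys and sys.foreign_key_columns if there are many constraints to handle.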

How do I list all schemas in PostgreSQL?

Posted: 15 Apr 2013 12:59 PM PDT

When using PostgreSQL v9.1, how do I list all of the schemas using SQL?

I was expecting something along the lines of:

SELECT something FROM pg_blah;  
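Two common ways, both of which work on 9.1: query the system catalog directly, or use the information_schema view (which may only show schemas the current role has privileges on):

    SELECT nspname AS schema_name
    FROM pg_catalog.pg_namespace
    ORDER BY nspname;

    -- or
    SELECT schema_name
    FROM information_schema.schemata;

In psql, \dn gives the same list interactively.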

Multiple like conditions for same column?

Posted: 15 Apr 2013 11:17 AM PDT

How do I write a query for an Oracle database when I have several "like" values for the same column (in my example data, "name")?

My example data for table "address":

name like "John"  name like "Jon%"  name like "Jonathan"  name like "Frank"  

Is there something like IN() combined with LIKE in Oracle, or do I have to write a separate OR'ed condition for each value?
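There is no LIKE IN (...) construct in Oracle, so the plain form is a set of OR'ed LIKE conditions; REGEXP_LIKE (10g and later) can collapse the patterns into one expression. A sketch against the example data:

    SELECT *
    FROM address
    WHERE name LIKE 'John'
       OR name LIKE 'Jon%'
       OR name LIKE 'Jonathan'
       OR name LIKE 'Frank';

    -- or, as one regular expression
    SELECT *
    FROM address
    WHERE REGEXP_LIKE(name, '^(John|Jon.*|Jonathan|Frank)$');

The OR'ed LIKEs are generally friendlier to indexes than the regular expression.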

How to Return XML Node Ordinal, or delete node based on element value?

Posted: 15 Apr 2013 01:16 PM PDT

I have an XML document similar to this:

<Root>    <Sub>      <Record>        <Guid>aslkjflaksjflkasjfkljsd</Guid>      </Record>      <Record>         <Guid>opqiwuerl;kasdlfkjawop</Guid>      </Record>    </Sub>  </Root>  

I am replacing an entire <Record> node based on some criteria. The <Record> nodes contain a <Guid> that I can use to identify them (pretend those are valid GUIDs please!).

Knowing the GUID, I am returning that node's XML into a variable for further processing. Downstream, I need to be able to delete that particular node so that I can insert the modified version of the node back into the original document.

Is there a way to determine the ordinal, or another way to use a delete/replace method to remove a node based on an element's value?
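Rather than computing an ordinal, the node can be addressed directly by its <Guid> value with XML DML. A sketch assuming SQL Server 2008+ syntax and the document held in an xml variable; the GUID value is taken from the sample above:

    DECLARE @guid varchar(50) = 'aslkjflaksjflkasjfkljsd';
    DECLARE @doc  xml = '<Root><Sub>
        <Record><Guid>aslkjflaksjflkasjfkljsd</Guid></Record>
        <Record><Guid>opqiwuerl;kasdlfkjawop</Guid></Record>
    </Sub></Root>';

    -- remove the <Record> whose <Guid> matches, then insert the modified node back the same way
    SET @doc.modify('delete /Root/Sub/Record[Guid = sql:variable("@guid")]');
    SELECT @doc;

The same predicate works with .query() to extract the node and with an insert ... into expression in a later modify() call.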

SQL Procedure working when run manually, not running from sql server agent

Posted: 15 Apr 2013 06:31 PM PDT

I have a procedure that runs fine using the execute command in SSMS, however putting the same command in a job gives the following error.

line 9, character 9, unexpected end of input    

The code takes a very long XML string in UTF-8 encoding and puts it into a single nvarchar(max) cell. Then puts this string into a XML cell in a different table, allowing me to query the individual parts of the XML code using the nodes function. I cannot put the data directly into a nvarchar cell due to encoding differences.

I can't reproduce the string here as it is very very long.

I'm just looking for ideas really as to where it might be going wrong.

Here is what I know so far:

  1. The procedure runs without issue when executed manually

  2. I have checked permission issues, and that doesn't seem to be the problem. The agent runs under my own account and I am a sysadmin on the database

  3. I split the procedure into separate parts to locate exactly where the problem is occurring. Once again the separate procedures run fine when executed manually, but an error occurs when run through SQL Server Agent.

  4. When the query is run separately through SQL Server Agent it gives a slightly different error. This leads me to believe it is an encoding issue. However, I am getting the XML from a webpage and I can't change the encoding on the webpage.

    line 1, character 38, unable to switch the encoding

I know this is a long shot since you can't replicate the issue but if anyone could give an idea as to where to start looking for an answer, it would be greatly appreciated.
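Since the second error is "unable to switch the encoding", one thing worth trying is the usual workaround for UTF-8-declared XML: keep the string in varchar(max) rather than nvarchar(max), or strip the XML declaration before casting to xml. A sketch with a hypothetical payload, assuming SQL Server 2008+ syntax:

    DECLARE @raw nvarchar(max) = N'<?xml version="1.0" encoding="UTF-8"?><root>...</root>';  -- hypothetical content
    DECLARE @clean nvarchar(max) =
        CASE WHEN @raw LIKE '<?xml%'
             THEN STUFF(@raw, 1, CHARINDEX('?>', @raw) + 1, '')   -- drop the declaration
             ELSE @raw END;
    DECLARE @x xml = CAST(@clean AS xml);

That does not by itself explain why the behaviour differs between SSMS and the Agent, but it takes the encoding declaration out of the equation.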

Memcache implementation

Posted: 15 Apr 2013 01:29 PM PDT

I have a Wordpress site that uses WP Super Cache plugin in order to make my blog posts and pages stay as HTML so that less PHP/MySQL code is executed. However, there's a featured box that can't get cached. So, I configured it manually to bypass cache and stay dynamic (code is executed on every page load).

So, most part of the entire page is executed using HTML and a specific part (a featured bar box) uses a PHP/MySQL.

The code that runs on every page load is some simple <?php $query = mysql_query("SELECT * [... rest of my code here ...]"); ?> to select some data from my database, and some $fetch and <?php echo [...] ?> to print the results in the webpage, of course. But instead of calling the database every time to get these results, it would be much better to cache them. So, I have three options:

  1. Let WP Super Cache cache my entire page. But I can't do that, because the data shown inside the box that currently isn't being cached can change at any time (normally once per day). So I can't cache this box, and we have to discard option 1.
  2. Activate query_cache in my.cnf. However, this only caches MySQL results, not PHP. :(
  3. Then we have memcache (or APC; I don't know much about the difference). After all my explanations, here's the final question: is installing memcache on my server enough to get it working immediately, or do I have to change my code to make it work with memcache? Will it really help me with both PHP and MySQL for this specific part of the code I have to cache, or are its results similar to enabling query_cache? Or is using APC rather than memcache better in my case?

Thanks,
André.

Trying to allocate 3G for buffer pool in InnoDB fails

Posted: 15 Apr 2013 02:06 PM PDT

I'm trying to boost my InnoDB driver on my Windows 7 PC since each page request takes just about 1.7 seconds to load versus 0.002 seconds on my Ubuntu 12.10 workstation.

The setting I'm playing around with is the well-known innodb_buffer_pool_size line, which should increase performance a lot, and it did on my Ubuntu workstation; but my Windows 7 workstation only seems to be able to allocate 1 GB of RAM, even though the system currently has 16 GB.

If I try to exceed 4 GB I get [ERROR] innobase_buffer_pool_size can't be over 4GB on 32-bit systems, so I try to allocate 3 GB, but now my server won't start; the same goes for 2 GB. 1 GB works but only contributed about 0.1 s worth of speed increase, which isn't much.

Is there something more I have to account for on Windows to get better performance, and why can I not allocate more than 1 GB of RAM? (I get no errors when I try to allocate more than 1 GB; the server just won't start.)

SHOW TABLE STATUS very slow on InnoDB

Posted: 15 Apr 2013 11:09 AM PDT

Recently we've been migrating from MyISAM to InnoDB, and I understand that MyISAM keeps meta information against each table to track row counts and such. The database is about 1.8 GB with 1+ million records across 400 or so tables.

The problem comes when software like phpMyAdmin runs "SHOW TABLE STATUS FROM dbase;", which can take up to 50 seconds to execute. Is there a way to optimise this? (MySQL or MariaDB)

Thanks!
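One server-side setting that often dominates this: by default (before MySQL 5.6.6) InnoDB re-samples index statistics every time table metadata is queried, which makes SHOW TABLE STATUS expensive on databases with many tables. If the server supports it (MySQL 5.1+ with the InnoDB plugin, and MariaDB), a sketch of the change:

    SET GLOBAL innodb_stats_on_metadata = 0;   -- also set it in my.cnf so it survives restarts

Statistics are then refreshed only by ANALYZE TABLE and normal background updates, and metadata queries stop triggering the sampling.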

Does WITH SCHEMABINDING on a multi-statement TVF improve cardinality estimates?

Posted: 15 Apr 2013 08:59 PM PDT

Based on http://blogs.msdn.com/b/psssql/archive/2010/10/28/query-performance-and-multi-statement-table-valued-functions.aspx and other articles, SQL Server assumes that a multi-statement table-valued function returns one row. This causes the selection of a poor execution plan for the calling statement if the function actually returns many rows.

Does adding WITH SCHEMABINDING to the RETURNS clause of the CREATE FUNCTION result in a more correct cardinality estimate for the return value of the function?

If we assume that we are passing a UserId to this function and getting back a table of RecordId values that the user is allowed to access, and that some users are only allowed to see a few records and that some are allowed to see many or even all records, would either the function or the calling statements (or the procedures that include them) benefit from using FORCE RECOMPILE? Does the use of WITH SCHEMABINDING in the function change this answer?

I realize that I could figure this out by experimentation, but I am hoping that someone has already figured out the answer. A pointer to someplace where this is well documented would be helpful.

PostgreSQL newbie - how to create objects in a database

Posted: 15 Apr 2013 08:24 PM PDT

I've just finished installing PostgreSQL and pgadmin3 on Ubuntu. Using pgAdmin, I've been able to create a new user called 'test' and also a new database with 'test' as the owner. Now I'm trying to create tables using SQL statements, but I can't seem to figure out how to do that in pgAdmin. I don't want to create each column individually using the GUI. Is there any way I can run a SQL statement like:

create table test (
    id serial primary key,
    name varchar(64),
    handler varchar(16),
    descr varchar(255)   -- "desc" is a reserved word in PostgreSQL, so the column needs renaming or quoting
);

sqlplus command not working

Posted: 15 Apr 2013 08:59 PM PDT

I am trying to install Oracle Application Express on Windows 7, which is why I need SQL*Plus. However, the sqlplus command is not recognized in cmd.

I am new to Oracle. What can I do in order to run SQL*Plus ?

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 15 Apr 2013 12:02 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecast is usually done at a higher level in the hierarchy (Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id  __________________________________________  1   |   raw material        |   (null)  2   |   work in progress    |   1  3   |   end product 1       |   2  4   |   end product 2       |   2  

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is foreign key that references id of PRODUCT_HIERARCHY table,

How to store forecast? What would be a good basic schema for such a requirement?


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(sysdate, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(sysdate, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(sysdate, 6);

But this will give weekly buckets starting from today irrespective of the day. How can I convert them to Sunday to Saturday weeks in Oracle?

Please help designing this database structure.

(will be using Oracle 11g)
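On the Sunday-to-Saturday part of the question: TRUNC(d, 'IW') in Oracle returns the Monday of the ISO week, so shifting the date by one day before truncating gives Sunday-based weeks. A sketch built on the query above:

    SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start,   -- Sunday on or before the date
           TRUNC(delivery_date + 1, 'IW') + 5 AS week_end,     -- the following Saturday
           COUNT(*) AS total_orders
    FROM orders_table
    WHERE delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
    GROUP BY TRUNC(delivery_date + 1, 'IW')
    ORDER BY week_start;

A bucket number relative to today can then be added with ROW_NUMBER() OVER (ORDER BY week_start), provided every week in the range has at least one order.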

Query is slow for certain users

Posted: 15 Apr 2013 05:21 PM PDT

I have a couple of queries called from a C# .NET web application which are always fast for me (I am a local admin on the SQL Server), but for a group of users (a domain group with the required permissions) the query is incredibly slow, to the point that it times out in the application.

What would cause the exact same query to run differently for different users?

More info:

  • The query is inline SQL in the C# code, not a stored procedure
  • The app uses domain authentication and both the user and myself run the query through the app
  • Seems like the issue is different plans and one was cached so that is why it was different for different users. Something is affecting the cache because now the query is slow for me via the app and fast in SQL Server Management Studio.

DB2 Authentication TRUST_CLNTAUTH

Posted: 15 Apr 2013 04:02 PM PDT

I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Therefore the db2inst1 user does not require username/password authentication (it borrows them from the OS). I would like to change that and force a username/password to be requested every time a connection is initiated.

More specifically, this is what the authentication configuration looks like:

db2 get dbm cfg|grep -i auth

 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =    Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED   Database manager authentication        (AUTHENTICATION) = CLIENT   Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED   Cataloging allowed without authority   (CATALOG_NOAUTH) = NO   Trusted client authentication          (TRUST_CLNTAUTH) = SERVER   Bypass federated authentication            (FED_NOAUTH) = NO  

db2 connect to dbName

   Database Connection Information       Database server        = DB2/LINUXX8664 10.1.0     SQL authorization ID   = DB2INST1     Local database alias   = DBNAME  

db2 connect to dbName user db2inst1 using password

   SQL1639N  The database server was unable to perform authentication because      security-related database manager files on the server do not have the required      operating system permissions.  SQLSTATE=08001  

I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.

mysql duplicate entry error 1062 when restoring backup

Posted: 15 Apr 2013 08:11 PM PDT

Sorry, I've seen similar threads but I still couldn't find one addressing my issue; plus, I needed some more info on this.

Requirement: To create an exact replica 'db4' of an existing DB 'db3'.

Procedure followed:

  • mysqldump -uuser -ppass db3 > db3.sql (size is 6G)
  • mysql -uuser -ppass db4 < db3.sql (db4 was a newly created blank database)

The 2nd step throws in the error:

ERROR 1062 (23000) at line 5524: Duplicate entry '600806' for key 1"  

I ran the 2nd step again with --force. The restore completed but with 2 additional similar errors:

ERROR 1062 (23000) at line 6309: Duplicate entry '187694' for key 1      ERROR 1062 (23000) at line 6572: Duplicate entry '1567400' for key 1  

On completion, when I queried certain tables of the db4 database, I could see that records were missing.

Question:

  1. Does this indicate a corrupted/problematic db3 database?

  2. How to proceed to create a 'consistent/working' replica (db4) of db3?

  3. If (2) fails, how to possibly troubleshoot and find the reason behind why it occurs?

Thanks,

Upgrade production database during software development lifecycle

Posted: 15 Apr 2013 02:57 PM PDT

Background

Using Oracle, there are a number of tools that help with migrating and applying development changes into the production environment (such as Embarcadero Change Manager). These tools can be configured to perform any database DDL upgrade with little to no human intervention.

Problem

I have development and production servers running PostgreSQL 9.x. After the initial deployment of the database DDL to the production server, I will continue to make changes to the development database. These changes will include bug fixes to stored procedures, changes to tables, additional sequences, new views, more types, more tables, and so forth.

Question

What are the steps to upgrade/migrate the DDL for a production PostgreSQL database application in an automatic fashion (or nearly automatically) such that it has the new changes made in development?


Thank you!

Setting up DRBD on an active MySQL server

Posted: 15 Apr 2013 07:02 PM PDT

When it comes to setting up DRBD and MySQL, is the following possible?

  • Set up DRBD on an active MySQL server
  • Set up DRBD with no downtime allowed

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 15 Apr 2013 02:02 PM PDT

I have a transactions table with 600,000 records, and I need to list the counts for the dashboard on a financial-year basis. The table uses MyISAM. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, which takes more time because of the temporary table and the filesort. Is there any way to optimize the query to improve the query time?

  SELECT COUNT( * ) AS cnt,
         CASE WHEN MONTH( tran_date ) >= 3
              THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
              ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
         END AS financial_year
  FROM `transactions1`
  WHERE tran_date >= '2010-06-01'
  GROUP BY financial_year

  Showing rows 0 - 4 (5 total, Query took 1.2095 sec)
  id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
  1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort
  Keyname       Type   Unique  Packed  Field          Cardinality  Collation
  PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                       tran_ID        617162       A
  coupon_No     BTREE  No      No      coupon_No      617162       A
  account_type  BTREE  No      No      account_type   3            A
  prodCode      BTREE  No      No      prodCode       430          A
                                       tran_date      308581       A
  tran_date     BTREE  No      No      tran_date      205720       A
  cust_ID       BTREE  No      No      cust_ID        3265         A
                                       tran_date      308581       A
                                       account_type   308581       A
                                       points_earned  617162       A

Update:

I tried adding partitioning, but it did not help much compared with the non-partitioned table. Would replication help with reading this table? There will be more grouping based on dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is:

  SELECT SUM( count )
  FROM (
      SELECT COUNT( * ) AS count,
             CASE WHEN MONTH( tran_date ) >= 3
                  THEN CONCAT( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
                  ELSE CONCAT( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
             END AS format_date
      FROM transactions1
      GROUP BY tran_date
  ) AS s
  GROUP BY format_date

  Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
  id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
  1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
  2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

  SELECT COUNT( * ) AS count,
         CASE WHEN MONTH( tran_date ) >= 3
              THEN CONCAT( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
              ELSE CONCAT( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
         END AS format_date
  FROM transactions1
  GROUP BY tran_date

  Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

it takes far less time, so the extra cost comes from the SUM(count) over the derived table. Is there any other way to get the sum without the subquery in MySQL, or can the subquery be optimized so that it uses the index?
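
One option, sketched here only as an illustration, is to maintain a small pre-aggregated table of per-day counts and group that instead of the full table; the name daily_tran_counts is hypothetical and the refresh strategy (trigger, nightly job, etc.) is left open:

  -- One-off build of the per-day summary; refresh incrementally as new transactions arrive.
  CREATE TABLE daily_tran_counts (
      tran_day  DATE NOT NULL PRIMARY KEY,
      cnt       INT  NOT NULL
  );

  INSERT INTO daily_tran_counts (tran_day, cnt)
  SELECT DATE(tran_date), COUNT(*)
  FROM transactions1
  GROUP BY DATE(tran_date);

  -- The dashboard query now groups a few thousand rows instead of 600,000.
  SELECT SUM(cnt) AS cnt,
         CASE WHEN MONTH(tran_day) >= 3
              THEN CONCAT(YEAR(tran_day), '-', YEAR(tran_day) + 1)
              ELSE CONCAT(YEAR(tran_day) - 1, '-', YEAR(tran_day))
         END AS financial_year
  FROM daily_tran_counts
  WHERE tran_day >= '2010-06-01'
  GROUP BY financial_year;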

MySQL join for multiple child records in a single joined row

Posted: 15 Apr 2013 03:02 PM PDT

I have one master table (teacher) structured like

teacherId   Name        Class
1           Praveen     10
2           John        9

and a child table (student) structured like

studentId   teacherId   Name
1           1           David
2           1           Creg
3           2           Mike
4           2           Steve
5           2           Jim

How can I get a result set like

teacherId   teacher     studentId_1   student_1   studentId_2   student_2   studentId_3   student_3
1           Praveen     1             David       2             Creg        null          null
2           John        3             Mike        4             Steve       5             Jim

Thanks in advance..

Regards, Praveen
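
A true column-per-student pivot needs dynamic SQL in MySQL, but if one delimited column per teacher is acceptable, a GROUP_CONCAT sketch along these lines (the output format differs from the one requested above) comes close:

  SELECT t.teacherId,
         t.Name AS teacher,
         GROUP_CONCAT(CONCAT(s.studentId, ':', s.Name)
                      ORDER BY s.studentId SEPARATOR ', ') AS students
  FROM teacher t
  LEFT JOIN student s ON s.teacherId = t.teacherId
  GROUP BY t.teacherId, t.Name;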

Add a new Column and define its position in a table

Posted: 15 Apr 2013 03:59 PM PDT

I have Table A with the following columns:

TableA
--
Name
Tel
Email
Address

I want to add a new column (mobile) in between Tel & Email:

TableA
--
Name
Tel
Mobile
Email
Address

If I use

ALTER TABLE TableA
ADD COLUMN Mobile INT NOT NULL

the mobile column is added to the end of the table.

Is there a way to achieve this without dropping the table and moving data to a new table?
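
If this is MySQL (the question does not say which engine), the AFTER clause sets the position directly; in SQL Server and most other engines, column order can only be changed by rebuilding the table. A minimal sketch:

  ALTER TABLE TableA
    ADD COLUMN Mobile INT NOT NULL AFTER Tel;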

[SQL Server] Export data to flat file based on field once.



Export data to flat file based on field once.

Posted: 15 Apr 2013 11:26 AM PDT

Hi all,

I have an SSIS package that selects all the fields from a table and exports them to a flat file. One of the fields in the table, "Sent", defaults to 0. When the SSIS package runs, a query sets "Sent" to 1 before moving the data to the flat file.

My question is: how should I make the SSIS process not send the data again after the field has been set to 1? The process should move each row to the flat file only once, based on the "Sent" field. If the SSIS process runs again, it should either fail or run without moving any data to the file.

Please reply if anyone knows anything about this. Thanks.
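
A minimal sketch of the usual pattern, assuming the rows come from an OLE DB Source whose SQL command reads from a hypothetical dbo.MyTable: filter on Sent = 0 and flip the flag in the same statement, so a re-run finds nothing left to export.

  -- Marks the unsent rows and returns exactly those rows to the data flow;
  -- a second run returns an empty set, so nothing is exported twice.
  UPDATE dbo.MyTable
  SET    Sent = 1
  OUTPUT inserted.*
  WHERE  Sent = 0;

Alternatively, keep SELECT ... WHERE Sent = 0 as the source and run UPDATE dbo.MyTable SET Sent = 1 WHERE Sent = 0 in an Execute SQL Task after the data flow completes successfully.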

Date Intervals like YTD

Posted: 15 Apr 2013 04:28 AM PDT

My requirement is: we have a dropdown in our application with YTD, QTD and MTD.

YTD -- year to date.
QTD -- quarter to date.
MTD -- month to date.

Please suggest a query for retrieving the values for the item selected in the dropdown.
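
A minimal sketch of the date-range logic in T-SQL; dbo.Sales, its columns, and the @Interval parameter are hypothetical stand-ins for whatever the application actually passes in:

  DECLARE @Interval varchar(3) = 'YTD';          -- 'YTD', 'QTD' or 'MTD' from the dropdown
  DECLARE @Today    datetime   = GETDATE();
  DECLARE @Start    datetime   =
      CASE @Interval
          WHEN 'YTD' THEN DATEADD(year,    DATEDIFF(year,    0, @Today), 0)
          WHEN 'QTD' THEN DATEADD(quarter, DATEDIFF(quarter, 0, @Today), 0)
          WHEN 'MTD' THEN DATEADD(month,   DATEDIFF(month,   0, @Today), 0)
      END;

  SELECT SUM(Amount) AS Total
  FROM   dbo.Sales
  WHERE  OrderDate >= @Start
    AND  OrderDate <  DATEADD(day, DATEDIFF(day, 0, @Today) + 1, 0);  -- through the end of today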

Find client accounts missing a specific service

Posted: 15 Apr 2013 12:51 AM PDT

I am looking for the right way to find a missing service across all similar client accounts. We have six regions, each with multiple clients. Only one region has this specific service for all of its regional clients. The service is mandatory, so every client account in that region must have/show it. Each client account has a "Portfolio" containing all the offered services.

What I want is a query that finds all portfolios in Region 1 where this specific service is missing. Since all portfolios across the six regions are set up the same way, a simple WHERE NOT EXISTS on its own returns thousands of irrelevant records from the other regions that also lack the service.

The code I am trying to use is as follows:

SELECT dbo.ACCOUNT.PORTFOLIO
FROM dbo.ACCOUNT
     INNER JOIN dbo.SERVICE_JOIN ON dbo.ACCOUNT.PORT_ROW_ID = dbo.SERVICE_JOIN.PORTFOLIO_ID
     INNER JOIN dbo.SERVICES ON dbo.SERVICE_JOIN.SERVICE_ID = dbo.SERVICES.ROW_ID
     INNER JOIN dbo.CONTACT_JOIN ON dbo.SERVICE_JOIN.SERVICE_ID = dbo.CONTACT_JOIN.SERVICE_ID
     INNER JOIN dbo.CONTACTS ON dbo.CONTACT_JOIN.CONTACT_ID = dbo.CONTACTS.ROW_ID
     FULL OUTER JOIN dbo.NOTES ON dbo.SERVICES.SERVICE_TYPE = dbo.NOTES.SERVICE_TYPE
                              AND dbo.ACCOUNT.ACC_ROW_ID = dbo.NOTES.ACCOUNT_ID
WHERE EXISTS (SELECT dbo.ACCOUNT.PORTFOLIO FROM dbo.ACCOUNT WHERE dbo.CLIENT.REGION = 'R1')
WHERE NOT EXISTS (SELECT dbo.ACCOUNT.PORTFOLIO FROM dbo.ACCOUNT WHERE dbo.DM_SERVICES.SERVICE = 'MISSING_SERVICE')

My reasoning is that I am trying to find the portfolios where the service is missing while restricting the search to the one region, so we can add the service back in. I have included the full code with the joins across the tables that hold the combined account information, which is correct. My issue seems to begin after the WHERE clause.

The error message is:

"Msg 156, Level 15, State 1, Line 13
Incorrect syntax near the keyword 'WHERE'."

Line 13 is actually the WHERE NOT EXISTS statement.

Any tips and suggestions would be greatly appreciated.

James
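
A hedged sketch of the usual fix: a query can have only one WHERE clause, so the two tests must be combined with AND, and each subquery needs to be correlated to the outer row. Table and column names follow the question where possible; the join to dbo.CLIENT is an assumption, since the posted query never brings that table in:

  SELECT a.PORTFOLIO
  FROM dbo.ACCOUNT AS a
       INNER JOIN dbo.CLIENT AS c            -- assumed join; adjust the key to your schema
               ON c.CLIENT_ID = a.CLIENT_ID
  WHERE c.REGION = 'R1'
    AND NOT EXISTS (SELECT 1
                    FROM dbo.SERVICE_JOIN AS sj
                         INNER JOIN dbo.SERVICES AS s
                                 ON s.ROW_ID = sj.SERVICE_ID
                    WHERE sj.PORTFOLIO_ID = a.PORT_ROW_ID
                      AND s.SERVICE = 'MISSING_SERVICE');  -- SERVICE column name assumed from the question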

SQL SSRS 2008

Posted: 17 Nov 2008 05:01 AM PST

Where are the Report Properties for setting the default Start and End Date and the Calendar in the report? Thank you, from a newbie.

[SQL Server] Advanced SQL Server 2008 Extended Events with Examples


