Tuesday, September 17, 2013

[how to] How to know the user doing a delete action on an audit table, when using a shared login?

How to know the user doing a delete action on an audit table, when using a shared login?

Posted: 17 Sep 2013 07:58 PM PDT

Background information:

  • I'm creating a collection of audit tables to keep track of updates and deletes to a set of data tables for my app.
  • Audit records are created via triggers.
  • DML in my app's database will generally come from a login that a service uses to get into the database. Because of this, I think the result from SYSTEM_USER will always be the same when called in a trigger.
  • My app does not store user data currently, though a string UserId is passed to it each time DML is to be done (DML is done exclusively in stored procedures).

The problem I ran into is that when a user deletes a record, I want to know who did it. Because the delete is done by the shared login, I don't want to see that every action was done by the service; I want to see which user did it. This isn't an issue on an update, because we have ModifiedBy columns that are updated via the UserId sent in on updates.

The question is: Is there a way to set the SYSTEM_USER or otherwise get the user information into the trigger when a delete is run?

The "best" idea I have right now, though I'm not sure if it is a good idea yet, is that in the service I check to see if the current UserId is in the database as a user, and if not create a user object for them. Then run stored procedures with EXECUTE AS User = @UserId. Then when DML is done in the stored procedure and the trigger fires, SYSTEM_USER should return the user from the EXECUTE AS.

How to store column values on index in PostgreSQL?

Posted: 17 Sep 2013 06:05 PM PDT

As described here: https://devcenter.heroku.com/articles/postgresql-indexes#managing-and-maintaining-indexes, indexes do not store the entire row data themselves. Is it possible to store the row with the index for fast retrieval? I don't care about space.

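For reference: PostgreSQL (9.2 and later) can answer a query entirely from an index via an index-only scan, when the index contains every column the query touches and the visibility map is reasonably current. A minimal sketch with a hypothetical table:

CREATE TABLE items (lookup_key int, payload text);

-- A multicolumn index covering both the filter column and the selected column:
CREATE INDEX items_covering_idx ON items (lookup_key, payload);

-- After a VACUUM, this can be served by an index-only scan, skipping the heap:
SELECT payload FROM items WHERE lookup_key = 42;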

What server role needs to be configured for a user to enable SQL Server Agent?

Posted: 17 Sep 2013 03:13 PM PDT

I'm on SQL Server 2008.

I want to enable SQL Server Agent for a particular user who has only read access to the database. I don't want to grant the sysadmin role.

Is there any way I can do this?
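SQL Server Agent access is controlled by three fixed database roles in msdb (SQLAgentUserRole, SQLAgentReaderRole, SQLAgentOperatorRole), each granting progressively more rights. A minimal sketch, with a hypothetical login name:

USE msdb;
-- Map the existing login into msdb, then grant the least-privileged Agent role:
CREATE USER [ReadOnlyUser] FOR LOGIN [ReadOnlyUser];
EXEC sp_addrolemember N'SQLAgentUserRole', N'ReadOnlyUser';

SQLAgentUserRole members can only see and operate jobs they own; use SQLAgentReaderRole or SQLAgentOperatorRole if the user needs to view or run other jobs.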

Parsing the value and copying it to another column in postgresql

Posted: 17 Sep 2013 02:45 PM PDT

Column_1 (varchar) has values in a format like 1024 MB, 2048 MB, etc. There is also Column_2 (int). I would like to parse the value in Column_1 and update Column_2 with values like 1024, 2048. I am using PostgreSQL. How do I do this? I am fairly new to databases. Any help/suggestions appreciated.
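A minimal sketch of one way to do this, assuming a hypothetical table name and that every value follows the "<number> MB" pattern:

-- split_part takes the text before the first space; the WHERE guard skips
-- rows that don't match the expected pattern and would break the cast
UPDATE my_table
SET column_2 = split_part(trim(column_1), ' ', 1)::int
WHERE trim(column_1) ~ '^[0-9]+ MB$';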

Percona MySQL 5.5.33 not reading my.cnf?

Posted: 17 Sep 2013 01:50 PM PDT

At some point Percona MySQL on CentOS 6.4 seems to have stopped reading my.cnf, possibly after MySQL updated to 5.5.33 a few days ago.

Selection from /etc/my.cnf:

[mysqld]

# GENERAL #
user                    = mysql
default-storage-engine  = InnoDB
socket                  = /var/lib/mysql/mysql.sock
pid_file                = /var/run/mysql.pid
symbolic-links          = 0

# CHARACTER SET AND COLLATION #
collation-server        = utf8_unicode_ci
character-set-server    = utf8
init-connect            = 'SET NAMES utf8'
init-connect            = 'SET collation_connection = utf8_unicode_ci'

# DATA STORAGE AND LOGGING #
datadir                 = /var/lib/mysql
tmpdir                  = /tmp
slow_query_log_file     = mysql-slow.log
slow_query_log          = 1
long_query_time         = 8
log_error               = mysql-error.log

# CACHES AND LIMITS #
max_connections         = 250
max_connect_errors      = 10000
open_files_limit        = 20000
back_log                = 100
wait_timeout            = 10800
interactive_timeout     = 600
max_allowed_packet      = 256M
tmp_table_size          = 512M
max_heap_table_size     = 64M
query_cache_size        = 32M
thread_cache_size       = 64

Some output from 'mysqld --help --verbose'

log-error                                         mysql-error.log
slow-query-log                                    TRUE
slow-query-log-file                               mysql-slow.log
max-connections                                   250
max-heap-table-size                               67108864

Ok that seems good, matches my config. How about logging into mysql and doing some SHOW VARIABLES:

log_error           /var/lib/mysql/hostname.err
log_slow_queries    OFF
slow_query_log_file /var/lib/mysql/hostname-slow.log
max_connections     151
max_heap_table_size 16777216

Huh, well that ain't right. Slow log is off, error log has the wrong name, connection and heap sizes are wrong, and the server collation is wrong as well.

Processes running

root     23533     1  0 12:32 pts/3    00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/hostname.pid
mysql    23624 23533 91 12:32 pts/3    01:05:32 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/hostname.err --pid-file=/var/lib/mysql/hostname.pid

I see right off the bat that the error log listed in the process has the wrong name. I've tried manually restarting MySQL from /etc/init.d, via 'service mysql restart', and via restarting the entire server. Nothing seems to get MySQL to read the conf.

Maybe something is breaking when mysqld_safe spins up a mysql instance? No problems reported in the error log when the server starts. Any ideas?

How does one query a SQL Server Linked Server when SQL Server Authentication is specified?

Posted: 17 Sep 2013 01:54 PM PDT

I have established a linked server and am attempting to query a database for which I have a SQL Server username and password; however, I can't find the syntax for specifying the userid/password in the query, and I get the following error:

Msg 18452, Level 14, State 1, Line 1 Login failed. The login is from an untrusted domain and cannot be used with Windows authentication.

I know from http://msdn.microsoft.com/en-us/library/ms175537(v=sql.105).aspx that windows authentication is preferred, but it is a vendor database and not up to me.

Also from that link: "you must set up a local login mapping from a login that uses Windows Authentication to a specific login on the linked server that is not a Windows authenticated login. In this case, the remote login uses SQL Server Authentication if the linked server is an instance of SQL Server."
I am not a Windows admin on the local machine and not any kind of admin on the linked machine. Is there no other way?
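For reference, there is no per-query syntax for supplying credentials to a linked server; the mapping is stored with the linked server definition. A minimal sketch of what an administrator would run (server name and credentials are placeholders):

-- Map every local login to one SQL-authenticated remote login:
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'MyLinkedServer',
    @useself     = 'FALSE',
    @locallogin  = NULL,
    @rmtuser     = N'vendor_user',
    @rmtpassword = N'vendor_password';

-- Queries then use four-part names with no credentials in the statement:
SELECT * FROM MyLinkedServer.VendorDb.dbo.SomeTable;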

Why does MySQL force the tradeoff between swallowing errors and leaking commits in transactions?

Posted: 17 Sep 2013 01:50 PM PDT

Referring specifically to the choice of whether to use DECLARE EXIT HANDLER FOR SQLEXCEPTION. If you use it, you swallow errors, and there doesn't seem to be any way to access them programmatically, at least up to 5.5. If you don't use it, you leak transactions, and drivers such as JDBC auto-commit them instead of aborting them.

What gives? This sounds... awful.
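One escape from the dilemma on 5.5, sketched with a hypothetical procedure: have the handler roll back and then RESIGNAL, so the transaction never leaks and the original error still reaches the client/driver.

DELIMITER //
CREATE PROCEDURE safe_tx()
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;   -- don't leak the open transaction
        RESIGNAL;   -- re-raise the original error to the caller (MySQL 5.5+)
    END;
    START TRANSACTION;
    -- ... DML statements here ...
    COMMIT;
END//
DELIMITER ;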

Booted by MySQL Error (2003) 10060 midway through work

Posted: 17 Sep 2013 12:28 PM PDT

I was working on some queries and then my HeidiSQL froze. I tried to restart the connection and got good old MySQL Error (2003) (10060). It worked just fine before that.

I haven't made any firewall changes, and I checked the "white list" of IPs on AWS; it was still fine. I have encountered this error code before, but never in the middle of work with no changes.

Thoughts?

Edit 1

I turned off the firewall and still get the same error.

Edit 2

It works all of a sudden, but I would like to know why such a thing happened. Connection issues?

Repository Creation Utility for Oracle Database 12c

Posted: 17 Sep 2013 02:25 PM PDT

I need the Repository Creation Utility for the Oracle Database 12c.

I understand that since 12c the RCU is part of the Oracle Fusion Middleware Infrastructure software. This brings me to the question: where can I download the Oracle Fusion Middleware Infrastructure software? (And yes, I really just need the link to the download site. And no, Google wasn't much help navigating the kafkaesque structure of the Oracle website/documentation.)

Understanding two NICs requirement for SQL fail over cluster

Posted: 17 Sep 2013 11:27 AM PDT

I am trying to build an SQL Server 2013 failover cluster. I have two DB machines. I understand that both machines need 2 NICs each. In my organization the assigned IP scheme is something like 192.168.1.X.

So I want to know: if I assign one IP to each NIC, will that be enough? For example, assigning 192.168.1.50, 192.168.1.51, 192.168.1.52, and 192.168.1.53?

Or does one NIC in each machine need to be on a private network scheme that lets the two machines communicate directly?

Select the latest changed row in a table

Posted: 17 Sep 2013 04:20 PM PDT

I have a MySQL table test with the fields id, changedate, and name.

Some names have multiple changedates.

I can select all rows with

SELECT id, changedate, name FROM `test` ORDER BY changedate;  

How can I select only the newest rows of that table for each name?
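A common approach, sketched here using the question's own table and columns (note that ties on changedate will return multiple rows per name):

SELECT t.id, t.changedate, t.name
FROM `test` t
INNER JOIN (
    SELECT name, MAX(changedate) AS max_changedate
    FROM `test`
    GROUP BY name
) latest ON latest.name = t.name
        AND latest.max_changedate = t.changedate;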

How to design DB for statistics service?

Posted: 17 Sep 2013 01:42 PM PDT

I'm going to create a statistics service on RoR. I have 3 models: User, Project, and Channel. A User owns Projects, and Projects have many Channels. I'm currently designing all these models using MySQL.

The goal of the service is to show the user how different parameters of each Channel vary over time. E.g.: the channel Google Adwords has 1000 shows/day and 100 clicks/day, and 2 people who came from Google Adwords bought something that day. I could have just stored all the information in columns, like: Channel "Google Adwords", shows: 1000, clicks: 100, buyings: 2. But I need a distribution by day. Please give me advice on how to design such a database. Maybe I should use another solution, maybe NoSQL?

Update:

Input: the user daily fills in data about ads from different sources (I call them channels, e.g. Google Adwords): number of impressions and clicks (later, data will be imported automatically using an API). Output: various stats such as graphs and the CTR of different channels, all channels, and so on.
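A minimal sketch of a conventional relational answer, using hypothetical table and column names: one row per channel per day, which keeps the daily distribution queryable with plain GROUP BY and makes derived stats like CTR trivial.

CREATE TABLE channel_daily_stats (
    channel_id  INT  NOT NULL,
    stat_date   DATE NOT NULL,
    impressions INT  NOT NULL DEFAULT 0,
    clicks      INT  NOT NULL DEFAULT 0,
    purchases   INT  NOT NULL DEFAULT 0,
    PRIMARY KEY (channel_id, stat_date)
) ENGINE=InnoDB;

-- Example: CTR per day for one channel
SELECT stat_date, clicks / impressions AS ctr
FROM channel_daily_stats
WHERE channel_id = 1 AND impressions > 0
ORDER BY stat_date;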

SQL Server: OPENROWSET problem

Posted: 17 Sep 2013 12:18 PM PDT

I'm looking for a solution to use OPENROWSET with variables, without an EXECUTE, so that I can use it in a table function.

I know that OPENROWSET doesn't accept '+', and that I can't call exec() to create a dynamic call.

I'm looking for something like

SELECT *
FROM OPENROWSET(
        'SQLOLEDB',
        'DRIVER={SQL Server};SERVER=(local);Trusted_Connection=yes',
        @sql
     )
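For reference: OPENROWSET only accepts string literals, and table-valued functions cannot contain dynamic SQL, so this combination is not possible inside a function. The usual fallback, sketched here, is a stored procedure that builds the call dynamically (the inner query is hypothetical):

DECLARE @sql nvarchar(max) = N'SELECT 1 AS x';
DECLARE @cmd nvarchar(max) =
    N'SELECT * FROM OPENROWSET(''SQLOLEDB'',
        ''DRIVER={SQL Server};SERVER=(local);Trusted_Connection=yes'',
        ''' + REPLACE(@sql, N'''', N'''''') + N''')';
EXEC sp_executesql @cmd;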

SQL Server 2012 x64 - cannot safely allocate more than 50% RAM

Posted: 17 Sep 2013 02:37 PM PDT

We're running SQL Server 2012 (CU4) on a 12-core, 2-socket, 64-bit NUMA server. The OS is Windows Server 2008 R2 x64.

Whenever we allocate more than 50% of the physical RAM to SQL server, the machine becomes unstable or unresponsive.

The symptoms are typical of OS memory starvation - i.e. processes fail to start, GUI objects fail to render, applications misbehave, remote desktop sessions become unresponsive, etc.

We've seen this on 2 different machines - one with 198GB RAM, the other with 256GB RAM. As soon as SQL Server is given more than 50% of the respective physical total, the symptoms appear.

Has anyone else seen this behavior?

After RAM upgrade on 32-bit Oracle server, ORA-04031: unable to allocate x bytes of shared memory

Posted: 17 Sep 2013 03:06 PM PDT

We are in the process of upgrading our Oracle database server at work, from SUSE Linux Enterprise Server 10.4 / Oracle 10g / 4GB RAM (Intel 32-bit) to Scientific Linux 6.4 / Oracle 12c / 8GB RAM (Intel 64-bit).

As the first step, the additional RAM has been installed, but due to an application-related issue we had to postpone the rest of the planned upgrade. To clarify: the server is still running Oracle 10g and SLES 10.4; no change has been made on the software side.

Since the installation of the additional RAM, however, the only database instance on that machine has run out of shared memory (with the error in the title) two afternoons in a row. The depletion was severe enough that the only way to recover was a SHUTDOWN ABORT.

No configuration change has been made, e.g. to the memory settings of the database:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 1632M
sga_target                           big integer 1632M
pga_aggregate_target                 big integer 384M
db_16k_cache_size                    big integer 0
db_2k_cache_size                     big integer 0
db_32k_cache_size                    big integer 0
db_4k_cache_size                     big integer 0
db_8k_cache_size                     big integer 0
db_cache_advice                      string      ON
db_cache_size                        big integer 960M

It seems that the extra RAM actually causes the Oracle instance to have less shared memory available to itself than before, but how exactly is this happening?

SQL Server 2005 - Query optimization for fetching large number of rows from table with 750 million rows

Posted: 17 Sep 2013 04:40 PM PDT

Brief on application:

This is an audio fingerprinting application, being developed in Java with a Microsoft SQL Server 2005 database.

I have one application that creates fingerprints of original songs and puts them in the database. To store fingerprints I have this table:

CREATE TABLE [dbo].[fp_core](
    [hashkey] [bigint] NOT NULL,
    [note_id] [int] NOT NULL,
    [timeoffset] [int] NOT NULL
) ON [PRIMARY]

The application processes a song and takes 100 samples per second, so around 15,000 samples for a complete song. These sample values are stored in the database, one row per sample, as {HASHKEY, NOTE_ID, TIMEOFFSET}. For the fingerprint of a complete song I may have around 15,000 rows in the fp_core table. I am planning to put fingerprints of 50,000 songs in the database, so around 750 million rows will be in fp_core.

I have another application that processes recordings and detects the songs played in them. The process is: create a set of HASHKEYs from the recorded audio, the same way as when fingerprinting an original song. A recording will generate around 20,000-30,000 HASHKEYs. The application then retrieves the rows from fp_core that match the HASHKEYs generated from the recording.

To retrieve that data, I first fill all of the recording's HASHKEYs into one more table:

CREATE TABLE [dbo].[fp_core_keys](
    [hashkey] [bigint] NOT NULL
) ON [PRIMARY]

Then I join the two tables to retrieve all matching rows. The query is:

select fp.hashkey, fp.note_id, fp.timeoffset
from dbo.fp_core fp
INNER JOIN dbo.fp_core_keys keys ON fp.hashkey = keys.hashkey

I have the following indexes:

CREATE CLUSTERED INDEX [index_fp_core] ON [dbo].[fp_core]
(
    [hashkey] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]

CREATE UNIQUE CLUSTERED INDEX [IX_fp_core_keys] ON [dbo].[fp_core_keys]
(
    [hashkey] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]

Problem:

Retrieving data using the above query is very slow, taking around 40 seconds.

Right now, here are the stats:

Query:

select count(hashkey) from fp_core
go
select count(distinct(hashkey)) from fp_core

Result:

57177764
13675633

Plan:

(screenshot of the query execution plan omitted)

Can anybody help me?

Choose security group for restore-db-instance-to-point-in-time

Posted: 17 Sep 2013 12:26 PM PDT

When using the AWS CLI restore-db-instance-to-point-in-time command, I can't figure out how to set the security group. The doc on this page says:

The target database is created from the source database with the same configuration as the original database except that the DB instance is created with the default DB security group.

Is it possible to override this parameter to use the same security group as the original instance?

Sqlcmd command strips $ out of query variable

Posted: 17 Sep 2013 02:46 PM PDT

What I am trying to do

Restore a database from backup using sqlcmd. Sqlcmd is being called by PowerShell.

What is the issue

The server folder which contains the database backups has a $ in its name, i.e. Hostname$InstanceName. When I try to restore any database, I get an error and notice it is trying to restore from \\networkshare\Hostname\DatabaseName\Database.bak

You'll notice that everything after the $ got stripped out. I believe this is due to sqlcmd's support for variables (the -v parameter). I have tried the -x parameter (disable variable substitution) with no success.

This process works fine with folders that do not have $ in their names.

Does anyone know what I can try from here, or can you see if I'm doing something wrong?

Thanks

Sample code

$Source_Server = "Hostname`$InstanceName"
$FQFile_DatabaseBackup = $Network_Share + "\" + $Source_Server + "\" + $Source_Database + "\Database.bak"
$Query_RestoreDatabase = "RESTORE DATABASE [$Source_Database] FROM DISK = '$FQFile_DatabaseBackup' WITH RECOVERY, REPLACE"
Invoke-Expression -Command "sqlcmd -x -S `"$Destination_Server`" -d `"master`" -E -Q `"$Query_RestoreDatabase`""

Looking for help on a database design to model an availability problem [closed]

Posted: 17 Sep 2013 11:29 AM PDT

I need to store and query information about the availability of cars in a car-sharing community, in which users can rent cars provided by other users.

The data which is relevant for this query will be:

  • general availability of a car - set by the car owner
  • rental contracts with a specific or recurring date/time - which reduce the availability

This query should be possible:

get all cars available each Tuesday to Thursday from 10am to 6pm from 2013-11-01 until 2013-11-20

Any ideas on how to store the information about when a car is or is not available, so that I can query for all available cars?

The query should have as little negative performance impact from difficult/large queries or large joins as possible, because large data sets (1,000,000 cars and 50,000,000 or more booking contracts per year) are expected.
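One plausible starting point (a sketch only; all names are hypothetical, and the recurring-contract case is simplified to weekly windows): store the owner-defined availability as weekly time windows and the bookings as concrete intervals, then answer the example query with a window match plus a NOT EXISTS over overlapping bookings in the date range.

-- weekly availability windows set by the owner
CREATE TABLE car_availability (
    car_id      INT      NOT NULL,
    day_of_week SMALLINT NOT NULL,   -- 0 = Sunday ... 6 = Saturday
    start_time  TIME     NOT NULL,
    end_time    TIME     NOT NULL,
    PRIMARY KEY (car_id, day_of_week, start_time)
);

-- concrete reservations that subtract from availability
CREATE TABLE booking (
    booking_id  INT PRIMARY KEY,
    car_id      INT       NOT NULL,
    starts_at   TIMESTAMP NOT NULL,
    ends_at     TIMESTAMP NOT NULL
);
CREATE INDEX booking_car_time ON booking (car_id, starts_at, ends_at);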

Tips on improving/refactoring slow query?

Posted: 17 Sep 2013 04:26 PM PDT

Let's play lotto!

Ok. I have a numbers table with 80 rows (numbers 1-80):

create table dbo.numbers (knum tinyint identity primary key);

I have a table with the drawid, and each number drawn.

create table dbo.draws (drawid int, drawnumber tinyint);  

Now, given that the draws table may have data like:

drawid   drawnumber
1        10
1        36
1        54
1        75
2        9
2        45
2        46
2        72

I want to find out the last time every possible three-number permutation occurred.

I'm using this:

declare @curdraw int
select @curdraw = max(drawid) - 100 from draws;

select TOP 10 K1, K2, K3, @curdraw - max(d1.drawid)
from THREES
inner join DrawNumbers d1 WITH (NOLOCK) ON K1 = D1.DRAWNUMBER
INNER JOIN DRAWNUMBERS D2 WITH (NOLOCK) ON K2 = D2.DRAWNUMBER AND D1.DRAWID = D2.DRAWID
INNER JOIN DRAWNUMBERS D3 WITH (NOLOCK) ON K3 = D3.DRAWNUMBER AND D1.DRAWID = D3.DRAWID
WHERE D1.DRAWID < @CURDRAW
GROUP BY K1, K2, K3
ORDER BY @curdraw - max(d1.drawid) DESC

(Oh, and Threes is a table with all 492,000+ three-number permutations of 1-80.)

Is there a better way to do what I'm trying to do here? This particular query is just very slow and I'm sure someone with better math/grouping skills could do better.


Answers to comments - clarifications:

  • It's for a keno game. Every draw has 20 numbers out of the 80. I just showed 4 each as an example.

  • Two years of accumulated draws, at 254 draws a day with 20 numbers per draw... the DrawNumbers table is currently 3.7 million rows. Also, what I'm looking for specifically is: "out of every 3-number combination possible, can you give me a sorted list of how long it has been since each one hit, ordered by that length".

  • The top 10 is merely because I don't need to see all 490K results.

The primary keys:

USE [OKD]
GO

ALTER TABLE [dbo].[DrawNumbers]
    ADD CONSTRAINT [PK_DrawNumbers]
    PRIMARY KEY CLUSTERED
    ( [DrawID] ASC, [DrawNumber] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
          SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
          ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON
    ) ON [PRIMARY]
GO

ALTER TABLE [dbo].[threes]
    ADD PRIMARY KEY CLUSTERED
    ( [THREEID] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
          SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
          ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON
    ) ON [PRIMARY]
GO

SQL Server Designers, Failed Saves, and Generated Scripts

Posted: 17 Sep 2013 03:56 PM PDT

I am a big fan of the simple diagramming tool that comes with SSMS, and I use it frequently. When I save changes to the model, I have it configured to automatically generate the change scripts that go along with the save. I then save (and source-control) the resulting change script. This works great and is an important piece of the process my team(s) use.

What occasionally happens is that a save fails, and I still get the option to save my change script. I then fix the problem and save again (which results in another change script).

I'm never clear what I need to do at this point to maintain a consistent set of change scripts. There seems to be overlap between the two scripts (the failed and the successful), but they are not identical.

If I want to continue to use this feature, what should I be doing with the resulting script as soon as I get a failed save of the model?

The InnoDB log sequence number is in the future

Posted: 17 Sep 2013 07:21 PM PDT

Our site was getting intermittent database errors, so I asked my webhost to check what was happening. After some inspection they found that the database had a few issues and tried repairing it. Finally I got the following message from them:

I have attempted every repair on the InnoDB database and we are still getting the InnoDB log sequence number is in the future. At this point to get the ibdata and the iblogfile to match up once again we will need to restore the MySQL directory (which includes databases) from the backup we have on the server. The process should not take to long but there will be some downtime associated with a restore like this. If this is not the best time to restore the MySQL directory I can schedule this for a different time. Please let me know how you would like to proceed with this.

Can someone tell me what would be the best way to address this issue? I really don't want to lose any data, and I want the DB to be repaired.

PS: If you need more information, please let me know and I'll get it from our web hosts.

Would really appreciate your help.

How to verify a MySQL backup file

Posted: 17 Sep 2013 06:21 PM PDT

Is there any tool to verify a backup taken with mysqldump without manually restoring it?

I have to take a full backup daily, so it is difficult for me to verify the backups by restoring them every day; that takes time and temporary disk space.

Please suggest how I can verify my backups online.

After 5 years of working, SQL Server 2k5 Table now locking without any changes made to DB

Posted: 17 Sep 2013 04:12 PM PDT

I have a table: OrderDetails.

For 5 years, only the odd deadlock occurred, and when it did, it happened on the indexes of the OrderDetails table. Now, all of a sudden, deadlocks are everywhere - happening every minute. We have not made any changes to the application or the database; the indexes are rebuilt every week.

The problem is that the OrderDetails table now gets locked on occasion, and all hell breaks loose.

Could someone please look at the logs below and see if they spot anything? How can I get rid of the object lock on the whole OrderDetails table? Why would the table lock, when in the past it was only the indexes that caused deadlocks? Does this mean that the indexes failed to find the appropriate records and now the whole table is locked?

It is an UPDATE and a SELECT that are deadlocking here, and the rows being locked are not related - they don't have the same reference number or OrderID.

spid18s,Unknown,waiter id=process91bb58 mode=X requestType=convert
spid18s,Unknown,waiter id=process9a9018 mode=S requestType=convert
spid18s,Unknown,waiter-list
spid18s,Unknown,owner id=process9a9018 mode=IS
spid18s,Unknown,owner id=process91bb58 mode=IX
spid18s,Unknown,owner-list
spid18s,Unknown,objectlock lockPartition=0 objid=1194487334 subresource=FULL dbid=5 objectname=CompZ.dbo.OrderDetails id=lock32163640 mode=IX associatedObjectId=1194487334
spid18s,Unknown,resource-list
spid18s,Unknown,Proc [Database Id = 5 Object Id = 711009614]
spid18s,Unknown,inputbuf
spid18s,Unknown,OrderNoRef = @OrderNoRef and ClientID = @ClientID and OrderNo <> @OrderNo
spid18s,Unknown,where
spid18s,Unknown,select Distinct OrderNoRef from OrderDetails
spid18s,Unknown,frame procname=CompZ.dbo.SP_Duplicate_Ref line=12 stmtstart=530 sqlhandle=0x030005004e25612a5f3d2a0194a100000100000000000000
spid18s,Unknown,executionStack
spid18s,Unknown,process id=process9a9018 taskpriority=0 logused=0 waitresource=OBJECT: 5:1194487334:0 waittime=8640 ownerId=17099323 transactionname=SELECT lasttranstarted=2013-06-11T14:34:34.230 XDES=0x2567fb58 lockMode=S schedulerid=2 kpid=15548 status=suspended spid=76 sbid=0 ecid=0 priority=0 transcount=0 lastbatchstarted=2013-06-11T14:34:26.570 lastbatchcompleted=2013-06-11T14:34:26.570 clientapp=.Net SqlClient Data Provider hostname=WEBSVR hostpid=16664 loginname=*** isolationlevel=read committed (2) xactid=17099323 currentdb=5 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056
spid18s,Unknown,Proc [Database Id = 5 Object Id = 562101043]
spid18s,Unknown,inputbuf
spid18s,Unknown,Update OrderDetails set OrderState = 15 where OrderNo = @OrderNo And OrderState in (1,2,3)
spid18s,Unknown,frame procname=CompZ.dbo.SP_Insert_OrderDet line=59 stmtstart=1726 stmtend=1890 sqlhandle=0x0300050033fb8021a4e1360011a100000100000000000000
spid18s,Unknown,executionStack
spid18s,Unknown,process id=process91bb58 taskpriority=0 logused=0 waitresource=OBJECT: 5:1194487334:0 waittime=8609 ownerId=17099322 transactionname=UPDATE lasttranstarted=2013-06-11T14:34:34.230 XDES=0x3a2844e0 lockMode=X schedulerid=1 kpid=23896 status=suspended spid=63 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2013-06-11T14:34:25.347 lastbatchcompleted=2013-06-11T14:34:25.327 clientapp=.Net SqlClient Data Provider hostname=WEBSVR hostpid=16664 loginname=*** isolationlevel=read committed (2) xactid=17099322 currentdb=5 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056
spid18s,Unknown,process-list
spid18s,Unknown,deadlock victim=process9a9018
spid18s,Unknown,deadlock-list

One of the indexes had its page and row locks removed (this was not documented, so I did not know of it - I apologize for stating in the title that nothing had changed). Is it correct to say that if both row and page locks are disabled for an index, SQL Server will then lock the whole table that uses that index? (Yes: if row and page locks are turned off, only the table lock is available.)

Hierarchical Structure

Posted: 17 Sep 2013 05:21 PM PDT

Every single user has, say, 3 GROUP_As, 10 GROUP_Bs per GROUP_A, and 20 GROUP_Cs per GROUP_B. And each of the 20 GROUP_Cs involves lots of inserts/deletes...

I'm not an expert. I've done research, but it's all theoretical at this point, of course, and I certainly don't have hands-on experience with the implementation. I think my options are something like 'adjacency lists' or 'nested sets'?

Any guidance into the right direction would be very much appreciated!
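For what it's worth, with a fixed three-level hierarchy and heavy insert/delete traffic at the GROUP_C level, an adjacency list is usually the simpler fit: nested sets make subtree reads cheap but require renumbering on every insert/delete. A minimal adjacency-list sketch with hypothetical names:

CREATE TABLE group_node (
    id        INT PRIMARY KEY,
    user_id   INT NOT NULL,
    parent_id INT NULL,          -- NULL for top-level GROUP_A rows
    depth     TINYINT NOT NULL,  -- 1 = GROUP_A, 2 = GROUP_B, 3 = GROUP_C
    FOREIGN KEY (parent_id) REFERENCES group_node (id)
);

-- With only three fixed levels, a whole subtree is at most two self-joins:
SELECT c.*
FROM group_node a
JOIN group_node b ON b.parent_id = a.id
JOIN group_node c ON c.parent_id = b.id
WHERE a.id = 42;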

Why does that query cause lock wait timeouts?

Posted: 17 Sep 2013 02:21 PM PDT

From time to time, I find a lot of these errors in my PHP error log:

MYSQL.1213: Deadlock found when trying to get lock; try restarting transactionSQL  

The problem persists for about 2 or 3 minutes each time. Thanks to Stack Overflow, the reason was quite easy to find:

------------------------
LATEST DETECTED DEADLOCK
------------------------
130320 15:53:37
*** (1) TRANSACTION:
TRANSACTION 0 83395751, ACTIVE 35 sec, process no 26405, OS thread id 140507872417536 starting index read
mysql tables in use 3, locked 3
LOCK WAIT 3 lock struct(s), heap size 1216, 2 row lock(s)
MySQL thread id 1163191, query id 199629038 localhost sosci Updating
UPDATE `database`.`table` SET `invalidate`='2013-03-21 03:53:02' WHERE ((token='C7G8X3HABCDEFGH') AND (invalidate IS NULL)) AND (project=26118) LIMIT 1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395751 lock_mode X locks rec but not gap waiting
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

*** (2) TRANSACTION:
TRANSACTION 0 83395676, ACTIVE 37 sec, process no 26405, OS thread id 140507856160512 fetching rows, thread declared inside InnoDB 451
mysql tables in use 1, locked 1
4 lock struct(s), heap size 1216, 53 row lock(s), undo log entries 1
MySQL thread id 1163198, query id 199628885 localhost sosci updating
DELETE FROM `database`.`table` WHERE ((action="limit") AND (info='login') AND (creation < DATE_SUB(NOW(), INTERVAL 10 MINUTE)))
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X waiting
Record lock, heap no 117 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c31d2; asc   1 ;; 1: len 6; hex 000004f884fc; asc       ;; 2: len 7; hex 80000011040110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: SQL NULL; 5: len 1; hex 06; asc  ;; 6: SQL NULL; 7: len 15; hex 3133322e3139392e3132312e313632; asc 132.199.121.162;; 8: len 5; hex 6c6f67696e; asc login;; 9: len 1; hex 81; asc  ;; 10: len 8; hex 8000124ef49502aa; asc    N    ;; 11: SQL NULL;

*** WE ROLL BACK TRANSACTION (1)

What I do not understand is: why? The locked table is very small - only 61 entries (about 30 new and 30 deleted per day; the auto-incremented primary key is near 800,000). No column is especially large.

I use the InnoDB engine for this table (one key refers to another table with approx. 20,000 entries), and the problem occurs from time to time. RAM should not be an issue. The webserver and the MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) on large tables (1-2 million entries) did not cause any trouble.

Thanks for your hints!
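One mitigation often suggested for this pattern (a sketch only; the table in the log is anonymized, so the names below are assumed from the two statements shown): give each statement a secondary index matching its WHERE clause, so the DELETE and the UPDATE each lock only the rows they actually touch instead of walking the PRIMARY index.

ALTER TABLE `table`
    ADD INDEX idx_cleanup (action, info, creation),
    ADD INDEX idx_token   (project, token, invalidate);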

SQL Server 2005 not collecting logins in the log file

Posted: 17 Sep 2013 01:21 PM PDT

I am currently running SQL Server 2005 Standard Edition on a Windows 2003 server machine.

I have gone to the properties of the server and confirmed that Login Auditing is set to both failed and successful logins. For some reason, though, there is nothing in the SQL Server logs for failed or successful logins, when I know there have been both.

I have searched for the reason and have not been able to come up with anything helpful, so I am hoping someone here can give me a little direction. This works on my other SQL Servers, so I know where to look for the results, but there are none here.

Cast to date is sargable but is it a good idea?

Posted: 17 Sep 2013 01:19 PM PDT

In SQL Server 2008 the date datatype was added.

In this Connect item you can see that casting a datetime column to date is sargable and may use an index on the datetime column.

select *  from T  where cast(DateTimeCol as date) = '20130101';  

The other option you have is to use a range instead.

select *  from T  where DateTimeCol >= '20130101' and        DateTimeCol < '20130102'  

Are these queries equally good or should one be preferred over the other?

SQL Fiddle

SQL Server 2008 R2 corrupted after changing MAXDOP

Posted: 17 Sep 2013 04:21 PM PDT

My SQL Server 2008 R2 doesn't work after changing the MAXDOP parameter on instances.

I have 6 instances located on the same physical server, and I changed MAXDOP from 1 to 2 on 5 of them. After that, none of those 5 instances work. After a restart of the server the services don't start, and I just see the error

The request failed or the service did not respond in a timely fashion. Consult the event log or other application error logs for details

In Event Viewer I didn't find anything useful, and the error log wasn't updated when I tried to start an instance.

I also noticed a strange thing. When I open Configuration Manager and open the properties of an instance, I don't see the text of the parameter names on the Advanced tab; on the other tabs everything is fine. A screenshot is attached. Has anybody faced the same problem?

I don't have any idea how to repair this...

(screenshot of the instance properties dialog omitted)

Update: By the way, I tried starting SQL Server manually from the command line with the -f or -m parameter, but nothing - I just see an empty error window.

Applying user-defined fields to arbitrary entities

Posted: 17 Sep 2013 08:21 PM PDT

Currently we have an old (rather crude) system that has user-defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:

USER_DEFINED_FIELDS:

UDF_ID         int
ENTITY_TYPE    int
UDF_NAME       nvarchar(64)
UDF_DATA_TYPE  int
UDF_FORMAT     nvarchar(16)
UDF_MASK       nvarchar(24)
UDF_FLAGS      int

UDF_VALUES_NUMBER:

UDF_ID         int
ENTITY_ID      int
VALUE          int
MODIFIED       datetime

UDF_VALUES_TEXT:

UDF_ID         int
ENTITY_ID      int
VALUE          nvarchar(255)
MODIFIED       datetime

etc...

This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story.

Obviously this is pretty hideous and leads to some spectacularly horrific queries being generated, but it has worked all right so far because we limit each entity to a maximum of 5 user-defined fields. As a quick disclaimer: I wasn't with the company when this design decision was made! ;)

Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2.

So far, I've come up with the following options:

  • Individual UDF table for each entity type, with identical structures.
    • Benefits: Queries are faster and simpler, solves the compound index problem, can search by UDF.
    • Downsides: Table versioning is more cumbersome, lots of extra tables, difficult to index across similar entity types (e.g. purchase orders and sales orders)
  • Binary blob UDF column in every entity's table.
    • Benefits: Really fast: no fetch overhead, no extra JOINs.
    • Downsides: Can't search by UDF, extra proprietary code required.
  • XML column in every entity's table.
    • Benefits: Fast like binary blobs, seems to be indexable.
    • Downsides: No idea how they work - looks complicated!

So, do any of these ideas have merit? Is there a better alternative?
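Since option 3 is the least familiar one, here is a minimal sketch of what it looks like on SQL Server 2008 R2 (table and path names are hypothetical):

-- One typed XML column per entity table:
ALTER TABLE dbo.PurchaseOrder ADD Udf xml NULL;

-- A primary XML index (requires a clustered primary key on the table)
-- makes path/value predicates indexable:
CREATE PRIMARY XML INDEX PXML_PurchaseOrder_Udf
    ON dbo.PurchaseOrder (Udf);

-- Querying a UDF value with XQuery:
SELECT PurchaseOrderId
FROM dbo.PurchaseOrder
WHERE Udf.value('(/udf/field[@name="color"])[1]', 'nvarchar(64)') = N'red';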
