Tuesday, April 16, 2013


Trouble with ACE.OleDB.12.0 connection across domains

Posted: 16 Apr 2013 09:13 PM PDT

Here is the situation: I have two servers; for simplicity's sake we'll call them database-server and website-server. The database-server holds an Access database that we're using during development of a web application. The website-server has its own domain and is isolated from the database-server for security reasons, save for a VPN connection to the shared folder that the database is held in. I have confirmed that I can create, edit and delete files within that folder from the website-server. (If it's relevant, the website is an ASP.Net front end with a VB.Net back end.)

When the database is put into the |DataDirectory| on the website-server it works exactly as it's supposed to. However, if I modify my connection string to point at the database across the VPN, it doesn't work: the error says the file is either open exclusively or that I have insufficient read/write permissions. It is definitely not opened exclusively.

Here is the connection string: "Provider=Microsoft.ACE.OLEDB.12.0; Data Source=\\database-server\database\db.accdb;"

Things I've tried:

  • Mapping the drive. This causes a different error: that the path is invalid.
  • Giving EVERYONE full rights to the database share.
  • Creating a UDL file that points at the database. That fails with something about a multi-step OLEDB operation generating errors.
  • Using the IP address of the database-server.

The VPN allows other users to connect via it.

All resources I've read (and since I've been trying to resolve this problem for days now, it's been a lot) indicate it should work.

Optimizing ORDER BY for simple MySQL query

Posted: 16 Apr 2013 07:04 PM PDT

I'm trying to optimize this really simple query, and it has been giving me grief for a day now :(

Seems pretty straightforward, I need to select with a JOIN from 2 tables, and get top X results sorted in a particular order. Here's the query:

SELECT *
FROM `po`
INNER JOIN po_suppliers s ON po.supplier_id = s.id
ORDER BY po.id ASC
LIMIT 10

However, it runs really slowly (half a second to 2 seconds). Here are the table structures:

CREATE TABLE `po` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `supplier_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `supplier_id` (`supplier_id`)
) ENGINE=InnoDB AUTO_INCREMENT=457790 DEFAULT CHARSET=latin1

CREATE TABLE `po_suppliers` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=9386 DEFAULT CHARSET=latin1

Running EXPLAIN reveals the issue:

+----+-------------+-------+-------+---------------+-------------+---------+--------------+------+----------------------------------------------+
| id | select_type | table | type  | possible_keys | key         | key_len | ref          | rows | Extra                                        |
+----+-------------+-------+-------+---------------+-------------+---------+--------------+------+----------------------------------------------+
|  1 | SIMPLE      | s     | index | PRIMARY       | PRIMARY     | 4       | NULL         |  480 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | po    | ref   | supplier_id   | supplier_id | 4       | sergiis.s.id |    1 | Using index                                  |
+----+-------------+-------+-------+---------------+-------------+---------+--------------+------+----------------------------------------------+

Can you help me figure out how to optimize this to run faster? I have an index on the column I sort by, and indexes on the columns I join on. If I remove the JOIN, it's crazy fast. If I remove the ORDER BY, it's crazy fast. Why am I getting this dreaded temporary + filesort?
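One rewrite that is often suggested for this LIMIT-plus-JOIN pattern is to take the top 10 rows of po first in a derived table, so the sort can be satisfied straight off the primary key, and only then join to the suppliers (a sketch against the tables above; whether the optimizer rewards it depends on the MySQL version):

SELECT p.*, s.*
FROM (
    SELECT *
    FROM `po`
    ORDER BY id ASC
    LIMIT 10        -- only these 10 rows ever reach the join
) AS p
INNER JOIN po_suppliers s ON p.supplier_id = s.id
ORDER BY p.id ASC;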

Validate Primary Key and Index Selection

Posted: 16 Apr 2013 06:42 PM PDT

I have a table that will store transaction data from store sales registers. I have read quite a bit on index and key choice, and below is what I have concluded is the best option, but I am new to this level of table design, so advice would be greatly appreciated.

The table will be queried mainly when looking up a specific transaction (TrxDate, StoreNo, RegNo and TrxNo) or summarising data by TrxDate/StoreNo.

Do my indices look correct? Should the UX_POS_eCommTrx_TrxDate_StoreNo_RegNo_TrxNo_SeqNo index be a primary key? Is it worth adding another index just to cover looking up specific transactions (same as the UX but without SeqNo)?

This is going to be a key table as the database grows, and I really want to avoid having to change things around once the table has grown significantly.

CREATE TABLE sfs.tbl_POS_eCommTrx (
    TrxDate   int           NOT NULL,
    StoreNo   smallint      NOT NULL,
    RegNo     smallint      NOT NULL,
    TrxNo     smallint      NOT NULL,
    SeqNo     tinyint       NOT NULL,
    SKN       int           NOT NULL,
    Pcs       smallint      NOT NULL,
    Retail    decimal(38,4) NOT NULL,
    CorrTend  numeric(1,0)  NOT NULL,
    OrderID   int,
    UID       int identity(1,1) NOT NULL,
    DateStamp datetime DEFAULT (getdate())
)

CREATE UNIQUE INDEX UX_POS_eCommTrx_TrxDate_StoreNo_RegNo_TrxNo_SeqNo
    ON sfs.tbl_POS_eCommTrx (TrxDate, StoreNo, RegNo, TrxNo, SeqNo)

CREATE INDEX IX_POS_eCommTrx_TrxDate_StoreNo
    ON sfs.tbl_POS_eCommTrx (TrxDate, StoreNo)

Thanks in advance.

Linked Server Risks

Posted: 16 Apr 2013 02:57 PM PDT

I'm implementing a new feature which requires data from databases on multiple servers. I just need to union data from all these servers and sort it. The two options that come to mind are:

  1. Use linked servers and write a simple query to union and sort the data, which will run from one server and gather data from the others (see the sketch after this list).

  2. Use the application to gather the data from all servers and send it back to SQL Server to sort (I don't want to implement the sort in the application).
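For reference, option 1 could look something like this (a sketch with hypothetical database, table and column names; [Server2] stands for the linked server):

-- runs on the local server; [Server2] is a linked server definition
SELECT OrderID, Amount
FROM SalesDb.dbo.Orders
UNION ALL
SELECT OrderID, Amount
FROM [Server2].SalesDb.dbo.Orders
ORDER BY Amount DESC;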

We run our servers in active/active clusters on SQL Server 2008 R2. All the databases have the same permissions: if you have access to one database/server, you have permission to them all. This is a public-facing application (which requires user login).

What are the risks of using linked servers? Are there any security flaws I should be concerned with? Are there any issues running linked servers in active/active clusters? Would there be any significant performance issues compared to the alternative?

There seems to be a general negative "buzz" about linked servers, but I cannot find anything concrete that would lead me to believe there are any real concerns there.

How to insert binary data into a PostgreSQL BYTEA column using libpqxx?

Posted: 16 Apr 2013 02:51 PM PDT

I'm a newbie at SQL, and even greener when it comes to using the PostgreSQL C++ API libpqxx.

What I'd like to do is insert some binary data into a BYTEA column, but I find the Doxygen output lacking in detail. The fact that http://pqxx.org/ has been down for the past few days isn't helping me...

How would I go about inserting the contents of somefile.bin into a table with a BYTEA column?

What I have is along these lines:

pqxx::work work( conn );
work.exec( "CREATE TABLE test ( name varchar(20), data BYTEA )" );
work.exec( "INSERT INTO test( name, data ) VALUES ( 'foo', <insert filename.bin here> )" );
work.commit();

If it makes a difference, I'd like to use the new hex format for BYTEA available in PostgreSQL 9.1.
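For reference, the hex input format itself looks like this in plain SQL (a sketch with made-up bytes; the question of producing and escaping it from libpqxx still stands):

-- hex format for bytea: a leading \x followed by hex digits
INSERT INTO test ( name, data ) VALUES ( 'foo', '\xDEADBEEF'::bytea );

(With standard_conforming_strings off, the backslash would need doubling: E'\\xDEADBEEF'.)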

If an account has REQUIRE SUBJECT, does it still need a password?

Posted: 16 Apr 2013 02:53 PM PDT

I'm in the process of setting up SSL-secured replication between two servers. Each server has its own public/private keypair, and the CA cert is just the concatenation of the two public certs, like this answer.

Now I'm updating the replication account with REQUIRE SUBJECT "exact subject of the client"

Is there any practical value to also having a password on the replication account (IDENTIFIED BY "secret")?
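For concreteness, the two clauses in question can be combined on one account (a sketch with placeholder host, password and certificate subject):

GRANT REPLICATION SLAVE ON *.*
    TO 'repl'@'192.168.0.%'
    IDENTIFIED BY 'secret'
    REQUIRE SUBJECT '/C=US/ST=CA/O=Example/CN=replication-client';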

MySQL "hangs" after every restart.. corrupted tables every now and then.. (5.1.68-cll)

Posted: 16 Apr 2013 02:30 PM PDT

I was seeking advice in forums about a problem we have been facing over the last few days. We have a vBulletin forum and never had any problems in general, but for the last couple of days the MySQL server hangs, and when we restart it the vbpost table becomes corrupted (it says that some connections weren't closed properly). Is there any way to restart MySQL without having such issues, or a way to close all connections properly in case we need a restart? We suspect it's about the settings in my.cnf, but we are not sure; we had someone do a new config, but we have had problems ever since we bought the server. It is a very good machine: 24 GB RAM, Intel quad core with HTT.

Any idea what is causing the problem, and/or any recommendations to avoid corrupted tables on each restart? Thanks!

Our my.cnf settings are:

[mysqld_safe]
nice = -15
open_files_limit = 18916

[client]
socket = /var/lib/mysql/mysql.sock
default-character-set = utf8

[mysql]
default-character-set = utf8

[mysqld]
## Charset and Collation
character-set-server = utf8
default-character-set = utf8
collation-server = utf8_general_ci
init-connect='SET NAMES utf8'
local-infile = 0
bind-address = 127.0.0.1
skip-networking

## Files
back_log = 300
open-files-limit = 8192
open-files = 1024
port = 3306
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
skip-external-locking

## Logging
datadir = /var/lib/mysql
#relay_log = mysql-relay-bin
relay_log_index = mysql-relay-index
log = mysql-gen.log
log_error = mysql-error.err
log_warnings
log_bin = mysql-bin
log_slow_queries = mysql-slow.log
#log_queries_not_using_indexes
long_query_time = 10 #default: 10
max_binlog_size = 256M #max size for binlog before rolling
expire_logs_days = 4 #binlog files older than this will be purged

## Per-Thread Buffers * (max_connections) = total per-thread mem usage
thread_stack = 256K #default: 32bit: 192K, 64bit: 256K
sort_buffer_size = 1M #default: 2M, larger may cause perf issues
read_buffer_size = 1M #default: 128K, change in increments of 4K
read_rnd_buffer_size = 1M #default: 256K
join_buffer_size = 1M #default: 128K
binlog_cache_size = 64K #default: 32K, size of buffer to hold TX queries
## total per-thread buffer memory usage: 8832000K = 8.625GB

## Query Cache
query_cache_size = 32M #global buffer
query_cache_limit = 512K #max query result size to put in cache

## Connections
max_connections = 2000 #multiplier for memory usage via per-thread buffers
max_connect_errors = 100 #default: 10
concurrent_insert = 2 #default: 1, 2: enable insert for all instances
connect_timeout = 30 #default -5.1.22: 5, +5.1.22: 10
max_allowed_packet = 128M #max size of incoming data to allow
wait_timeout = 360

## Default Table Settings
sql_mode = NO_AUTO_CREATE_USER

## Table and TMP settings
max_heap_table_size = 1G #recommend same size as tmp_table_size
bulk_insert_buffer_size = 1G #recommend same size as tmp_table_size
tmp_table_size = 1G #recommend 1G min
table_definition_cache = 4K
#tmpdir = /data/mysql-tmp0:/data/mysql-tmp1 #Recommend using RAMDISK for tmpdir

## Table cache settings
#table_cache = 512 #5.0.x <default: 64>
#table_open_cache = 512 #5.1.x, 5.5.x <default: 64>

## Thread settings
thread_concurrency = 16 #recommend 2x CPU cores
thread_cache_size = 100 #recommend 5% of max_connections

## Replication
#read_only
#skip-slave-start
#slave-skip-errors = <default: none, recommend: 1062>
#slave-net-timeout = <default: 3600>
#slave-load-tmpdir = <location of slave tmpdir>
#slave_transaction_retries = <default: 10>
#server-id = <unique value>
#replicate-same-server-id = <default: 0, recommend: 0, !if log_slave_updates=1>
#auto-increment-increment = <default: none>
#auto-increment-offset = <default: none>
#master-connect-retry = <default: 60>
#log-slave-updates = <default: 0 disable>
#report-host = <master_server_ip>
#report-user = <replication_user>
#report-password = <replication_user_pass>
#report-port = <default: 3306>
#replicate-do-db =
#replicate-ignore-db =
#replicate-do-table =
#relicate-ignore-table =
#replicate-rewrite-db =
#replicate-wild-do-table =
#replicate-wild-ignore-table =

## MyISAM Engine
key_buffer = 1M #global buffer
myisam_sort_buffer_size = 128M #index buffer size for creating/altering indexes
myisam_max_sort_file_size = 256M #max file size for tmp table when creating/altering indexes
myisam_repair_threads = 4 #thread quantity when running repairs
myisam_recover = BACKUP #repair mode, recommend BACKUP

innodb_data_home_dir = /var/lib/mysql
innodb_data_file_path = ibdata1:18M;ibdata2:10M:autoextend
innodb_log_file_size = 512M #64G_RAM+ = 768, 24G_RAM+ = 512, 8G_RAM+ = 256, 2G_RAM+ = 128
innodb_log_files_in_group = 4 #combined size of all logs <4GB. <2G_RAM = 2, >2G_RAM = 4
innodb_buffer_pool_size = 18G #global buffer
innodb_additional_mem_pool_size = 4M #global buffer
innodb_status_file #extra reporting
innodb_file_per_table #enable always
innodb_flush_log_at_trx_commit = 2 #2/0 = perf, 1 = ACID
innodb_table_locks = 0 #preserve table locks
innodb_log_buffer_size = 128M #global buffer
innodb_lock_wait_timeout = 60
innodb_thread_concurrency = 16 #recommend 2x core quantity
innodb_commit_concurrency = 16 #recommend 4x num disks
#innodb_flush_method = O_DIRECT #O_DIRECT = local/DAS, O_DSYNC = SAN/iSCSI
innodb_support_xa = 0 #recommend 0, disable xa to negate extra disk flush
skip-innodb-doublewrite

## Binlog sync settings
## XA transactions = 1, otherwise set to 0 for best performance
sync_binlog = 0

## TX Isolation
transaction-isolation = REPEATABLE-READ #REPEATABLE-READ req for ACID, SERIALIZABLE req XA

## Per-Thread Buffer memory utilization equation:
#(read_buffer_size + read_rnd_buffer_size + sort_buffer_size + thread_stack + join_buffer_size + binlog_cache_size) * max_connections

[mysqldump]
quick
quote-names
max_allowed_packet = 128M

SQL Server 2008 error when connecting

Posted: 16 Apr 2013 08:27 PM PDT

I have Windows 7 Professional and SQL Server 2008 Standard.

In Management Studio, when I press Connect, it shows this error:

( A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2))

Here is what I have tried since getting the error:

  1. In Windows Firewall: added SQL Server Management Studio as an allowed program
  2. Checked that the SQL Server service is running (SQL Server Services >> SQL Server (SQLEXPRESS))
  3. In SQL Native Client 10.0 Configuration >> Client Protocols: set Named Pipes to enabled
  4. In SQL Native Client 10.0 Configuration >> Aliases: set the name SQLS as an alias for the server
  5. In the firewall >> Change settings >> Exceptions: added port 1433 for the TCP protocol for SQL Server

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 16 Apr 2013 12:35 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances, i.e.:

  • If the script contains any substitution variable, sqlplus should return with an error instead of prompting for the variable
  • The script must not "hang" when it doesn't contain an exit; at the end
  • Anything else: If it would require a prompt, it should return with an error.

How can I do this with Oracle (and sqlplus, or something else)?
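For what it's worth, a script header along these lines is a common starting point for unattended runs (a sketch; the SELECT is a stand-in for the real work):

WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR EXIT FAILURE
SET DEFINE OFF          -- treat & literally, so no substitution prompts
SELECT COUNT(*) FROM dual;
EXIT

Invoking sqlplus with -S (silent) and -L (attempt the logon only once) also helps keep it from sitting at a prompt.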

viewing the population? (sql server full text search)

Posted: 16 Apr 2013 12:21 PM PDT

I'm thinking that when we are working on full-text search, we might want to view how the SQL Server FT engine breaks up the data and indexes it. In my case the column being indexed holds simple nvarchar strings with spaces (no punctuation).

I'd want to be able to take the "tokens" out. Any idea if this is possible?
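On SQL Server 2008 and later there is a word-breaker DMF that can be queried directly (a sketch; 1033 is the US-English LCID, and sysadmin membership is required):

SELECT display_term, occurrence, special_term
FROM sys.dm_fts_parser(N'"the quick brown fox"', 1033, 0, 0);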

Using postgres with the command line without pgAdmin

Posted: 16 Apr 2013 09:15 PM PDT

I started working with PostgreSQL and so far I've only had issues with it (pgAdmin3 issues mostly). I work a lot with MySQL, sometimes on the command line by just doing

mysql -u ... -h localhost -p, for example, and I'm connected through the command line.

Is there also a way to use a Postgres database without pgAdmin and work with it through the command line (terminal)? (I am working on OS X Mountain Lion.)

P.S. I am also not a big fan of letting PostgreSQL create a user. It annoys me when I start up my Mac and see Postgres as a user on the login screen. Is it possible to avoid that?

Prevent users from using power pivot excel connections to a database

Posted: 16 Apr 2013 11:45 AM PDT

A couple of weeks ago my boss connected to our production database via Excel and was able to query the database. Because of how we're doing security (AD groups), he does have access to the database, but I'd rather not have users just willy-nilly creating pivot tables against our OLTP databases. Is there a way to prevent these sorts of connections?

The way our security works: when a user accesses the application, it checks the AD group, and if the user is in the group the application lets them in. We use the same AD group to allow access to the database; the application passes the user's information through to SQL Server.
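One commonly suggested server-side option is a logon trigger keyed off the application name (a sketch; note that APP_NAME() reports whatever the client claims, so this is easily spoofed and should be tested carefully):

CREATE TRIGGER block_excel_logons
ON ALL SERVER
FOR LOGON
AS
BEGIN
    -- refuse connections whose client identifies itself as Excel
    IF APP_NAME() LIKE N'%Excel%'
        ROLLBACK;
END;

Keep an exemption path for yourself; a buggy logon trigger can lock everyone out.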

MySQL: Join one row in a table with two rows in another table

Posted: 16 Apr 2013 03:19 PM PDT

In my MySQL database COMPANY I have two tables, shown below in my diagram (arrows show relations):

  `users`                          `user_login`
+--------------+            +-----------------------+
| column_name  |            | column_name           |
+--------------+            +-----------------------+
| user_id      |<---|       | user_login_id         |
| first_name   |    |-------| user_id               |
| email_id     |            | user_name             |<---|
+--------------+            | created_by            |----|
                            +-----------------------+
  1. Table user_login has a recursive association, such that an employee's username can be created by his boss in some web-based application: a self-relationship of something like created_by (1) - user_name (∞).

  2. Table users contains personal information for each user (or, say, for each username in the user_login table).

All fields are varchar(64) in both tables, and user_name has a uniqueness constraint. (The actual database I am working with is quite big, with more columns and many rows, but I am only including the useful information here.)

[Query]

The input to my query is a user_name.

I need a query such that, given a user_name value, it returns information from the users table (first_name and email_id) for both the user and his immediate boss.

Suppose my user table is:

mysql> SELECT `user_id`, `first_name`, `email_id` FROM `users`;
+----------+------------+---------------------------+
| user_id  | first_name | email_id                  |
+----------+------------+---------------------------+
| 1        | Grijesh    | grijesh.mnit@gmail.com    |
| 8        | Sumit      | sumit@cscape.in           |
| b        | OMD        | grijesh.chauhan@cscape.in |
+----------+------------+---------------------------+
3 rows in set (0.00 sec)

And user_login is

mysql> SELECT user_login_id, user_id, user_name, created_by FROM `user_login`;
+----------------+-----------+-----------+------------+
| user_login_id  | user_id   | user_name | created_by |
+----------------+-----------+-----------+------------+
| 13             | 1         | grijesh   | omdadmin   |
| 89             | 8         | sumit01   | grijesh    |
| bd             | b         | omdadmin  | SuperAdmin |
+----------------+-----------+-----------+------------+
3 rows in set (0.00 sec)

Then for the input user_name = 'grijesh', the output should be the details from the users table for both grijesh and his boss omdadmin.

For this I have written a query like the one below (it is working fine):

mysql> SELECT  `user_login`.`user_name`,
               `users`.`first_name`,
               `users`.`email_id`
       FROM    `user_login`, `users`
       WHERE   `user_login`.`user_id` = `users`.`user_id` AND
               (`user_login`.`user_name` = 'grijesh'
                  OR
                `user_login`.`user_name` IN ( SELECT `created_by`
                                              FROM `user_login`
                                              WHERE `user_login`.`user_name` = 'grijesh' ));

Its output looks like:

+-----------+------------+---------------------------+
| user_name | first_name | email_id                  |
+-----------+------------+---------------------------+
| grijesh   | Grijesh    | grijesh.mnit@gmail.com    |
| omdadmin  | OMD        | grijesh.chauhan@cscape.in |
+-----------+------------+---------------------------+
2 rows in set (0.06 sec)

[QUESTION]:

Can we have a better, more efficient query equivalent to the one above? I tried to think of a solution with joins, but it's hard for me with this query, because for one row in user_login I need to join with two rows in the users table.

I tried with a join but couldn't get it to work. I'd like a solution with a JOIN instead of a nested query like mine.
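For what it's worth, one join-based rewrite of the same logic looks like this (a sketch; it relies on user_name being unique):

SELECT  l.user_name,
        u.first_name,
        u.email_id
FROM    user_login AS me
JOIN    user_login AS l
        ON l.user_name = me.user_name
        OR l.user_name = me.created_by
JOIN    users AS u
        ON u.user_id = l.user_id
WHERE   me.user_name = 'grijesh';

Here me is the row for the given user, and l matches both that row and the boss's row via created_by.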

Any help or suggestion would be a great help.

Query runs slow in test site on first execution. Why?

Posted: 16 Apr 2013 12:51 PM PDT

I found this query by watching a test site with SQL Profiler for anything taking over 10 seconds. I plopped the code right out of SQL Profiler and into SSMS, where it executed quickly. The slow "first run" behavior can be reset by using DBCC DROPCLEANBUFFERS.

Here's the slow query:

exec sp_executesql N'SELECT [t0].*
FROM [dbo].[MyTable] AS [t0]
WHERE [t0].[ParentID] IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8, @p9,
    @p10, @p11, @p12, @p13, @p14, @p15, @p16, @p17, @p18, @p19,
    @p20, @p21, @p22, @p23, @p24, @p25, @p26, @p27, @p28, @p29,
    @p30, @p31, @p32, @p33, @p34, @p35, @p36, @p37, @p38, @p39,
    @p40, @p41, @p42, @p43, @p44, @p45, @p46, @p47, @p48, @p49,
    @p50, @p51, @p52, @p53, @p54, @p55, @p56, @p57, @p58, @p59,
    @p60, @p61, @p62, @p63, @p64, @p65, @p66, @p67, @p68, @p69,
    @p70, @p71, @p72, @p73, @p74)',
N'@p0 int,@p1 int,@p2 int,@p3 int,@p4 int,@p5 int,@p6 int,@p7 int,@p8 int,@p9 int,
@p10 int,@p11 int,@p12 int,@p13 int,@p14 int,@p15 int,@p16 int,@p17 int,@p18 int,@p19 int,
@p20 int,@p21 int,@p22 int,@p23 int,@p24 int,@p25 int,@p26 int,@p27 int,@p28 int,@p29 int,
@p30 int,@p31 int,@p32 int,@p33 int,@p34 int,@p35 int,@p36 int,@p37 int,@p38 int,@p39 int,
@p40 int,@p41 int,@p42 int,@p43 int,@p44 int,@p45 int,@p46 int,@p47 int,@p48 int,@p49 int,
@p50 int,@p51 int,@p52 int,@p53 int,@p54 int,@p55 int,@p56 int,@p57 int,@p58 int,@p59 int,
@p60 int,@p61 int,@p62 int,@p63 int,@p64 int,@p65 int,@p66 int,@p67 int,@p68 int,@p69 int,
@p70 int,@p71 int,@p72 int,@p73 int,@p74 int',
@p0=121888,@p1=317624,@p2=278130,@p3=299426,@p4=128786,@p5=553917,@p6=169682,@p7=316993,@p8=319430,@p9=321347,
@p10=377276,@p11=388570,@p12=233344,@p13=304376,@p14=318493,@p15=318190,@p16=144455,@p17=342559,@p18=309867,@p19=258251,
@p20=139296,@p21=530970,@p22=288191,@p23=127107,@p24=547572,@p25=617531,@p26=238898,@p27=606923,@p28=267113,@p29=140833,
@p30=122554,@p31=298846,@p32=562964,@p33=554626,@p34=414874,@p35=534996,@p36=614977,@p37=230423,@p38=261899,@p39=149666,
@p40=179537,@p41=148420,@p42=262955,@p43=298094,@p44=575449,@p45=246861,@p46=572334,@p47=172152,@p48=529420,@p49=129074,
@p50=266589,@p51=194619,@p52=376201,@p53=608389,@p54=162335,@p55=405965,@p56=125671,@p57=146195,@p58=538850,@p59=575254,
@p60=129485,@p61=243677,@p62=615828,@p63=236197,@p64=343015,@p65=294449,@p66=562013,@p67=138933,@p68=614729,@p69=561779,
@p70=-1,@p71=-1,@p72=-1,@p73=-1,@p74=-1

Here's the table definition, with indexes:

CREATE TABLE [dbo].[MyTable] (
    [MyID] [int] IDENTITY (1, 1) NOT NULL ,
    [GrandParentID] [int] NOT NULL ,
    [ParentID] [int] NOT NULL ,
    [Col1] [nvarchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [Col2] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [Col3] [nvarchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [Col4] [nvarchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [Col5] [decimal](18, 2) NULL ,
    [Col6] [char] (2) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [Col7] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [OtherKey] [int] NULL ,
    [Col8] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Col9] [datetime] NULL ,
    [Col10] [nvarchar] (150) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    CONSTRAINT [PK__7E8439BE] PRIMARY KEY CLUSTERED
    (
        [MyID]
    ) ON [PRIMARY]
) ON [PRIMARY]

CREATE INDEX [MyTable_ParentID] ON [dbo].[MyTable]([ParentID]) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [MyTable_OtherKey] ON [dbo].[MyTable]([OtherKey] DESC) WITH FILLFACTOR = 90 ON [PRIMARY]
CREATE INDEX [MyTable_GrandParentID] ON [dbo].[MyTable]([GrandParentID]) ON [PRIMARY]

Here are the timings and IO:

-- Test site - first run:
(7064 row(s) affected)
Table 'MyTable'. Scan count 71, logical reads 49255, physical reads 3, read-ahead reads 13160, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 140 ms,  elapsed time = 30400 ms.

-- Test site - second run:
(7064 row(s) affected)
Table 'MyTable'. Scan count 71, logical reads 29054, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 78 ms,  elapsed time = 169 ms.

-- Production site - first run:
(7064 row(s) affected)
Table 'MyTable'. Scan count 71, logical reads 50513, physical reads 3, read-ahead reads 13157, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 62 ms,  elapsed time = 276 ms.

-- Production site - second run:
(7064 row(s) affected)
Table 'MyTable'. Scan count 71, logical reads 29054, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 63 ms,  elapsed time = 262 ms.

The test site and production site are set up the same way: database files are on a single virtual hard drive, which is backed by a many-disk load-balancing SAN.

Execution plan:

constantscan -> sort -> nested loop -> index seek -> nested loop -> key lookup

Is high Extent Fragmentation a problem?

Posted: 16 Apr 2013 09:21 PM PDT

DBCC SHOWCONTIG scanning 'MyTable' table...
Table: 'MyTable' (2048062382); index ID: 1, database ID: 28
TABLE level scan performed.
- Pages Scanned................................: 1019182
- Extents Scanned..............................: 127400
- Extent Switches..............................: 127399
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [127398:127400]
- Logical Scan Fragmentation ..................: 0.01%
- Extent Scan Fragmentation ...................: 77.25%
- Avg. Bytes Free per Page.....................: 135.7
- Avg. Page Density (full).....................: 98.32%

I have read that a Scan Density of 100% is very good, and Logical Scan Fragmentation under 1% is also great. The 77% Extent Scan Fragmentation troubles me, but the internet says to ignore it.

I'm analyzing a single-table slow performing query. It runs ~30 seconds on first execution, then 200 ms on second and subsequent executions. I can reset this behavior with DBCC DROPCLEANBUFFERS.

Is the high Extent Scan Fragmentation an important clue?

(If not, I'll likely add another question about my single-table query).

Referencing multiple tables from a single table and storing additional data

Posted: 16 Apr 2013 01:30 PM PDT

I have a database where I want to store lap time information from a racing game. I have tables for car, track, game, tuning and upgrade data and want to put all these together to create a single dataset, so that I can e.g. display all times for a track, or detailed information about a lap time on a specific track in a specific car.

My schema is:

Game
-------------
GameID (PK), GameName

Cars
-------------
CarID (PK), Manufacturer, Model, GameID (FK)

Tracks
-------------
TrackID (PK), TrackName

Parts
------------
PartID (PK), PartName, GameID (FK)

Tuning
-----------
TuneID (PK), ElementName, GameID (FK)

TrackGame (-- A Track can appear in multiple Games)
-----------
TrackID (FK), GameID (FK)

To achieve that, I created a table TimeSet where I want to put everything together. But since a car can have multiple Parts installed and many different Tuning values, I'm having a hard time integrating this properly into TimeSet. I thought of splitting the data over three tables (see below), but I don't think this looks like a good solution, because inserting data cleanly will probably be difficult.

TimeSet
-----------
TimeSetID (PK), GameID (FK), TrackID (FK), CarID (FK), (TimeSetID, TuneID) (FK), (TimeSetID, PartID) (FK), time

Time_Tune
-----------
(TimeSetID (FK), TuneID (FK)) (PK), value

Time_Part
-----------
(TimeSetID (FK), PartID (FK)) (PK)

What is the proper way to do this? I'm working with SQLite 3 on Android.

db2 tables altered, reorg

Posted: 16 Apr 2013 02:25 PM PDT

When I alter a table in DB2, I need to reorg it:

Call Sysproc.admin_cmd ('reorg Table myTable');  

If I have made many modifications, how can I know which tables need a reorg?
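One possibility (a sketch; availability depends on the DB2 version and fix pack) is the REORGCHK statistics procedure, whose output flags tables whose statistics formulas suggest a reorg:

CALL SYSPROC.REORGCHK_TB_STATS('T', 'ALL');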

MySQL: CAST + SUBSTR vs. FLOOR + MOD

Posted: 16 Apr 2013 01:33 PM PDT

I have a field in a MySQL database where dates are stored as BIGINT like YYYYMMDDHHMMSS (PHP: date('YmdHis')). I'm building a query to check the number of rows per hour, and have tried two different things:

SELECT x hour, ... FROM table WHERE ... GROUP BY x ...

  1. x = SUBSTR(CAST(time AS CHAR(14)), 9, 2)
  2. x = MOD(FLOOR(time / 10000), 100) (see the full query sketch after this list)
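For concreteness, variant 2 written out as a full query (a sketch; mytable and `time` stand in for the real table and column names from the question):

SELECT MOD(FLOOR(`time` / 10000), 100) AS hr,
       COUNT(*) AS rows_in_hour
FROM mytable
GROUP BY hr;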

I built a quick benchmark (ran 1000 queries with each version) and both of them seem to be equally quick, so I'm really interested in what database designers would advise me to do.

MongoDB master - slave not syncing anymore

Posted: 16 Apr 2013 12:47 PM PDT

I have a setup with two MongoDB instances, one master and one slave. The sync worked perfectly fine for quite a while, but suddenly stopped. The major problem is that the guy who set up the servers is not here anymore, so I started to dig into the problem without having much MongoDB know-how. Strangely, the initial sync does work: if I stop the slave instance, empty the dbpath directory and start MongoDB again, it does sync the data. So the instances can see each other and they know about the master-slave setup. But if I let them run for a while and check the replication information, the slave says:

> db.printSlaveReplicationInfo();
source:   <MASTER_SERVER_IP>:64001
     syncedTo: Fri Mar 29 2013 11:53:08 GMT+0100 (CET)
         = 808225secs ago (224.51hrs)
> use admin;
switched to db admin
> db.runCommand({resync: 1})
{ "errmsg" : "not dead, no need to resync", "ok" : 0 }

Startup options of the slave instance:

bin/mongod --rest --dbpath data_01 --port 64000 --logpath log/mongodb_01.log \
    --fork --journal --slave --source <MASTER_SERVER_IP>:64001 \
    --only <DB_NAME> --autoresync

We use MongoDB version 1.8.1.

We do have a test setup, same version, same configuration. Things work just fine there. I'm out of ideas :-( What did I miss?

How to disable oracle's MAX_ENABLED_ROLES limit

Posted: 16 Apr 2013 01:06 PM PDT

How can I disable Oracle's MAX_ENABLED_ROLES limit, or expand the value of the limit? [Oracle 10g (win32)]
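For reference, init parameters of this kind are changed in the spfile and take effect after an instance restart (a sketch; note that MAX_ENABLED_ROLES is documented as deprecated in 10g, so check whether it still has any effect on your release):

ALTER SYSTEM SET max_enabled_roles = 148 SCOPE=SPFILE;
-- restart the instance for the change to take effect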

How can I convert an Oracle dump file into SQL Server?

Posted: 16 Apr 2013 05:20 PM PDT

I want to ask about converting Oracle dump files (.dmp) into SQL Server files (.bak), where during the conversion I don't have to be connected to any database server.

I've searched for related technologies, such as OraDump to SQL Server. Do you have another suggestion to solve this? I mean open source ones.


Thanks for both of your responses. I see how difficult it will be, but is there any possibility of converting the Oracle dump file another way? All of the converter tools I've found require a connection to a database server. I'd be thankful if you could suggest another tool. Thanks anyway.

Databases list not showing up?

Posted: 16 Apr 2013 08:32 PM PDT

This morning when I logged into SQL Server Management Studio (v10.0.16) I clicked on the Databases node and saw all my DBs (they are hosted remotely), as I have done for the last 3 years. This evening, when I click on the Databases node I see NOTHING, except the system databases.

According to my hosting company, this is an issue with my local permissions. I have searched everywhere and found nothing to help me. I'm now desperate, and any help would be massively appreciated.

Lock escalation problem on a trigger

Posted: 16 Apr 2013 01:33 PM PDT

I've inherited a SQL Server 2005 database that is getting 2-3 deadlocks a day.

I've tracked it down to a scheduled job that runs during the day and inserts into a table with a trigger.

The trigger consists of 10 updates to another table for slightly different criteria. The deadlock occurs in the trigger.

The deadlock occurs when a person makes an application while the job is running; the application inserts into the same table as the scheduled job.

[image: deadlock trace]

Looking at the trace it seems to happen when process 1 obtains a key lock, process 2 obtains a page lock, then process 1 escalates the key lock to a page lock and process 2 tries to obtain a key lock.

I've added missing indexes, which seems to have helped, but it's still happening. I'm not a DBA, so any advice on an approach to solving this would be appreciated.

I've added a link to the deadlock xml - this is from a test I did to duplicate the problem.

deadlock xml

"Variant" values structure and their use for multiple columns

Posted: 16 Apr 2013 07:40 PM PDT

note on the edit: the question was originally only about the opportunity of using a single "variant" structure for multiple columns, but all the initial comments asked for clarification about the actual structure itself, something I had hoped to discuss later. It is probably hard to separate the two matters, so now I want to discuss both the best implementation of a "variant values" structure and the convenience of using it for more than one column.

I reworked the old question because it would get too messy otherwise. Previous readers do not need to re-read the question, although that may clarify some things.

Reasons

Sometimes you have columns that can legitimately hold values of any simple type at all times, but your DBMS does not support a variant type.

Probably this can occur only for metadata or other exotic uses of databases. Let's not discuss the appropriateness of these things; assume there can be legitimate reasons for them.

Just note that I mean to use the system for a limited set of columns with complex functionality, not as a convenience for OTLT or similar things.

You can find more information on my present specific case in the comments to this question and to this answer.

Clarifications

I want a column to be able to have one single value for each row, and I don't want ways to restrict the types accepted by individual rows: the value for every row can legitimately come from any of the types supported by the structure, at all times.

There are cases when you want for example another column to determine the type of the valid values, but in this question I'm considering only the unrestricted case.

Note: most of the time the information you store cannot truly accept any type. For example, if you store metadata about another database, where each row is about a column from that database, that column obviously has a defined type. But if you don't need the type information for other uses, it is not necessary to store a separate "Type" column; unless you assigned different security permissions, it would be exactly equivalent to set a value in a "Type" column or to directly choose a value from one of the supported types.

Structure example / proposal

This structure uses a table for the values' IDs (Values) that would be referenced by the column or columns. The various Values_ tables contain the actual values. I have included some tables here; if you used the structure for many columns you might add more. One advantage of using one structure per column is that some columns may need fewer tables; for some, you may be sure they will never need fractional numbers of great precision, for example.

Note: if you used one structure for multiple columns, you would only allow types valid for all of the columns. The smaller number of tables in the structure-per-column case I just mentioned would be due to including only the expected types; but it wouldn't be a catastrophe to set, by mistake, one of the less-expected types in the single-structure case. (If this is not clear, it is not important; don't mind it.)

The referential constraints to Values.Type are there only to ensure that only one actual value can be assigned to each Values.ID. I was not sure about referencing columns that do not constitute a primary key, but it seems to be OK; I have seen it used in several answers on this site.

Values_Null indicates Null "values", irrespective of the type. Sometimes you might need this, and you wouldn't care what type the column containing the Null actually had; we only want to indicate values here. This table would normally contain only one row or zero rows if you reuse the values (more on "reusing the values" under "Use" below).

CREATE TABLE ValueTypes (
  ID INT PRIMARY KEY,
  Name VARCHAR (30)
)

INSERT INTO ValueTypes (ID, Name) VALUES (1, 'Int')
INSERT INTO ValueTypes (ID, Name) VALUES (2, 'VarChar255')
INSERT INTO ValueTypes (ID, Name) VALUES (3, 'Money')
INSERT INTO ValueTypes (ID, Name) VALUES (4, 'Boolean')
INSERT INTO ValueTypes (ID, Name) VALUES (5, 'Null')

CREATE TABLE Values (
  ID INT PRIMARY KEY,
  Type INT NOT NULL,
  Notes CHAR LARGE OBJECT,
  FOREIGN KEY (Type) REFERENCES ValueTypes (ID)
)

CREATE TABLE Values_Int (
  ValueID INT PRIMARY KEY,
  Type INT CHECK (Type=1),
  Value INT,
  FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_VarChar255 (
  ValueID INT PRIMARY KEY,
  Type INT CHECK (Type=2),
  Value VARCHAR (255),
  FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_Money (
  ValueID INT PRIMARY KEY,
  Type INT CHECK (Type=3),
  Value DECIMAL (15,4),
  FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_Boolean (
  ValueID INT PRIMARY KEY,
  Type INT CHECK (Type=4),
  Value BOOLEAN,
  FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)

CREATE TABLE Values_Null (
  ValueID INT PRIMARY KEY,
  Type INT CHECK (Type=5),
  FOREIGN KEY (ValueID, Type) REFERENCES Values (ID, Type)
)


Alternative "variant" structures and reasons for my example

The alternative "variant" solutions I can think of are:

  • The most trivial: using a simple textual column for any type. I'm ruling this out because of the high chance of format mistakes, in addition to space waste and poor performance.
  • Putting all the columns for the various different types in a single table. This would mean a lot of Nulls, and thus wasted space, but maybe it would be acceptable? Colin 't Hart below advocated this.
  • Making the individual Values_ tables directly reference the table where the "variant" column is contained. This would of course mean one set of tables for each of these columns, confusion if more than one such column is needed in a table, and most of all the impossibility of enforcing one single value per row, unless you also add a "Type" column for each of these columns.


Use

I would delete old values when they're no longer referenced, thus forbidding their direct external use in queries. When values already in use are needed again, I would let the user either reference the existing record or insert a duplicate value. So in general the values should never change, only be deleted (although maybe a functionality to give a precise meaning to a record, using appropriate additional flag columns, could be useful, though probably confusing).

Questions

  1. What could be the best "variant" structure in general?
  2. Would it be better to use one structure per column, or one for all columns (meaning all with the same set of accepted values)?

    The downsides I can see of using one single structure for multiple columns are the need to search among more values, and possibly the need for one more table in the middle and thus one more join (although this table can be convenient even when using one structure per column).
    I would like to get ideas about the point at which a large number of (indexed) values in a table starts to slow searches significantly and makes division into more tables worthwhile.

    The downside I can see of one structure per column is a lot more tables, meaning more confusion; maybe too much.


Thanks

Thank you if you read all this

How can I verify I'm using SSL to connect to mysql?

Posted: 16 Apr 2013 02:22 PM PDT

I have configured my server to allow SSL, and have modified my client ~/.my.cnf so I use SSL:

[client]
ssl
ssl-cipher=DHE-RSA-AES256-SHA
ssl-ca=~/certs/ca-cert.pem

When I log in with my client and view the status, it lists a cipher on the SSL line:

mysql> \s
--------------
SSL:            Cipher in use is DHE-RSA-AES256-SHA

Without installing something like wireshark to verify that the connection is secure, can I assume that I'm connecting via SSL based on this information?
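One extra cross-check, from the server's point of view, is to ask which cipher was negotiated for the current session; an empty Value means the connection is not encrypted:

SHOW SESSION STATUS LIKE 'Ssl_cipher';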

MySQL Replication using SSL

Posted: 16 Apr 2013 02:27 PM PDT

I am in the process of replicating my database so I can have a master-slave configuration. One of the issues I have is with security: I am generating my server/client keys and certificates using OpenSSL, and I also generate my own CA key and certificate to self-sign them. I understand the issues with self-signed certificates on a public website, but do you think this will be as serious a problem when used for replication?

Delete a word, its meanings, and its meanings' example sentences from the DB

Posted: 16 Apr 2013 01:02 PM PDT

I have three tables as below (simplified for demonstration):

words
=====
integer id
text    word

meanings
========
integer id
integer word_id
text    meaning

examples
========
integer id
integer meaning_id
text    sentence

where word_id stores the id of the word in the words table and meaning_id stores the id of the meaning in the meanings table. I am trying to figure out a SQL query that, given a word's id, deletes the word with all its meanings and example sentences all at one time. Is such a SQL query possible to compose? If so, how?

Edit1: I am using SQLite3 as the database.

Edit2: I figured out the following solution, which requires 3 SQL queries run in order:

DELETE FROM examples WHERE meaning_id IN (SELECT id FROM meanings WHERE word_id=the_given_id);
DELETE FROM meanings WHERE word_id=the_given_id;
DELETE FROM words WHERE id=the_given_id;

I'm still looking for the answer to my question: is the whole process possible to be done in one query?
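Not a single DELETE as written, but for the record: SQLite 3.6.19+ can cascade the deletes itself if the child tables are declared with ON DELETE CASCADE and foreign-key enforcement is switched on (a sketch of the declarations, abbreviated to the relevant columns):

PRAGMA foreign_keys = ON;

CREATE TABLE meanings (
  id       INTEGER PRIMARY KEY,
  word_id  INTEGER NOT NULL REFERENCES words(id) ON DELETE CASCADE,
  meaning  TEXT
);

CREATE TABLE examples (
  id          INTEGER PRIMARY KEY,
  meaning_id  INTEGER NOT NULL REFERENCES meanings(id) ON DELETE CASCADE,
  sentence    TEXT
);

-- with that in place, one statement removes the word and everything under it:
DELETE FROM words WHERE id = the_given_id;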

Query to find and replace text in all tables and fields of a mysql db

Posted: 16 Apr 2013 03:02 PM PDT

I need to run a query to find and replace some text in all tables of a mysql database.

I found this query, but it only looks for the text in the tbl_name table and just in the column field.

update tbl_name set column=REPLACE(column, 'fuschia', 'fuchsia');   

I need it to look in all tables and all fields, everywhere in the database.
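Since MySQL has no single built-in statement for this, one workable approach is to generate the per-column UPDATE statements from information_schema and then run the generated script (a sketch; replace mydb with the real schema name):

SELECT CONCAT('UPDATE `', table_name, '` SET `', column_name,
              '` = REPLACE(`', column_name, '`, ''fuschia'', ''fuchsia'');') AS stmt
FROM information_schema.columns
WHERE table_schema = 'mydb'
  AND data_type IN ('char', 'varchar', 'text', 'mediumtext', 'longtext');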

When should a primary key be declared non-clustered?

Posted: 16 Apr 2013 03:44 PM PDT

While creating a test database for another question I asked earlier, I remembered that a primary key can be declared NONCLUSTERED.

When would you use a NONCLUSTERED primary key over a CLUSTERED primary key?
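For reference, the syntax in question (a minimal sketch):

CREATE TABLE dbo.Example (
    ID      int          NOT NULL,
    Payload nvarchar(50) NOT NULL,
    CONSTRAINT PK_Example PRIMARY KEY NONCLUSTERED (ID)
);

Declaring the PK nonclustered leaves the clustered slot free for some other index (or leaves the table a heap).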

Thanks in advance
