Friday, September 6, 2013

[how to] Query_cache doesn't work with join

Query_cache doesn't work with join

Posted: 06 Sep 2013 07:13 PM PDT

I have a simple join query, but for some reason query_cache won't cache it!

SELECT id, news, approve, FIXED, DATE, allow_main
FROM post
LEFT JOIN post_plus ON post.id = post_plus.news_id
WHERE approve = 1 AND allow_main = 1
ORDER BY FIXED DESC, DATE DESC
LIMIT 7, 7;

Without the JOIN post_plus ON (post.id = post_plus.news_id) it works.

query_cache is on

| query_cache_limit            | 10485760  |
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 536870912 |
| query_cache_strip_comments   | OFF       |
| query_cache_type             | ON        |
| query_cache_wlock_invalidate | OFF       |

Server version: 5.6.12-56 Percona Server (GPL), Release rc60.4, Revision 393

Both tables are InnoDB with the utf8 charset.

SSRS switch statement

Posted: 06 Sep 2013 04:17 PM PDT

I have a report with a variable number of columns. I can hide columns with the ismissing function based on an expression like Iif(Fields!Site.ismissing, true, false). However, the hidden columns still show when it's exported to Excel and the main purpose of this report is to be mailed out monthly via SharePoint.

My thought was to create a multivalue column whose value depends on a switch statement:

=Switch(
    Fields!State.IsMissing = True, Fields!Region.Value,
    Fields!Site.IsMissing = True, Fields!State.Value,
    Fields!Site.IsMissing = False, Fields!Site.Value)

The idea is:

  • if the state is missing show the region field
  • if the site is missing show the state field
  • if the site is present show the site field

It works if I run the report at the maximum drill down, which is to say if I run it for a specific state the sites in the state show. But, if I go up levels in the hierarchy I get #error returned. Any help would be appreciated.

Calculating the number of times a group occurs

Posted: 06 Sep 2013 04:17 PM PDT

I'm trying to figure out how many times in my database a particular grouping has occurred. I have two relevant tables.

mysql> describe logins;
+-------------+-------------+------+-----+---------------------+----------------+
| Field       | Type        | Null | Key | Default             | Extra          |
+-------------+-------------+------+-----+---------------------+----------------+
| id          | int(11)     | NO   | PRI | NULL                | auto_increment |
| username    | varchar(10) | NO   | MUL |                     |                |
| login_time  | datetime    | NO   | MUL | 0000-00-00 00:00:00 |                |
| logout_time | datetime    | YES  |     | NULL                |                |
| duration    | int(11)     | YES  |     | NULL                |                |
| location    | varchar(15) | YES  | MUL | NULL                |                |
+-------------+-------------+------+-----+---------------------+----------------+

and

mysql> describe login_apps;
+-------------+-------------+------+-----+---------+----------------+
| Field       | Type        | Null | Key | Default | Extra          |
+-------------+-------------+------+-----+---------+----------------+
| id          | int(11)     | NO   | PRI | NULL    | auto_increment |
| sid         | int(11)     | YES  |     | NULL    |                |
| programName | varchar(63) | YES  |     | NULL    |                |
| duration    | int(11)     | YES  |     | NULL    |                |
+-------------+-------------+------+-----+---------+----------------+

Let's say I want to know what applications were used in a given session. I can use:

mysql> select distinct programName from login_apps where sid="35" and programName!="NULL";
+--------------------+
| programName        |
+--------------------+
| Acrobat  Distiller |
| FireFox            |
| Minitab 16         |
+--------------------+

to get the distinct program names, but I'm interested in how often particular groups of applications are used in a given session. So essentially, how often are these three applications grouped together? Is there a way to get results in the form of

application x-y-z, 12
application x-z, 9
application y-z, 7
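One hedged way to count how often each exact set of programs occurs is to build a per-session application list and then group on it. A minimal sketch, assuming programName is really NULL rather than the literal string "NULL" (adjust the filter otherwise):

-- one row per session with its sorted application list, then count how many sessions share each list
SELECT apps, COUNT(*) AS sessions
FROM (
    SELECT sid,
           GROUP_CONCAT(DISTINCT programName ORDER BY programName SEPARATOR '-') AS apps
    FROM login_apps
    WHERE programName IS NOT NULL
    GROUP BY sid
) AS per_session
GROUP BY apps
ORDER BY sessions DESC;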

MySQL doesn't have write access in data dir?

Posted: 06 Sep 2013 03:16 PM PDT

I am running MySQL on my Windows 7 64-bit development machine at work and I can't seem to get around a database corruption issue. My unit tests tell me that the storage engine (which is InnoDB) is issuing an error message of -1.

In my mysql error log I am reading a log message that says

130906 13:31:04  InnoDB: Operating system error number 5 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory. It may also be you have created a subdirectory
InnoDB: of the same name as a data file.
InnoDB: File name .\databaseName\employee_leastfavoriterealnumber.ibd
InnoDB: File operation call: 'create'.
130906 13:31:04  InnoDB: Error creating file '.\databaseName\employee_leastfavoriterealnumber.ibd'.
130906 13:31:04  InnoDB: Operating system error number 5 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory. It may also be you have created a subdirectory
InnoDB: of the same name as a data file.

but looking through my file permissions it shows that both my account and SYSTEM have full access. I have run mysqld from the command line as well as from a service, restarted my computer, and even reinstalled MySQL in a vain attempt to fix things so I can get back to work. I also changed my my.ini file to see if that would fix it.

I am at a loss as to how to fix this? Did I miss something in my my.ini file that I should change? Should I be looking for gremlins mucking around with my user permissions? Is this the beginning of the end?

Which schema is better for a shopping project?

Posted: 06 Sep 2013 03:06 PM PDT

I'm working on a business-to-customer project that has a variety of product types. There are a few properties, like name, description and brand_id, that every product has, but there are also many specialized properties for different products.

I'm looking for a reasonable solution for handling different types of products, like cell phones and air conditioners. For example, a cell phone has a property called CPU whereas an air conditioner doesn't, and an air conditioner has a property called BTU whereas a cell phone doesn't.

What I want to build is a classic category-based product system. Users will be able to create categories that contain different fields, like CPU or BTU, and when they create a product they will be able to enter values for these fields.

In the SQL world, I see that many projects use a schema like this one:

products        -> (id, name, brand_id, description, category_id)
categories      -> (id, name)
category_fields -> (id, category_id, field_name, field_type)
product_fields  -> (product_id, category_field_id, value)
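For reference, products in this classic layout are usually read back by pivoting through the mapping table. A minimal sketch against the schema above (the product id 42 is illustrative):

-- fetch one product together with all of its category-specific field values
SELECT p.id, p.name, cf.field_name, pf.value
FROM products p
JOIN product_fields pf ON pf.product_id = p.id
JOIN category_fields cf ON cf.id = pf.category_field_id
WHERE p.id = 42;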

However, I wonder whether there is a better solution for this problem. Two different solutions came to my mind.

products        -> (id, name, brand_id, description, category_id, (json) category_values)
categories      -> (id, name)
category_fields -> (id, category_id, field_name, field_type)

As you may have noticed, there is a column called category_values in the products table. This column is of the JSON type (which PostgreSQL has) and I will index the keys that I need to search on.

The other schema:

products            -> (id, name, brand_id, description, category_table)
products_cellphones -> (product_id, memory, cpu, gps ...)
products_...

In this schema, when a user creates a category the system will create a new table with its fields in the database at runtime, and I will create a view that generates the SQL selects on the fly and executes them.

Which way should I choose, or do you have any other solutions?

MSG 666 when running an insert query on 80M-row indexed table

Posted: 06 Sep 2013 02:20 PM PDT

Strangely, my stored procedure started to receive Msg 666 for some input data.

Stored procedure fails on the last step when it tries to insert a row into a table with the following structure:

Columns:
A_Id: PK, int
B_Id: PK, FK, int
C_Id: PK, FK, int
D_Id: PK, smallint

This is essentially a table that connects all referenced entities together. I got this design from the previous developer :)

Indexes:
IX_TableName_D_id - Clustered index on the D_Id column
PK_TableName      - Unique non-clustered index on all columns (A_Id, B_Id, C_Id, D_Id)

Fragmentation for both indexes is low (<25%). However, the fragmentation of PK_TableName grows quickly, since the amount of activity on the table is quite intense.

Table size:

Row count: ~80,000,000 rows  

So, when I try to run a veeery simple query, for some of D_Id's I get the following message:

Msg 666. The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID 422223771074560. Dropping and re-creating the index may resolve this; otherwise, use another clustering key.

Query example:

INSERT INTO TableName (A_Id, B_Id, C_Id, D_Id)
VALUES (1, 1, 1, 14)

For example, when I set D_Id to some values, such as 14, it fails. If I set D_Id to other values (1, 2, 3, ..., 13, 15, 16, ...), the query runs fine.

I suspect there's something really bad going on with the indexes, but I cannot get to the bottom of it... :( Why does it fail?
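For context, Msg 666 means the system-generated uniquifier for a non-unique clustered index has been exhausted for one key value, typically because rows with that same D_Id have been inserted and deleted a very large number of times. A hedged sketch of the two usual remedies, using the index names from the question; rebuilding resets the uniquifiers, while clustering on a unique key avoids them entirely:

-- Rebuilding the clustered index resets the uniquifiers for existing rows.
ALTER INDEX IX_TableName_D_id ON TableName REBUILD;

-- Longer term: make the clustered key unique so no uniquifier is needed at all.
-- DROP INDEX IX_TableName_D_id ON TableName;
-- CREATE UNIQUE CLUSTERED INDEX IX_TableName_D_id
--     ON TableName (D_Id, A_Id, B_Id, C_Id);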

Locking the table so that no transactions can complete

Posted: 06 Sep 2013 01:37 PM PDT

I am using the following procedure to do some duplicate checking before inserting into the master table from the staging table; if there are any duplicate records, they should be moved to the duplicate table instead of the master table.

DELIMITER $$
CREATE PROCEDURE `SP_MOVE_STAGING_TO_MASTER`()
BEGIN
    -- create an index on the staging table so the inner join used to find duplicate records speeds up
    CREATE INDEX ind_staging ON tblstaging (SP);

    -- find duplicate records and insert them into the duplicate table
    INSERT INTO tblspduplicate
    SELECT T2.SP, T1.FileImportedDate, T2.XYZFileName
    FROM tblspmaster T1
    INNER JOIN tblstaging T2 ON T1.SP = T2.SP;

    -- create a temporary table to hold the ids of duplicate records that need to be deleted from staging
    CREATE TEMPORARY TABLE Temp (id int);

    -- insert the duplicate record ids into the temp table
    INSERT INTO Temp
    SELECT T2.CSN
    FROM tblspmaster T1
    INNER JOIN tblstaging T2 ON T1.SP = T2.SP;

    -- delete duplicate records from the staging table
    DELETE FROM tblstaging WHERE CSN IN (SELECT id FROM Temp);

    -- insert the remaining records (after duplicates were deleted) into the master table
    INSERT INTO tblspmaster (SP, FileImportedDate, XYZFileName, XYZBatch, BatchProcessedDate, ExpiryDate, Region, FCCity)
    SELECT T2.SP, T2.FileImportedDate, T2.XYZFileName, T2.XYZBatch, T2.BatchProcessedDate, T2.ExpiryDate, T2.Region, T2.FCCity
    FROM tblstaging T2;

    -- empty tblstaging so there are no old records next time
    TRUNCATE TABLE tblstaging;
    ALTER TABLE tblstaging DROP INDEX ind_staging;
END$$
DELIMITER ;

This stored procedure does not throw any error, but it seems to stay locked indefinitely.

It may be that there's some problem with my staging table schema, so here it is:

CREATE TABLE `tblstaging` (
  `CSN` bigint(20) NOT NULL AUTO_INCREMENT,
  `SP` varchar(50) NOT NULL,
  `FileImportedDate` date NOT NULL,
  `XYZFileName` varchar(50) NOT NULL,
  `XYZBatch` varchar(50) NOT NULL,
  `BatchProcessedDate` date NOT NULL,
  `ExpiryDate` date NOT NULL,
  `Region` varchar(50) NOT NULL,
  `FCCity` varchar(50) NOT NULL,
  `VendorID` int(11) NOT NULL,
  `LocationID` int(11) NOT NULL,
  PRIMARY KEY (`CSN`),
  KEY `ind_staging` (`SP`)
) ENGINE=InnoDB AUTO_INCREMENT=458746 DEFAULT CHARSET=latin1;

There will be millions of records in the master table, and the staging table will hold at most about 800k records at a time.

Export/Import Subset of data in postgres

Posted: 06 Sep 2013 01:04 PM PDT

Using the following query, I'm able to export two tables, selecting 500 items randomly from the first:

copy (select * from (
          select 1 + floor(random() * 1040000)::integer as "documentUUID"
          from generate_series(1, 600) g
          group by 1) r
      join "Document" using ("documentUUID")
      inner join "Citation" on ("Document"."citation_id" = "Citation"."citationUUID")
      limit 500)
to '/tmp/data.sql';

I want to import this data into my testing database, but COPY FROM doesn't seem to be able to handle multiple table imports.

How can I make this work?
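COPY moves exactly one table (or one query result) per statement, so one hedged approach is to materialize the random sample once, export each table to its own file, and load the files separately on the test database. A minimal sketch under that assumption (file names are illustrative, and the tables are restored in dependency order):

-- on the source database
create temporary table sample_docs as
    select distinct 1 + floor(random() * 1040000)::integer as "documentUUID"
    from generate_series(1, 600) g
    limit 500;

copy (select "Document".* from "Document" join sample_docs using ("documentUUID"))
    to '/tmp/document.copy';
copy (select distinct "Citation".*
      from "Citation"
      join "Document" on ("Document"."citation_id" = "Citation"."citationUUID")
      join sample_docs using ("documentUUID"))
    to '/tmp/citation.copy';

-- on the testing database (referenced table first)
copy "Citation" from '/tmp/citation.copy';
copy "Document" from '/tmp/document.copy';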

How do I set up high availability at the instance level? [on hold]

Posted: 06 Sep 2013 12:26 PM PDT

I have two database instances (instance A, instance B) of SQL Server, each one hosted on a separate VM running Windows Server 2012. Instance A has SQL Server 2008 R2, instance B has SQL Server 2012. I'm not a DBA, and I'm completely lost as to how to provide high availability for both of these instances.

Ideally, I'd like someone to point me in the right direction to a solution that allows these instances to automatically fail over gracefully if we have problems, preferably at the instance level, since I have a bunch of databases in each instance. I have enough hardware to host these instances somewhere else on the network, and several gigabits of network bandwidth to get this done.

Please go ahead and assume that I am an idiot.


SQL Server - How to determine ideal memory for instance?

Posted: 06 Sep 2013 11:46 AM PDT

We have some virtual machines that have X memory allocated to them. This amount of memory is somewhat random. It was allocated to the machine because that was the amount of memory the physical machine had, because of a vendor recommendation, or because we threw out a best guess as to how much memory we thought the instance would require.

I understand the more memory the better, but I would also like to avoid over allocating memory to the VM when it isn't necessary. The memory could be better utilized by another machine.

What would be the best way to determine an ideal amount of memory per instance that is actually active and being used? Are there other counters we should be looking at in addition to page life expectancy? We have instances with PLEs of 10k+ and others with 100k+.

Any insight is much appreciated. Thanks, Sam
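One hedged starting point is to read the buffer-manager counters straight from the DMVs on each instance and trend them over time: a PLE that keeps falling under normal load suggests the instance wants more memory, while a PLE that stays enormous suggests it could give some up. A minimal sketch:

SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Free list stalls/sec');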

How to use OR in query with priority?

Posted: 06 Sep 2013 09:56 AM PDT

In a simple query like

SELECT *
FROM table
WHERE id='x' OR id='y'

How can I give one of the conditions priority, so that only one of them is applied?

I mean getting rows with id='x'. Only if there is no row, get row with id='y'.

In other words, do not use id='y' if there is a row for id='x'.
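One hedged way to express "rows for x if any exist, otherwise rows for y" in a single statement (table and column names are taken from the question; `table` stands in for the real table name):

SELECT *
FROM `table`
WHERE id = 'x'
   OR (id = 'y' AND NOT EXISTS (SELECT 1 FROM `table` WHERE id = 'x'));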

Change MySQL database name in SQL file

Posted: 06 Sep 2013 09:36 AM PDT

I have a 2GB MySQL backup file that contains several databases, and I would like to change the name of one of them. What approaches can I use to tackle this? I have used sed, but it ends up making changes where it should not.

sed 's/CREATE DATABASE \/*!32312 IF NOT EXISTS*\/ `mydb`/`mydb2`/g' all_databases.sql > out.sql  

Can SQL update script update all rows even though there is where clause

Posted: 06 Sep 2013 09:16 AM PDT

I ran this script earlier today

update table
set message = 'XXXXXXXx'
where bundleid = 41059
  and bundletype = 4
  and BundleCampaign = '00000000-0000-0000-0000-000000000000'

The result said 4 rows affected, but when I checked, the entire table had been updated: the whole message column was overwritten even though only 4 rows satisfy the WHERE condition.

Has anyone ever faced this before? I cannot seem to find a way to explain what would have caused this. Any help would be great.

MySQL : Extract SQL statements from binlog

Posted: 06 Sep 2013 11:08 AM PDT

I am trying the following command to extract SQL statements from binlog for using it in another database (Amazon RDS)

mysqlbinlog /var/lib/mysql/binlog.000027 --start-position=54375264 --base64-output=NEVER > /home/ec2-user/output2.sql  

But I got the following error:

ERROR: --base64-output=never specified, but binlog contains a Table_map event which must be printed in base64.  

I read in the documentation that mysqlbinlog exits with an error if a row event is found that must be displayed using BINLOG. Is there any workaround to extract SQL statements from the binlog?


Update 1: I ran the command again with the --verbose option, but I did not get any output on the command line

[root@server ec2-user]# mysqlbinlog /var/lib/mysql/binlog.000027 --start-position=54375264 --base64-output=DECODE-ROWS --verbose > /home/ec2-user/output4.sql
[root@server ec2-user]#
[root@server ec2-user]#

When I browse output4.sql, I can see some valid SQL plus some commented SQL like this:

#130906 14:54:55 server id 1  end_log_pos 269447893     Query   thread_id=1221718733    exec_time=0 error_code=0
SET TIMESTAMP=1378479295/*!*/;
BEGIN
/*!*/;
# at 269447893
# at 269447950
#130906 14:54:55 server id 1  end_log_pos 269447950     Table_map: `dbname`.`tablename` mapped to number 102
#130906 14:54:55 server id 1  end_log_pos 269448102     Update_rows: table id 102 flags: STMT_END_F
### UPDATE `dbname`.`tablename`
### WHERE
###   @1='edited by me'
###   @2=0
###   @3=1373178370
### SET
###   @1='edited by me'
###   @2=0
###   @3=1378479295
# at 269448102
#130906 14:54:55 server id 1  end_log_pos 269448162     Query   thread_id=1221718733    exec_time=0 error_code=0

Update 2 : I am following the tutorial here https://engineering.gosquared.com/migrating-mysql-to-amazon-rds

I want to update Amazon RDS from the output of the binlog. Currently Amazon RDS does not allow applying a binlog directly, so I am trying to extract the SQL statements from the binlog so I can update Amazon RDS. Is there any way to update Amazon RDS from a binlog?

How to drop SQL Server database currently in use and in Single user mode

Posted: 06 Sep 2013 11:35 AM PDT

I have a Database on SQL Server 2008, which I want to drop. Currently it is in single user mode and it is currently in use.

select * from sys.sysprocesses  

returns

Msg 924, Level 14, State 1, Line 1
Database 'G_MAIN_DE' is already open and can only have one user at a time.

and I do not know how to identify the session I have to kill.

An attempt to set it offline

ALTER DATABASE G_MAIN_DE SET OFFLINE WITH ROLLBACK IMMEDIATE  

yields

Msg 5064, Level 16, State 1, Line 1
Changes to the state or options of database 'G_MAIN_DE' cannot be made at this time. The database is in single-user mode, and a user is currently connected to it.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
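A hedged sketch of one way out: the single connected session will be holding at least a shared DATABASE lock, so it can be located through sys.dm_tran_locks, killed, and the database dropped. The session id 52 below is illustrative.

SELECT DISTINCT request_session_id
FROM sys.dm_tran_locks
WHERE resource_type = 'DATABASE'
  AND resource_database_id = DB_ID('G_MAIN_DE');

KILL 52;   -- replace with the session id returned above
DROP DATABASE G_MAIN_DE;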

inner joins with where clause for no values?

Posted: 06 Sep 2013 09:22 AM PDT

I have 2 tables :

Table1

  Parent(varchar) Child (varchar)  

Table2

  C1(varchar)    PC (varchar)  

Sample data:

(Sample data for both tables was shown as an image in the original post.)

Requirement: I need the Table2.C1 values for which Table2.PC = Table1.Child, but those Child values must not appear among Table1.Parent's values.

I'm using the query below in MySQL:

select distinct(C1), Child, PC
from Table2
inner join Table1 on Table2.PC = Table1.Child
where Table1.Child not in (select Parent from Table1);

It returns an empty set, even though there are Child values that match PC and do not appear in Parent.

Where am I going wrong?
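A hedged guess at the cause: if the Parent column contains any NULL, "NOT IN (subquery)" can never evaluate to true, so the whole result is empty. Filtering NULLs out of the subquery, or rewriting it as an anti-join, avoids that:

select distinct Table2.C1, Table1.Child, Table2.PC
from Table2
inner join Table1 on Table2.PC = Table1.Child
where Table1.Child not in (select Parent from Table1 where Parent is not null);

-- equivalent anti-join form
select distinct Table2.C1, T1.Child, Table2.PC
from Table2
inner join Table1 as T1 on Table2.PC = T1.Child
left join Table1 as P on P.Parent = T1.Child
where P.Parent is null;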

PostgreSql allocate memory for each connection

Posted: 06 Sep 2013 09:10 AM PDT

When we configure the memory settings for a Postgres DB, what is the recommended memory allocation for each connection? Is there a formula to apply?

I know about 25% of the server's memory should be allocated. But how do we allocate based on the number of DB connections? How do we know what the maximum number of connections should be?

Also, in a multi-node environment, can we allocate more connections for each node (max connections in postgres-ds.xml) than are actually allowed in the DB?

Combining multiple left join for one single line

Posted: 06 Sep 2013 10:50 AM PDT

We have a query as below.

SELECT tblSData.header, tblEAlert.eMessage
FROM tblSData
LEFT JOIN tblEAlert ON tblSData.sDataID = tblEAlert.sDataID
WHERE tblSData.aID=".$aID."
ORDER BY tblSData.dateTimer ASC

The structure of the tables is as below:

CREATE TABLE IF NOT EXISTS `tblSData` (
  `sDataID` int(11) NOT NULL AUTO_INCREMENT,
  `header` varchar(255) NOT NULL,
  `aID` int(5) NOT NULL,
  `gFInID` int(5) NOT NULL,
  `gFOutID` int(5) NOT NULL,
  `gFAInID` int(5) NOT NULL,
  `gFAOutID` int(5) NOT NULL,
  `eAlertID` int(5) NOT NULL,
  `dateTimer` datetime NOT NULL,
  PRIMARY KEY (`sDataID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

CREATE TABLE IF NOT EXISTS `tblEAlert` (
  `eAlertID` int(11) NOT NULL AUTO_INCREMENT,
  `sDataID` int(5) NOT NULL,
  `eID` int(5) NOT NULL,
  `aID` int(5) NOT NULL,
  `eDateTime` datetime NOT NULL,
  `eInsertDateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `eMessage` varchar(255) NOT NULL,
  PRIMARY KEY (`eAlertID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

Sample data for tblSData:

1,"A1",1122,100,102,1,2,1,2013-07-13 15:30:19
2,"A3",1122,104,103,3,4,3,2013-07-13 15:45:19
3,"A4",1122,105,108,5,6,7,2013-07-13 15:55:19

Sample data for tblEAlert:

1,1,1,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 1"
2,1,2,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 2"
3,2,2,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 3"
4,2,3,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 4"
5,2,4,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 5"
6,2,5,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 6"

Sample output:

"A1","Alert 1"  "A1","Alert 2"  "A2","Alert 3"  "A2","Alert 4"  "A2","Alert 5"  "A2","Alert 6"  

The problem now is how to show, for example for A1, all of the alert messages on one line (and likewise for A2, etc.). At the moment each alert message is on a different line.
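GROUP_CONCAT is the usual MySQL tool for this. A hedged sketch against the tables above, assuming one output row per header is wanted (the separator and the literal aID value 1122 are illustrative):

SELECT tblSData.header,
       GROUP_CONCAT(tblEAlert.eMessage ORDER BY tblEAlert.eAlertID SEPARATOR ', ') AS eMessages
FROM tblSData
LEFT JOIN tblEAlert ON tblSData.sDataID = tblEAlert.sDataID
WHERE tblSData.aID = 1122
GROUP BY tblSData.header
ORDER BY MIN(tblSData.dateTimer) ASC;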

psql, record separators in the data

Posted: 06 Sep 2013 11:20 AM PDT

I want to use psql to list all of the databases on a Postgres server, to be parsed by a script. This command lists them:

psql -l -A -t  

but the output shows an obvious issue: the records are separated by newlines, but also contain newlines.

$ psql -l -A -t
postgres|postgres|UTF8|en_CA.UTF-8|en_CA.UTF-8|
template0|postgres|UTF8|en_CA.UTF-8|en_CA.UTF-8|=c/postgres
postgres=CTc/postgres
template1|postgres|UTF8|en_CA.UTF-8|en_CA.UTF-8|=c/postgres
postgres=CTc/postgres

Using the -R option I can change the record separator, but it seems like no matter what I change it to, there's the risk of that string appearing in the data. Is it possible to instead tell psql to replace the newlines in the data with something else? (and then what if that string also appears in the data?)

I also tried to set the record separator to a null character with sequences such as -R '\000' and -R "\0", but psql doesn't seem to interpret escape sequences in the parameter at all and just uses the literal string \000 instead.

The other option I know of to list all databases is:

psql --quiet --no-align --tuples-only --dbname=postgres --username=postgres --host=127.0.0.1 --port=5432 --command="SELECT datname FROM pg_database"  

but that requires me to give the password for the postgres user, so it's not desirable. Perhaps there's another way to get a list of the names of all databases?
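For what it's worth, the query itself can be trimmed to just the database names, which avoids the multi-line ACL column entirely (how to authenticate is a separate concern, e.g. a .pgpass file or local peer authentication):

SELECT datname
FROM pg_database
WHERE NOT datistemplate
ORDER BY datname;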

Incorrect information in .frm file after a power outage?

Posted: 06 Sep 2013 09:55 AM PDT

  • CentOS release 6.3 (Final)
  • x86_64
  • mysql Ver 14.14 Distrib 5.5.31, for Linux (x86_64) using readline 5.1

SHOW SLAVE STATUS\G:

             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1696
                   Last_Error: Error executing row event: 'Failed to read from the .par file'

/var/log/mysqld.log:

130706  2:26:04 [ERROR] /usr/libexec/mysqld: Incorrect information in file: './reportingdb/v3_ban_date.frm'
130706  2:26:04 [ERROR] Slave SQL: Error executing row event: 'Failed to read from the .par file', Error_code: 1696
130706  2:26:04 [Warning] Slave: Failed to read from the .par file Error_code: 1696
130706  2:26:04 [Warning] Slave: Incorrect information in file: './reportingdb/v3_ban_date.frm' Error_code: 1033

v3_ban_date is a partitioned table. The .par file is... empty:

# ls -l v3_ban_date.par
-rw-rw---- 1 mysql mysql 0 Jul  6 01:54 v3_ban_date.par

and this is the .frm format (hexdump -v -C v3_ban_date.frm):

|partition.... PA|
|RTITION BY RANGE|
| (TO_DAYS(dt)).(|
|PARTITION p00 VA|
|LUES LESS THAN (|
|0) ENGINE = Inno|
|DB,. PARTITION p|
|1 VALUES LESS TH|
|AN (734653) ENGI|
|NE = InnoDB,. PA|
|RTITION p02 VALU|
|ES LESS THAN (73|
|4745) ENGINE = I|
|nnoDB,. PARTITIO|
|N p031 VALUES LE|
|SS THAN (734863)|
| ENGINE = InnoDB|
|,. PARTITION p04|
| VALUES LESS THA|
|N (734959) ENGIN|
|E = InnoDB,. PAR|
|TITION p05 VALUE|
|S LESS THAN (735|
|103) ENGINE = In|
|noDB,. PARTITION|
| p06 VALUES LESS|
| THAN (735160) E|
|NGINE = InnoDB,.|
| PARTITION p07 V|
|ALUES LESS THAN |
|(735210) ENGINE |
|= InnoDB,. PARTI|
|TION MERGER_2013|
|324 VALUES LESS |
|THAN (735316) EN|
|GINE = InnoDB,. |
|PARTITION pcurre|
|nt_2013324 VALUE|
|S LESS THAN (735|
|317) ENGINE = In|
|noDB,. PARTITION|
| MERGER_201375 V|
|ALUES LESS THAN |
|(735419) ENGINE |
|= InnoDB,. PARTI|
|TION pcurrent_20|
|1375 VALUES LESS|
| THAN (735420) E|
|NGINE = InnoDB).|

The partition definitions on the first slave look like this:

00000000  1f 00 00 00 74 00 01 07  0c 00 00 00 0c 0c 0c 0c  |....t...........|
00000010  0c 0c 0c 0c 0c 0c 0c 0c  60 00 00 00 70 30 30 00  |........`...p00.|
00000020  70 31 00 70 30 32 00 70  30 33 31 00 70 30 34 00  |p1.p02.p031.p04.|
00000030  70 30 35 00 70 30 36 00  70 30 37 00 4d 45 52 47  |p05.p06.p07.MERG|
00000040  45 52 5f 32 30 31 33 33  32 34 00 70 63 75 72 72  |ER_2013324.pcurr|
00000050  65 6e 74 5f 32 30 31 33  33 32 34 00 4d 45 52 47  |ent_2013324.MERG|
00000060  45 52 5f 32 30 31 33 38  32 38 00 70 63 75 72 72  |ER_2013828.pcurr|
00000070  65 6e 74 5f 32 30 31 33  38 32 38 00              |ent_2013828.|
0000007c

I've tried to copy this file to the second slave, then used bvi to edit the date to 201375 (corresponding with what is in the .frm file), then tried to start the second slave, but it didn't work.

So the questions are:

  1. How do I know what information is incorrect in the .frm file?
  2. Can I re-create the .par file from the one on the first slave?

PS: I know I can rebuild the second slave but I think this is an interesting challenge.

How to access a SQL Server database from other computer connected to the same workgroup?

Posted: 06 Sep 2013 10:31 AM PDT

I have created a C# application which uses a SQL Server database. I have other computers connected to me and to each other in a workgroup. I have shared my C# application with others. When they open the application they get the error

A network related or instance-specific error occured while establishing a connection to SQL Server. the server was not found or was not accessible

But the application is working fine on my PC. The connection string I am using is

Data Source=ASHISHPC1\SQLEXPRESS;Initial Catalog=ACW;User ID=ash159;Password=ashish159  

which is stored in a .config file.

The application works fine on my PC. What must I do? I have enabled TCP/IP on the server, but the same error persists. Is some change needed in the connection string, or something else?

Please help.. Thank you..

Query plan changes depending on filter values

Posted: 06 Sep 2013 01:17 PM PDT

I created a clustered index on a table expecting it to make queries with ranges perform better, but different values in the where clause can produce different query plans: one uses the clustered index and one does not.

My question is: What can I do to make the DBMS use the better query plan? Or better yet, should I change my schema to something better?

Details:

  • I'm using Amazon's RDS (Server version: 5.5.31-log)
  • I executed optimize table on each table (expecting it to "rebuild" the clustered index and reset the statistics); sometimes it does not change anything, sometimes the DBMS will use worse query plans, and sometimes it makes things faster because the clustered index gets used.
  • explain extended followed by show warnings did not produce anything interesting/useful
  • I'm aware of index hinting. I tested it and the query plan used the primary index, but I don't know whether it always works; also, I'm using Django, and Django's ORM does not support index hinting, so a solution that does not require it would be nice (a small FORCE INDEX sketch follows the execution plans below).

Tables:

-- DDL
create table api_route (
    id int(11) not null auto_increment primary key,
    origin_id int(11) not null,
    destination_id int(11) not null,
    group_id int(11) not null,
    foreign key (origin_id) references api_area (id),
    foreign key (destination_id) references api_area (id),
    foreign key (group_id) references api_group (id)
) engine=innodb, collate=utf8;

create table api_area (
    id int(11) not null auto_increment primary key,
    name varchar(50) not null
) engine=innodb, collate=utf8;

create table api_location (
    id int(11) not null auto_increment primary key,
    area_id int(11),
    foreign key (area_id) references api_area (id)
) engine=innodb, collate=utf8;

create table api_locationzip (
    location_ptr_id int(11) not null,
    zip_start int(11) not null,
    zip_end int(11) not null,
    foreign key locationzip_to_location (location_ptr_id) references api_location (id),
    constraint primary key using btree (zip_start, zip_end, location_ptr_id)
) engine=innodb, collate=utf8;

create table api_locationkey (
    location_ptr_id int(11) not null,
    key varchar(10) not null,
    foreign key locationkey_to_location (location_ptr_id) references api_location (id)
) engine=innodb, collate=utf8;

Query:

An area has many locations, every location has either a zip or key.

select * from
    api_route,
    api_area origin,
    api_area destination,
    api_location location_origin,
    api_location location_destination,
    api_locationzip origin_zip,
    api_locationzip destination_zip
where
    api_route.group_id IN (1,2,3,...) and
    -- filter origin by zip code
    api_route.origin_id = origin.id and
    origin.id = location_origin.area_id and
    location_origin.id = origin_zip.location_ptr_id and
    origin_zip.zip_start <= <zipcode_origin> and
    origin_zip.zip_end >= <zipcode_origin> and
    -- filter destination by zip code
    api_route.destination_id = destination.id and
    destination.id = location_destination.area_id and
    location_destination.id = destination_zip.location_ptr_id and
    destination_zip.zip_start <= <zipcode_destination> and
    destination_zip.zip_end >= <zipcode_destination>
limit 100

Execution plans:

Here is an explain of a slow query (~1.6s):

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination
         type: index
possible_keys: PRIMARY
          key: api_area_group_id_599f0627e68b9613_uniq
      key_len: 156
          ref: NULL
         rows: 3794
        Extra: Using index
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: api_route
         type: ref
possible_keys: api_route_0261d0a2,api_route_8de262d6
          key: api_route_8de262d6
      key_len: 4
          ref: master.T6.id
         rows: 9
        Extra:
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: master.api_route.origin_id
         rows: 1
        Extra:
*************************** 4. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_origin
         type: ref
possible_keys: PRIMARY,api_location_a4563695
          key: api_location_a4563695
      key_len: 4
          ref: master.origin.id
         rows: 39
        Extra: Using where; Using index
*************************** 5. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin_zip
         type: ref
possible_keys: PRIMARY,locationzip_to_location
          key: locationzip_to_location
      key_len: 4
          ref: master.location_origin.id
         rows: 1
        Extra: Using where; Using index
*************************** 6. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_destination
         type: ref
possible_keys: PRIMARY,api_location_a4563695
          key: api_location_a4563695
      key_len: 4
          ref: master.destination.id
         rows: 39
        Extra: Using index
*************************** 7. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination_zip
         type: ref
possible_keys: PRIMARY,locationzip_to_location
          key: locationzip_to_location
      key_len: 4
          ref: master.location_destination.id
         rows: 1
        Extra: Using where; Using index
7 rows in set (0.00 sec)

Here is the explain of a fast query (~100ms):

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination_zip
         type: range
possible_keys: PRIMARY,locationzip_to_location
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 119268
        Extra: Using where; Using index
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_destination
         type: eq_ref
possible_keys: PRIMARY,api_location_a4563695
          key: PRIMARY
      key_len: 4
          ref: master.destination_zip.location_ptr_id
         rows: 1
        Extra:
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: api_route
         type: ref
possible_keys: api_route_0261d0a2,api_route_8de262d6
          key: api_route_8de262d6
      key_len: 4
          ref: master.location_destination.area_id
         rows: 9
        Extra:
*************************** 4. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: master.api_route.origin_id
         rows: 1
        Extra:
*************************** 5. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_origin
         type: ref
possible_keys: PRIMARY,api_location_a4563695
          key: api_location_a4563695
      key_len: 4
          ref: master.origin.id
         rows: 39
        Extra: Using where; Using index
*************************** 6. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin_zip
         type: ref
possible_keys: PRIMARY,locationzip_to_location
          key: locationzip_to_location
      key_len: 4
          ref: master.location_origin.id
         rows: 1
        Extra: Using where; Using index
*************************** 7. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: master.location_destination.area_id
         rows: 1
        Extra:
7 rows in set (0.00 sec)

Edit: Added create table code and full query
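As a hedged illustration of the index hinting mentioned in the bullet list above (not a definitive fix, since whether it helps depends on the data distribution): MySQL's FORCE INDEX can push the zip-range lookup onto the clustered primary key, mirroring what the faster plan does. The zip code 12345 is illustrative.

SELECT location_ptr_id
FROM api_locationzip FORCE INDEX (PRIMARY)
WHERE zip_start <= 12345
  AND zip_end >= 12345;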

How to build a database that contains only the delta from yesterday

Posted: 06 Sep 2013 07:18 PM PDT

I need to know what has changed in my database since last night. Is it possible to extract this data from the LDF file and to build a new database that contains the delta?

For example, let's say I have a table for users, and overnight a new user was added and one of the existing users updated his home address. I need to be able to build a new database whose users table will contain two records: 1. the new user (with a new column indicating whether the row is an insert or an update), and 2. the user who updated his record (it would be nice to know which record was updated).

BTW, I have two SQL Servers that I can use (2008 and 2012).

Thanks In Advance
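One hedged option, assuming an Enterprise edition is available: Change Data Capture (SQL Server 2008+) records inserts, updates and deletes per table into change tables that can be queried later to build the delta. A minimal sketch; dbo.Users is a hypothetical table name.

-- enable CDC for the database and for one table
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Users',
    @role_name     = NULL;

-- later, read what was captured; __$operation is 2 for inserts, 1 for deletes,
-- and 3/4 for the before/after images of updates
SELECT __$operation, *
FROM cdc.dbo_Users_CT;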

Why does Log Shipping .TRN file copy just stop

Posted: 06 Sep 2013 02:18 PM PDT

I apologize in advance for a long post, but I have had it up to here with this error and with having to delete the LS configuration and start it over for any DB that gets it.

I have LS set up on 3 Win2k8R2 servers (primary, secondary, monitor) with transaction log backups for 100 databases shipped from the primary to the secondary and monitored by the monitor. Backups and copies run every 15 minutes, and files older than 24 hours are then deleted. Some DBs are very active and some not so much, but all are shipped regardless, for uniformity's sake (basically to make the secondary server identical to the primary). Some DBs are for SP2010 and the majority are for an in-house app.

The issue is that after all the LS configs are set up, everything works well for about 3 to 4 days; then, when I go to the Transaction LS Status report on the secondary, I see that randomly some LS jobs have an alert status because the time since the last copy is over 45 minutes, so no restore has occurred. This seems random, and the only errors I see are from an SP2010 DB (WebAnalyticsServiceApplication_ReportingDB_77a60938_##########), which I believe is a reports DB that gets created weekly, and LS cannot figure out which is the last copy to back up or to restore. I posted here regarding that and have yet to find a permanent solution. For my main error (time since last copy) I have not seen anything that could have caused it, and I don't get any messages (even though some alert statuses have been ignored for 3 days). Anyway, I would really appreciate any input on understanding what's causing this and how I could fix it. Thanks.

Is it a bad practice to always create a transaction?

Posted: 06 Sep 2013 09:34 AM PDT

Is it a bad practice to always create a transaction?

For example, is it good practice to create a transaction for nothing but one simple SELECT?

What is the cost of creating a transaction when it is not really necessary?

Even if you are using an isolation level like READ UNCOMMITTED, is it a bad practice?

Oracle schema import is not importing all the tables present in the schema dump file

Posted: 06 Sep 2013 10:18 AM PDT

I have exported an existing Oracle schema from another machine and then imported it on my local machine. The import was successful, but some tables which are present in the export dump file were not imported.

Here are the export and import commands I used:

Export Command:
---------------
exp sreeni/sreeni@agentrics1:1524/ezmodc full=n file=SreeniTrunkDump.dmp log=SreeniTrunkDump_Export.log

Import Command:
---------------
imp badri/badri@localhost:1521/xe file=SreeniTrunkDump.dmp log=BadriSchemaImport_Import.log full=y

The Oracle we are using is 10g EE.

What could be going wrong? Can you please suggest a solution to this issue?

Selecting with varbinary(max) criteria (in the where clause)

Posted: 06 Sep 2013 10:58 AM PDT

Basic info

  • Database: SQL Server Express 2008 R2
  • Client: SQL Server Management Studio

Background (skip if not interested):

A project I'm maintaining uses an ORM, which apparently stored my enum values (which inherit from Byte) as binary-serialized .NET objects in a varbinary(max) field. I only found out this was happening after a new requirement emerged dictating that my code run under medium trust. Since the .NET binary formatter needs full trust to be called, it started crashing on the enums.

To clean the mess up I need to create migration scripts that will convert these (varbinary(max)) values back to integer values. There are only a handful of different values so it shouldn't be a big problem (I thought).

The problem:

I am able to get string representations of the blobs when selecting:

SELECT BinValue FROM MyTable where Type = 'Object';  

It returns a string '0x...(an array of hexadecimal values)'.

But when I try to select on the column using copy-and-paste making sure I have the exact binary equivalent:

SELECT ItemId FROM MyTable WHERE Value=convert(varbinary(max), '0x...')  

it does not return any records.

So is it possible to solve this using a client (such as Management Studio)?

If so, what would be the correct syntax?
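A hedged note on the likely cause: with the default style, CONVERT(varbinary(max), '0x...') encodes the characters of the string rather than interpreting the hex digits, so the result never matches the stored value. Two forms that should work from Management Studio (the byte string 0x0001FF is illustrative):

-- paste the value as a binary literal, without quotes
SELECT ItemId FROM MyTable WHERE Value = 0x0001FF;

-- or keep it as a string and use conversion style 1, which parses the '0x...' text as hex
SELECT ItemId FROM MyTable WHERE Value = CONVERT(varbinary(max), '0x0001FF', 1);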
