Tuesday, August 6, 2013

[how to] Different social network user information

Different social network user information

Posted: 06 Aug 2013 07:17 PM PDT

I am designing a website with social network login from various sites. The requirement is to make it easy to add new social networks to the list of supported login methods. I want to store information about users from the social networks they log in from in a relational database such as MySQL, but this information differs from network to network.

My question is: should I set up my relational database as follows:

Table Name           | Column Names
------------------------------------
users                | id, name, dob, username
user_social_network  | user_id, social_network_id
social_network       | id, social_network_name
facebook             | user_id, (information about facebook, i.e. profile pic, interests)
twitter              | user_id, (information about twitter)
google_plus          | user_id, (information about google plus)

Although I am quite new to this, it seems that I will need to do two DB queries:

1. Find out which social network the user uses
2. Query the corresponding table for the social network information

I have read that using joins is much more efficient than sending multiple queries. Is there any other way to lay this out or is this the best way?
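For what it's worth, a single query with outer joins could pull the per-network rows in one round trip. A rough sketch against the tables proposed above (the per-network columns such as profile_pic and twitter_handle are hypothetical placeholders):

```sql
-- Sketch only: one round trip instead of two, using the proposed layout.
-- LEFT JOINs return NULLs for networks the user has not linked.
SELECT u.id, u.name, u.username,
       sn.social_network_name,
       f.profile_pic,        -- hypothetical facebook column
       t.twitter_handle      -- hypothetical twitter column
FROM users u
JOIN user_social_network usn ON usn.user_id = u.id
JOIN social_network sn       ON sn.id = usn.social_network_id
LEFT JOIN facebook f         ON f.user_id = u.id
LEFT JOIN twitter t          ON t.user_id = u.id
WHERE u.id = 42;
```

The downside of this shape is that every new social network means touching the query; whether that trade-off is worth one round trip depends on how often networks are added.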

Any help will be much appreciated! Thanks in advance.

How does security context propagate between schemas within a database?

Posted: 06 Aug 2013 06:49 PM PDT

We have a SQL Server 2008 database that contains two schemas. The owner of each schema has been granted full SELECT permission on all tables the other owns.

I have created a user ID and have granted it access only to SQL views created in both of these schemas, but not to the underlying tables. In the example below I've created two views, one in each schema, that select data from one table in one of the schemas.

It works if I execute the view as my new user in the schema that contains the underlying table, but not if I try from the other schema. However, if I log in as the schema owner it works regardless of the schema I execute the view from.

How does security propagate between schemas within a single database?

Here is a rough example of what I am trying to do. Please let me know if I can explain this better, as I'm not remotely close to being a DBA.

USE [DEV_DATABASE]

-- Create tables and insert dummy data
CREATE TABLE [DEV_DATABASE].[TESTDATA].[UserData]
(
    COL1 CHAR(10),
    COL2 CHAR(10),
    COL3 CHAR(10),
    COL4 CHAR(10),
    COL5 CHAR(10)
);

INSERT INTO [DEV_DATABASE].[TESTDATA].[UserData] VALUES ('1','2','3','4','5');
INSERT INTO [DEV_DATABASE].[TESTDATA].[UserData] VALUES ('11','22','33','44','55');

CREATE TABLE [DEV_DATABASE].[TESTCTRL].[ControlData]
(
    CTRL1 CHAR(10),
    CTRL2 CHAR(10),
    CTRL3 CHAR(10),
    CTRL4 CHAR(10),
    CTRL5 CHAR(10)
);

INSERT INTO [DEV_DATABASE].[TESTCTRL].[ControlData] VALUES ('a','b','c','d','e');
INSERT INTO [DEV_DATABASE].[TESTCTRL].[ControlData] VALUES ('aa','bb','cc','dd','ee');
GO

-- Create views
CREATE VIEW [TESTDATA].[DATA_TEST_VIEW] AS
SELECT COL1, COL2, COL3
FROM [TESTDATA].[UserData]
GO

CREATE VIEW [TESTCTRL].[CTRL_TEST_VIEW] AS
SELECT COL1, COL2, COL3
FROM [TESTDATA].[UserData]
GO

-- Permissions
DENY SELECT ON [TESTDATA].[UserData] TO JOEUSER;
GRANT SELECT ON [TESTDATA].[DATA_TEST_VIEW] TO JOEUSER;
GRANT SELECT ON [TESTCTRL].[CTRL_TEST_VIEW] TO JOEUSER;

EXECUTE AS USER = 'JOEUSER';

-- Tests
SELECT * FROM [TESTDATA].[UserData];        -- fails as expected
SELECT * FROM [TESTDATA].[DATA_TEST_VIEW]; -- passes as expected
SELECT * FROM [TESTCTRL].[CTRL_TEST_VIEW]; -- fails, and should pass?

REVERT;

Error message received:

Msg 229, Level 14, State 5, Line 1
The SELECT permission was denied on the object 'UserData', database 'DEV_DATABASE', schema 'TESTDATA'.

PAGELOCK Conversion Deadlock From Update to Intent Exclusive

Posted: 06 Aug 2013 03:23 PM PDT

A customer of ours is experiencing frequent deadlocks. The deadlocks are mostly on the same UPDATE statement. The deadlocks follow a pattern in which both SPIDs have acquired an update (U) lock on a page, and both try to up-convert the U page lock to an intent exclusive (IX) lock. Sometimes there is only one page involved; sometimes several.

We captured a deadlock trace using the trace flag 1222. The SQL Server log shows many, many deadlocks with the following pattern (in bottom-to-top order):

waiter id=processe0dc2088 mode=U requestType=convert
waiter id=process2f9db2478 mode=U requestType=convert
waiter-list
owner id=processe0dc2088 mode=IX
owner id=process2f9db2478 mode=IX
owner-list
pagelock fileid=1 pageid=5794 dbid=2 objectname=tempdb.dbo.Item_Package_Site_Costs_Work id=lock1b22de480 mode=IX associatedObjectId=72057594131775488
resource-list

Both processes are running the same UPDATE statement to set a flag on this tempdb table. This tempdb table holds information that needs to persist between client calls until the client is done. The table has a fairly long index that starts with a GUID representing a unique process ID.

I am having difficulty understanding and simulating this deadlock. I have tried various amounts of records with simulated data.

My questions:

Why are these processes acquiring U locks and then converting to IX? I would expect the UPDATE to acquire IX locks to begin with.

How can I prevent the deadlock?

The statement causing the deadlock is below. The process has just done a lookup of costs for a list of items at a single store. It is trying to note that there was a cost found.

Note that there is a deprecated (NOLOCK) hint on the table being updated. Could this be a contributing factor?

UPDATE tempdb..Item_Package_Site_Costs_Work
SET ItemPkgSiteCost_VINCostFound = 1,
    ItemPkgSiteCost_VendCost_Key = SiteCosts_VendCost_Key
FROM tempdb..Item_Package_Site_Costs_Work (NOLOCK)
    INNER JOIN #SiteCosts
        ON ItemPkgSiteCost_GUID = @ProcGUID
        AND SiteCosts_Item_Type = 0 -- Standard
        AND ItemPkgSiteCost_Site_Key = SiteCosts_Input_Site_Key
        AND ItemPkgSiteCost_Item_Key = SiteCosts_Item_Key
        AND ItemPkgSiteCost_ItemPkg_Key = SiteCosts_Input_Sel_ItemPkg_Key
        AND ItemPkgSiteCost_VendItem_Key = SiteCosts_VendItem_Key
        AND ISNULL(ItemPkgSiteCost_Qty_Recv, 1) = SiteCosts_Input_Qty_Recv

The customer's server @@version is:

Microsoft SQL Server 2005 - 9.00.4035.00 (X64) Nov 24 2008 16:17:31 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)

So far I have not been able to capture the query plan used at the time of the deadlock, and the normal ways I try to retrieve the query plan are not returning anything (sys.dm_exec_query_plan and sys.dm_exec_text_query_plan both return NULL).

How can lots of ram (128 GB) be used to improve MySQL 5.5 index creation?

Posted: 06 Aug 2013 02:11 PM PDT

This is similar to another post of mine, but now I have a different table structure, and it is still unclear which MySQL parameters should be changed to take advantage of the extra RAM my machine has. If this seems like a duplicate, let me know how best to rephrase what I'm asking.

I have the following table, with 50 billion records:

CREATE TABLE mytable (id1 INT, id2 INT, score FLOAT) ENGINE=MyISAM;

I plan to add an index on the first column (ALTER TABLE mytable ADD INDEX myindex (id1)), and I am wondering which MySQL parameters could be changed to take advantage of this extra memory (e.g. buffers)?

Note: the engine type does not need to be MyISAM if that makes a difference.
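As a hedged sketch, the MyISAM-specific variables that usually matter for a bulk index build are the key cache and the repair-by-sort buffers; the sizes below are illustrative only, not recommendations for this workload:

```sql
-- Assumption: MyISAM on MySQL 5.5; ALTER ... ADD INDEX builds the index
-- via repair-by-sort, which is governed by the myisam_* buffers.
SET GLOBAL key_buffer_size = 32 * 1024 * 1024 * 1024;           -- cache for index blocks
SET SESSION myisam_sort_buffer_size = 4 * 1024 * 1024 * 1024;   -- in-memory sort buffer for the build
SET SESSION myisam_max_sort_file_size = 500 * 1024 * 1024 * 1024; -- allow the temp sort file to grow

ALTER TABLE mytable ADD INDEX myindex (id1);
```

If the engine were switched to InnoDB instead, the knob of interest would be innodb_buffer_pool_size rather than the MyISAM buffers.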

how to make a particular field into separate columns?

Posted: 06 Aug 2013 03:17 PM PDT

Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production.

I have this table in the below format:

No | User | Value
01 | Port | Funds
01 | Vip1 | Systems
02 | Port | Bank
02 | Vip1 | Authority

This is how I want it:

No | Port  | Vip1
01 | Funds | Systems
02 | Bank  | Authority

Now the problem is, in this table, the User column has 6 other entries besides Port and Vip1. So I would want 6 columns and their respective values. I also want a query that I can use in other similar tables that have different User column entries. This is what I was trying to do without any success:

SELECT
    No,
    CASE user WHEN 'Port' THEN Value ELSE NULL END AS Port,
    CASE user WHEN 'Vip1' THEN Value ELSE NULL END AS Vip1
FROM table1

Please let me know what you think.
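For reference, the CASE approach above generally needs an aggregate and a GROUP BY to collapse the per-User rows into a single row per No; without it, each source row produces its own mostly-NULL output row. A sketch, with column names assumed from the sample data:

```sql
-- Sketch: MAX() keeps the single non-NULL value per No.
-- "USER" is quoted because USER is a reserved word in Oracle.
SELECT no,
       MAX(CASE "USER" WHEN 'Port' THEN value END) AS port,
       MAX(CASE "USER" WHEN 'Vip1' THEN value END) AS vip1
FROM table1
GROUP BY no;
```

To reuse this on similar tables with different User entries, the list of CASE branches is what changes; Oracle 11g's PIVOT clause is an alternative, but it also requires listing the values explicitly.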

Fatal NI connect errors

Posted: 06 Aug 2013 01:43 PM PDT

I'm wondering if there are any auditing processes I can enable to see what is causing these errors: the SQL statements being run, or any other metadata that I could use to track down the source of the errors.

I am aware of DBA_AUDIT_TRAIL (which uses SYS.AUD$), but it isn't giving me what I need. Fine Grained Audit Trail may be what I need? Not sure....

Here are my auditing settings for your review, anything I can change here to give me more to work with?

SQL> show parameter audit

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------------
audit_file_dest                      string      /01/daa01/oracle/db/admin/DAA/adump
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string
audit_trail                          string      DB

confused over encoding/locale in postgresql

Posted: 06 Aug 2013 01:18 PM PDT

I am a little confused over the difference between encoding and locale as they pertain to a PostgreSQL database. When initializing the database, you can specify both an encoding and a locale.

Correct me if I am wrong, but I assume the encoding defines what the database is actually stored as on the computer. So if I specify UTF8, all the characters in the UTF8 character set would be valid characters; if I specified WIN1252, all those characters would be valid characters. What I don't get is where the locale comes into play. If I specify my encoding as UTF8 and then specify my locale as English_United States.1252, what exactly does that mean? I think the WIN1252 character set is a subset of UTF8, so is the locale just specifying the subset of characters to be used from the UTF8 character set? From what I have read, UTF8 can be used with ANY locale, so what is the point of specifying different encodings if UTF8 is so ubiquitous and the locale is really the one specifying the specific character set to be used?

Also, on Linux, the locale can be specified like so: en_us.utf8. So the database encoding is specified in the locale? If the encoding is specified in the locale, why even have a -encoding flag when initializing the database?
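As a sketch of how the two settings are distinct (the database name here is illustrative): the encoding fixes how characters are stored as bytes, while the locale (LC_COLLATE/LC_CTYPE) fixes sort order and character classification, and must merely be compatible with the chosen encoding:

```sql
-- Sketch: encoding and locale are specified independently.
-- TEMPLATE template0 is required when the locale differs from the
-- cluster default chosen at initdb time.
CREATE DATABASE mydb
    ENCODING   'UTF8'
    LC_COLLATE 'en_US.UTF-8'
    LC_CTYPE   'en_US.UTF-8'
    TEMPLATE   template0;
```

This is why the flag exists separately: en_US could equally be paired with LATIN1 or UTF8 storage, and `C`/`POSIX` locales work with any encoding.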

Inserting into mysql table with archive engine "duplicate key" error

Posted: 06 Aug 2013 02:20 PM PDT

I am trying to archive some files using mysql archive engine. I am using this query to insert file contents:

insert into test_table (id,arch) values (123,'FILE_CONTENT')  

After inserting 2 records, I get a "duplicate key" error from MySQL for a key that doesn't exist in the table. I checked about 5 times, but the record is not there. I ran SELECT COUNT(*) for the duplicate id and the result is 0.

I checked the same code with an innodb engine and it works fine. Can anyone tell me what the problem is with the archive engine?

CREATE TABLE `test_table` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `arch` mediumtext,
  PRIMARY KEY (`id`)
) ENGINE=ARCHIVE AUTO_INCREMENT=100175977 DEFAULT CHARSET=utf8mb4;
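One quirk that may explain this: in the ARCHIVE engine, the only index allowed is the one on the AUTO_INCREMENT column, and (per the MySQL manual, as I understand it) inserting a key value lower than the current column maximum fails with a duplicate-key error even though no such row exists. A hypothetical reproduction against the table above, whose counter already stands at 100175977:

```sql
-- Sketch: ARCHIVE requires the AUTO_INCREMENT key to arrive in
-- increasing order; InnoDB has no such restriction, which would
-- explain why the same code works there.
INSERT INTO test_table (id, arch) VALUES (100175978, 'a');  -- above current max: ok
INSERT INTO test_table (id, arch) VALUES (123, 'b');        -- below current max: duplicate-key error
```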

How do I generate a SQL script from my diagram in MySQL Workbench?

Posted: 06 Aug 2013 12:36 PM PDT

I've created an EER diagram with tables, foreign keys, etc in MySQL Workbench and now I want to generate a ".sql" script that will create this database. How do I do that?

In MySQL if column X has unique values what's the difference between UNIQUE index and B-Tree index

Posted: 06 Aug 2013 02:40 PM PDT

Let's assume that I have table data:

CREATE TABLE `test`.`data` (
  `row_id` INT NOT NULL,
  `data_id` INT NOT NULL,
  PRIMARY KEY (`row_id`)
);

Now, let's assume that data_id has unique values. Is there any difference (space, performance, memory) between having a B-Tree index on data_id and a UNIQUE index on it? (Besides the case where InnoDB is used as the storage engine and the UNIQUE index can be used as the clustering key, making row_id unnecessary.)
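For concreteness, the two alternatives being compared would be created like this (both are B-tree structures in InnoDB and MyISAM; only one of the two would exist at a time):

```sql
-- Plain secondary index: lookups only.
ALTER TABLE `test`.`data` ADD INDEX idx_data_id (`data_id`);

-- Unique secondary index: same structure, but every INSERT/UPDATE
-- additionally checks for an existing duplicate before writing.
ALTER TABLE `test`.`data` ADD UNIQUE INDEX ux_data_id (`data_id`);
```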

An error was encountered in the transport layer - SSAS 2012 from Excel 2013

Posted: 06 Aug 2013 12:25 PM PDT

When connecting to an SSAS 2012 tabular model from Excel 2013 (Data... From Other Sources... From Analysis Services... fill in Server Name... click Next), I get the error:

An error was encountered in the transport layer.

Excel, SQL Server, and SSAS are all running on the same server (Windows Server 2012).

Because the server needs to be accessed from the Internet, there is a DNS entry for the server, x.mydomain.com, but the server's actual Windows computer name is something else, servername.

In the Excel connection dialog (running on the server's desktop), if I use servername it works without error. If I use x.mydomain.com I get the above error.

I have added a hosts file entry so that x.mydomain.com points to "::1". From the server, pinging x.mydomain.com or servername both return the same "::1" for the address. This did not help the problem.

DB2 Insert and Deadlocks

Posted: 06 Aug 2013 06:11 PM PDT

My application has 4 different threads doing inserts into the same DB2 table simultaneously.

Could this cause a deadlock / lock-wait situation?

Theoretically I don't see a problem here, as one insert statement is not going to lock a row required by another insert statement; all of them are independent inserts.

Also, I commit after each insert, so the transaction log shouldn't fill up.

I'm looking for a high-performance model while avoiding any potential lock situation. Please share your thoughts.

hi all i need to sum the M,X,L and C values, how can i do it? help please [on hold]

Posted: 06 Aug 2013 11:31 AM PDT

declare @CodSafra int = 1213          -- Current season (safra)
declare @CodVar int = 4
declare @KgsNPK int = 200             -- Kilos of tobacco produced per sack of NPK
declare @PesHectar int = 18000        -- Number of tobacco plants per hectare
declare @NrSacosHectar int = 6        -- Number of NPK sacks per hectare (6 sacks per hectare)
declare @KgsHectar int = @KgsNPK * @NrSacosHectar  -- Kilos produced per hectare

SELECT DISTINCT
     rp.CodZona
    ,rp.NomeZona
    ,rp.CodInstr
    ,rp.NomeInstr
    ,rp.CodProd
    ,rp.NomeProd
    ,rp.CodVar
    ,rp.NomeVar
    ,0 NrSacosNPKDist
    ,0 NrSacosCANDist
    ,0 NrSacosUREIADist
    ,0 NrSacosNPKDev
    ,0 NrSacosCANDev
    ,0 NrSacosUREIADev
    ,0 NrPlantas
    ,0 KgsSold
    ,0 KgHa
    ,0 MediumPrice
    ,0 OSCredit
    ,0 BalesSold
    ,0 Sell
    ,sum(fc.PesoFardo) X
    ,sum(fc.PesoFardo) M
    ,sum(fc.PesoFardo) C
    ,sum(fc.PesoFardo) L
FROM (select distinct z.CodZona, z.NomeZona, i.CodInstr, i.NomeInstr, p.CodProd, p.NomeProd,
             e.CodVar, e.NrPlantas, vf.NomeVar, pc.CodSafra
      from FProdutor p, FProdClube pc, FZona z, FZonaClube zc, FZonaInstrutor zi,
           FInstrutor i, FEstimativa e, FVariedadeFumo vf
      where p.CodProd = pc.CodProd and p.CodProd = e.CodProd and pc.CodClube = zc.CodClube
        and zc.CodZona = z.CodZona and z.CodZona = zi.CodZona and e.CodVar = vf.CodVar and e.CodVar = @CodVar
        and zi.CodInstr = i.CodInstr and pc.CodSafra = @CodSafra and zc.CodSafra = @CodSafra
        and p.Status = 'A' and p.TipoRegProd = 'P' and e.CodSafra = @CodSafra and zi.CodSafra = @CodSafra) as rp
    join FFardosCompra fc on rp.CodProd = fc.CodProd and fc.CodSafra = @CodSafra
    join FClasseCompra cc on fc.CodClasseCompra = cc.CodClasseCompra
        and cc.NomeClasseCompra in ('FX1FO','FX2FO','FX3FO','FX4FO','FX5FR','FX1F','FX2F','FX3F','FXJ','FXK','FXV','FND','FNDM')
GROUP BY
     rp.CodZona
    ,rp.NomeZona
    ,rp.CodInstr
    ,rp.NomeInstr
    ,rp.CodProd
    ,rp.NomeProd
    ,rp.CodVar
    ,rp.NomeVar

DB2 Communication link failure issue

Posted: 06 Aug 2013 12:15 PM PDT

I would like to get your valuable suggestions for the below problem.

My Perl application connects to a DB2 database and keeps inserting data into a table. The application successfully inserts data for a while, and after some time I start seeing Error 1 below.

Error 1:

[IBM][CLI Driver] CLI0108E  Communication link failure. SQLSTATE=40003  

Error 2:

[IBM][CLI Driver] CLI0106E  Connection is closed. SQLSTATE=08003   

Error 1 occurs only once; after that, Error 2 occurs repeatedly until I restart the application and establish a new connection to the database. It's been confirmed that the database is healthy and able to accept connections, and that the network connections are healthy as well.

Background on Error 1:

Error "CLI0108E" Suggests that "The connection between the driver and the data source failed during execution of this function."

And "SQLSTATE=40003" suggests that "The statement completion is unknown."

Is it possible that something going wrong in an insert statement actually closes the connection? Or leaves the connection stateless?

This actually breaks my Perl application. What are the best options to handle this situation?

When is a Dynamic Port "dynamic"?

Posted: 06 Aug 2013 12:01 PM PDT

I was having a discussion today about Dynamic Ports with one of my co-workers and could use some help clarifying how they work.

First question: if the IPALL TCP Dynamic Ports setting is a specific number (say 1971), does that signify that you have a static port of 1971, or a dynamic port that is currently 1971 and may change at some point in the future?


Second question: this is the one I'm most curious about. We have an instance that has had the same port (the value in the IPALL TCP Dynamic Ports setting) for several years, through dozens of instance restarts. What actually causes the dynamic port to change after an instance restart?

fighting filter (nested loops) execution plan from Oracle

Posted: 06 Aug 2013 02:30 PM PDT

My query is:

SELECT category, count(*)
FROM large_data_table t
WHERE (t.val IN (SELECT val FROM SMALL_TABLE WHERE id = 1))
   OR (t.val IN (SELECT val FROM SMALL_TABLE WHERE id = 2))
GROUP BY GROUPING SETS ((category), ());

The resulting execution plan is like this:

SELECT STATEMENT
  SORT (GROUP BY ROLLUP)
    FILTER
      large_data_table  TABLE ACCESS (FULL)
      SMALL_TABLE       TABLE ACCESS (BY INDEX ROWID)
      SMALL_TABLE       INDEX (RANGE SCAN)
      SMALL_TABLE       TABLE ACCESS (BY INDEX ROWID)
      SMALL_TABLE       INDEX (RANGE SCAN)

Here FILTER is, essentially, a NESTED LOOPS join, which never finishes, as NESTED LOOPS often don't on large tables (the two subqueries on the small table return, say, 100 rows each). I am looking for a way to tell Oracle NOT to use the stupid FILTER here. I've tried sticking a USE_HASH hint everywhere, including some places I'd rather not talk about now... The GROUPING SETS seem to play a role in this, but getting rid of them is very difficult here. The one reasonable thing that works is to replace the OR with a UNION, but it doesn't cover all cases either.

Oracle Database 10g JRE Error

Posted: 06 Aug 2013 12:03 PM PDT

I am trying to install Oracle Database 10.2 on OEL5U6 (a Linux system). I followed all the instructions from HERE. When I execute the installer, it returns:

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-08-06_12-57-33AM. Please wait ...The Java RunTime Environment was not found at /tmp/OraInstall2013-08-06_12-57-33AM/jre/bin/java. Hence, the Oracle Universal Installer cannot be run. Please visit http://www.javasoft.com and install JRE version 1.3.1 or higher and try again. : No such file or directory.

But Java is already installed on my system.

How to migrate SQL Server to MySQL

Posted: 06 Aug 2013 01:49 PM PDT

I'm trying to migrate a SQL Server db to MySQL, and I'm having a tough time. I've tried:

  • MySQL Workbench -- the migration tool fails because my login only shows views in SQL Server, not the tables themselves, so I can't get past the "Schemata Selection" screen.

  • sqlcmd and bcp -- both fail because they can't export CSV properly: they don't quote strings with commas in them. Plus, they don't create table schemas in MySQL.

  • SQLyog -- just fails. It creates a MySQL table from an MS SQL view OK, but doesn't import the data. It also takes a really long time on small tables, and I have to import millions of rows.

Anyone have suggestions?

How can I migrate a sybase database to mysql?

Posted: 06 Aug 2013 01:38 PM PDT

I've been tasked with migrating a fairly complex legacy Sybase database into a MySQL database, and I have no idea where to start. I can't use any off-the-shelf software, so I'm going to have to write some sort of script that takes all the info out of the Sybase database and puts it into MySQL. Does anyone have any resources or thoughts on how to do this quickly? I have very little experience with Sybase or MySQL.

SQL Server to MySQL migration - how can I remove UCS-2 surrogate pairs in SQL Server?

Posted: 06 Aug 2013 01:54 PM PDT

I have been tasked with migrating a SQL Server 2005 database to MySQL 5.6 (these are both database servers running locally) and would really appreciate some help.

-------Edit - Answered-------

This issue has now been resolved. I used Remus Rusanu's suggestion for finding the rows with these surrogate pair characters using CHARINDEX and have decided to use SUBSTRING to exclude the troublesome characters like so:

UPDATE test
SET a = SUBSTRING(a, 1, (CHARINDEX(0x83dc, CAST(a AS VARBINARY(8000)))+1)/2 - 1)     -- string before the unwanted character
      + SUBSTRING(a, (CHARINDEX(0x83dc, CAST(a AS VARBINARY(8000)))+1)/2 + 1, LEN(a)) -- string after the unwanted character
WHERE CHARINDEX(0x83dc, CAST(a AS VARBINARY(8000))) % 2 = 1  -- only odd-numbered charindexes (to signify a match at the beginning of a byte-pair character)

-----------------------------

  • SQL Server source database has latin1 collation (so has ISO 8859-1 character set right?) but doesn't have any char/varchar fields (any string field is nvarchar/nchar) so all this data should be using the UCS-2 character set.

  • MySQL target database wants the character set UTF-8.

I decided to use the database migration toolkit in the latest version of the MySQL Workbench. At first it worked fine and migrated everything as expected, but I have been totally tripped up upon encountering UCS-2 surrogate pair characters in the SQL Server database.

The migration toolkit copytable program did not provide a very useful error message:

Error during charset conversion of wstring: No error.

It also did not provide any field/row information on the problem-causing data, and it would fail within chunks of 100 rows. So after searching through the 100 rows after the last successful insert, I found that the issue seemed to be caused by two UCS-2 characters in one of the nvarchar fields. They are listed as surrogate pairs in the UCS-2 character set. They were specifically the characters DBC0 and DC83 (I got this by looking at the binary data for the field and comparing byte pairs (little-endian) with data that was being migrated successfully).

When this surrogate pair was removed from the SQL Server database the row was migrated successfully to MySQL.

Here is the problem:

I have tried to search for these characters in a test SQL Server table (this chartest table is just various test strings in an nvarchar field) to prepare a replacement script, and I keep getting strange results... I must be doing something incorrectly.

Searching for

SELECT * FROM chartest WHERE text LIKE NCHAR(0xdc83)  

Will return any surrogate pair character (whether or not it uses DC83), but obviously only if it is the only character (or part of the pair) in that field. This isn't a big deal, since I would like to remove any instance of these anyway (I don't like to remove data like this, but I think we can afford it).

Searching for

SELECT * FROM chartest WHERE text LIKE '%' + NCHAR(0xdc83) + '%'  

Will return every row! Regardless of whether it even has a unicode character present in the field let alone the DC83 character. Is there a better way to find and replace these characters? Or something else I should try?
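One likely explanation for the everything-matches behavior is that the default linguistic collation gives a lone surrogate no comparison weight, so `'%' + NCHAR(0xdc83) + '%'` compares equal to `'%%'`. Forcing a binary collation makes the comparison byte-exact; a hedged sketch:

```sql
-- Sketch: Latin1_General_BIN2 compares by code point rather than
-- linguistic rules, so the lone surrogate must actually be present.
SELECT *
FROM chartest
WHERE text COLLATE Latin1_General_BIN2 LIKE N'%' + NCHAR(0xdc83) + N'%';
```

The same COLLATE clause should work inside CHARINDEX or REPLACE when building the replacement script.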

I have also tried setting the target database, table, and field character set to UCS-2, but it seems as though it does not make a difference.

I should also mention that this migration is using live data (~50GB database!) while one of the sites that feeds it is taken offline so any solutions to this need to have a quick running time...

I would appreciate any suggestions very much! Please let me know if there is any information I have left out.

Efficiently Filter Large Set With Disjunctions

Posted: 06 Aug 2013 08:21 PM PDT

Let's say I have a single table

CREATE TABLE Ticket (
    TicketId int NOT NULL,
    InsertDateTime datetime NOT NULL,
    SiteId int NOT NULL,
    StatusId tinyint NOT NULL,
    AssignedId int NULL,
    ReportedById int NOT NULL,
    CategoryId int NULL
);

In this example TicketId is the Primary Key.

I want users to be able to create "partially ad-hoc" queries against this table. I say partially because a few parts of the query will always be fixed:

  1. The query will always perform a range filter on an InsertDateTime
  2. The query will always ORDER BY InsertDateTime DESC
  3. The query will page results

The user can optionally filter on any of the other columns. They can filter on none, one, or many. And for each column the user may select from a set of values which will be applied as a disjunction. For example:

SELECT
    TicketId
FROM (
    SELECT
        TicketId,
        ROW_NUMBER() OVER(ORDER BY InsertDateTime DESC) as RowNum
    FROM Ticket
    WHERE InsertDateTime >= '2013-01-01' AND InsertDateTime < '2013-02-01'
      AND StatusId IN (1,2,3)
      AND (CategoryId IN (10,11) OR CategoryId IS NULL)
) _
WHERE RowNum BETWEEN 1 AND 100;

Now assume the table has 100,000,000 rows.

The best I can come up with is a covering index that includes each of the "optional" columns:

CREATE NONCLUSTERED INDEX IX_Ticket_Covering ON Ticket (
    InsertDateTime DESC
) INCLUDE (
    SiteId, StatusId, AssignedId, ReportedById, CategoryId
);

This gives me a query plan as follows:

  • SELECT
    • Filter
      • Top
        • Sequence Project (Compute Scalar)
          • Segment
            • Index Seek

It seems pretty good. About 80%-90% of the cost comes from the Index Seek operation, which is ideal.

Are there better strategies for implementing this kind of searching?

I don't necessarily want to offload the optional filtering to the client, because in some cases the result set from the "fixed" part could be hundreds or thousands of rows. The client would then also be responsible for sorting and paging, which might be too much work for the client.

search in mysql with like statement

Posted: 06 Aug 2013 12:17 PM PDT

Imagine I have this table named test_table in the test_db in MySQL:

field1   field2
google   http://google.com
apple    http://apple.com
python   http://python.org

Now if I search the database using this:

SELECT * FROM test_table WHERE field1 = "google";   

It will return: http://google.com

Now I want to search the database with field1 and return the field2 value where field1 is like the search keyword. For example, if I give it googles it should return: http://google.com

How can I do this?
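Note that for googles to match google, the LIKE pattern has to be reversed: it is the stored field1 value that must appear inside the keyword, not the other way around. A sketch:

```sql
-- Sketch: matches rows where field1 is a substring of the search term,
-- so the keyword 'googles' matches the row whose field1 is 'google'.
SELECT field2
FROM test_table
WHERE 'googles' LIKE CONCAT('%', field1, '%');
```

The usual `field1 LIKE '%keyword%'` form instead finds rows whose field1 *contains* the keyword, which fails here because google does not contain googles.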

GROUP_CONCAT with ORDER BY , but results are not ordered

Posted: 06 Aug 2013 01:05 PM PDT

The following query works fine

SELECT *,
    (SELECT GROUP_CONCAT(url SEPARATOR '$$')
     FROM project_photos
     WHERE project_id = projects.id
     ORDER BY priority) AS images
FROM projects
WHERE catID = 2
LIMIT 0,5

but the images column is not ordered by priority. I am unable to understand why this is happening.
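For reference, MySQL's GROUP_CONCAT takes its own ORDER BY clause inside the function call; an ORDER BY elsewhere in the subquery does not control the concatenation order. Rewritten accordingly:

```sql
-- The ORDER BY must appear inside GROUP_CONCAT, before SEPARATOR.
SELECT *,
    (SELECT GROUP_CONCAT(url ORDER BY priority SEPARATOR '$$')
     FROM project_photos
     WHERE project_id = projects.id) AS images
FROM projects
WHERE catID = 2
LIMIT 0,5;
```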

Structure for table project

CREATE TABLE `projects` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `catID` int(11) NOT NULL,
  `title` varchar(255) NOT NULL,
  `description` varchar(400) NOT NULL,
  `url` varchar(255) DEFAULT NULL,
  `tags` varchar(255) DEFAULT NULL,
  `featured` varchar(3) NOT NULL DEFAULT 'No',
  `featured_url` varchar(255) DEFAULT NULL,
  `order` int(11) DEFAULT NULL,
  `created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `catID` (`catID`),
  CONSTRAINT `FK_catID` FOREIGN KEY (`catID`) REFERENCES `category` (`catID`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=48 DEFAULT CHARSET=latin1;

Structure for table project_photos

CREATE TABLE `project_photos` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `url` varchar(250) DEFAULT NULL,
  `project_id` int(11) DEFAULT NULL,
  `priority` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=347 DEFAULT CHARSET=utf8;

SQL Server Decimal(9, 0) vs INT

Posted: 06 Aug 2013 08:52 PM PDT

One of our customers uses the datatype DECIMAL(18,0) for some columns in his SQL Server 2008 R2 database. Because the columns grow quite slowly, he recently proposed changing the datatype to DECIMAL(5,0) to regain some storage.

According to the MSDN library, the storage space of the DECIMAL(5,0) datatype is, just like the DECIMAL(9,0) datatype, 5 bytes. INT is 1 byte smaller, but can store everything in the range of -2^31 to 2^31 instead of the -99,999 to 99,999 which DECIMAL(5,0) can store. Even the largest DECIMAL which fits into 5 bytes (DECIMAL(9,0)) can store only integers in the range -999,999,999 to 999,999,999 (which is less than half of the range INT offers in 4 bytes).
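The storage figures can be checked directly with DATALENGTH; a quick sketch:

```sql
-- DATALENGTH reports the bytes a value occupies.
SELECT DATALENGTH(CAST(12345 AS DECIMAL(5,0))) AS decimal_5_0_bytes,  -- 5
       DATALENGTH(CAST(12345 AS INT))          AS int_bytes;          -- 4
```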

I can think of two "benefits" of using DECIMAL over INT:

  • The ability to add scale afterwards, without using more storage space
  • The ability to scale the precision up to 38 digits, without altering data type

but these aren't real benefits in my opinion:

  • Adding scale to integers does only make sense in very few cases (in most cases where scale does make a difference, it could also be added beforehand)
  • SQL Server sees every precision / scale combination as a different data type, so the datatype isn't left alone when increasing the precision or scale.

This makes me wonder: what is the added benefit of a DECIMAL(5,0) datatype for integers?

Converting a large SQL Server database to MySQL. Database corruption? Can't access data after transferring

Posted: 06 Aug 2013 01:44 PM PDT

I have gigabytes of tabular data stored in a SQL Server database. I've been trying to migrate it to MySQL using the Migration Toolkit from Oracle. I didn't have any mapping errors and fixed all the syntax mismatches, so now the rows are being transferred. But when I go to phpMyAdmin and try to browse, it keeps saying "MySQL returned an empty result set (i.e. zero rows). (Query took 17.0116 sec)", although it says there are 44,546,089 rows in that table, and I can see the hard drive space filling up too. I just can't access any of the data.

I researched a bit and increased the buffer size from 16M to 160M in my.cnf, but I can't fix this problem.


Some things I noticed (marked red on the status page):

Innodb_buffer_pool_pages_dirty = 376
Innodb_buffer_pool_reads = 92 k
Handler_read_rnd = 41
Handler_read_rnd_next = 59 k


Any advice? This database will be read-only. The server specs: dedicated, 8 GB RAM, Debian 6, 500 GB HDD, ~2.8 GHz quad-core Intel.

Practices to keep SQL Server databases easy to port to MySQL?

Posted: 06 Aug 2013 01:46 PM PDT

We're developing an ASP.NET Web application that is designed for MS SQL Server 2008. I'd like to minimize my future DB admin's headaches should there be a need to migrate over to MySQL.

With that goal, are there any specific design patterns or features we should avoid in MS SQL? Any issues that trip up the MS SQL -> MySQL migration tools?

How do you migrate a database from SQL Server 2008 to MySQL?

Posted: 06 Aug 2013 01:41 PM PDT

I'm just getting into MySQL, so forgive me if this is a basic question. For my application I need to migrate my database from SQL Server 2008 R2 to MySQL.

Is there any mechanism or technique or suggestion or tips or magic tricks for migration?

SQL Server Cannot drop database <dbname> because it is currently in use... but no sessions displayed

Posted: 06 Aug 2013 12:08 PM PDT

When I try to drop a database I get the error "Cannot drop database "dbname" because it is currently in use". However, when I run sp_who2, there are definitely no sessions connected to this database. I've also set the database to SINGLE_USER mode WITH ROLLBACK IMMEDIATE.

Why is this happening?
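For what it's worth, a couple of hedged checks (dbname stands in for the real database): sp_who2 can miss some connections, while the dbid column of the compatibility view sys.sysprocesses shows every session whose current database is the one being dropped; and issuing the drop in the same batch as the single-user switch leaves no window for another connection to grab the database first:

```sql
-- Sketch: list any session still sitting in the database.
SELECT spid, loginame, program_name
FROM sys.sysprocesses
WHERE dbid = DB_ID('dbname');

-- Then flip to single-user and drop in one batch.
ALTER DATABASE [dbname] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [dbname];
```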

How do I list all databases and tables using psql?

Posted: 06 Aug 2013 12:14 PM PDT

I am trying to learn PostgreSQL administration and have started learning how to use the psql command line tool.

When I log in with psql --username=postgres, how do I list all databases and tables?

I have tried \d, d and dS+ but nothing is listed. I have created two databases and a few tables with pgAdmin III, so I know they should be listed.
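For reference, psql meta-commands all start with a backslash, which may be what dropped out of the commands above; the usual ones are:

```
\l          list all databases
\c dbname   connect to a database
\dt         list tables in the current database
\d tbl      describe a table's columns and indexes
```

Note that \dt lists tables of the database you are currently connected to, so you need \c first to see the tables created with pgAdmin III.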
