Tuesday, October 8, 2013

[how to] Advice on SQL fundamentals online training [on hold]

Advice on SQL fundamentals online training [on hold]

Posted: 08 Oct 2013 07:17 PM PDT

Sorry for the very noob question here, but can anyone tell me the best crash course in SQL fundamentals I could study for table maintenance (it's an Oracle shop, by the way) and minor config changes? I have the chance at a promotion and I need these skills (I have all the other skills required, but SQL is where I fall over, sadly). I will have a month to learn.

Cheers

Import CSV file into MySQL with Multiple Delimiters/Field Separators

Posted: 08 Oct 2013 05:59 PM PDT

I'm trying to import a large CSV file into MySQL. Unfortunately, the data within the file is separated both by spaces and tabs.

As a result, whenever I load the data into my table, I end up with countless empty cells (because MySQL only recognizes one field separator). Modifying the data before importing it is not an option.

Here is an example of the data:

# 1574    1 1 1  $ 1587    6 6 2  $115 1878    8 9 23  

(Where the second and third value of every row are separated by a tab)

Any ideas?
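One possible workaround (a sketch, assuming MySQL's LOAD DATA INFILE and a hypothetical target table t with columns c1 through c4): read each whole line into a user variable by choosing a field terminator that never occurs in the data, then normalize tabs to spaces and split in the SET clause.

LOAD DATA INFILE '/tmp/data.csv'
INTO TABLE t
FIELDS TERMINATED BY '\r'      -- a character that never occurs in the data
LINES TERMINATED BY '\n'
(@line)
SET c1 = SUBSTRING_INDEX(REPLACE(@line, '\t', ' '), ' ', 1),
    c2 = SUBSTRING_INDEX(SUBSTRING_INDEX(REPLACE(@line, '\t', ' '), ' ', 2), ' ', -1),
    c3 = SUBSTRING_INDEX(SUBSTRING_INDEX(REPLACE(@line, '\t', ' '), ' ', 3), ' ', -1),
    c4 = SUBSTRING_INDEX(REPLACE(@line, '\t', ' '), ' ', -1);

Note that runs of consecutive spaces (as in the sample) would produce empty tokens, so they may need collapsing first, e.g. with repeated REPLACE(..., '  ', ' ') passes.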

Bidirectional synchronization between a local SQL Server 2005 database and a SQL Azure database using SQL Data Sync

Posted: 08 Oct 2013 08:20 PM PDT

I need to synchronize a SQL Server 2005 database with a SQL Azure database. In other posts I've read that I can do this using SQL Data Sync, but I don't know whether SQL Data Sync can perform the synchronization I need without exceeding its limitations. This is the exact process I need:

  1. Synchronize one table from Azure to SQL Server.
  2. Execute some stored procedures on the SQL Server 2005 instance.
  3. Synchronize various tables from SQL Server to Azure, sequentially.

Thanks!

How to move a CSV file into an Oracle database using SQL Developer?

Posted: 08 Oct 2013 09:02 PM PDT

How can I import a CSV file into an Oracle database using SQL Developer (rather than SQL*Loader), without creating the table first?

Creating PostGIS extension in single-user mode

Posted: 08 Oct 2013 03:55 PM PDT

I am trying to create the PostGIS extension for my PostgreSQL database when running in single-user mode, using the following command:

echo "CREATE EXTENSION postgis;" | su postgres sh -c "/usr/lib/postgresql/9.2/bin/postgres --single -D /var/lib/postgresql/9.2/main --config_file=/etc/postgresql/9.2/main/postgresql.conf dbname"

which returns

UTC ERROR: type reclassarg[] does not exist

I am doing this while deploying a Docker container, so I cannot use psql, since the database is not running at that moment. After the Docker deployment is finished and the database is started, running psql -d dbname -c "CREATE EXTENSION postgis;" works like a charm, which means that PostGIS is installed.

What does this type reclassarg[] does not exist message mean and how can I create the PostGIS extension from the single-user mode?

Database design question

Posted: 08 Oct 2013 06:18 PM PDT

Hello and thank you for reading my post. I work for a company that houses many gigabytes of data in SQL Server. The company is in the process of taking a huge step forward to re-architect and re-organize this data.

The following facts can be summarized about the environment.

  1. There are approximately 30 SQL Server instances that contain a total of around 250 databases.
  2. The system as a whole consists of hundreds of one-off programs that created individual databases, UIs, reports, etc.
  3. As time went on, each new database started to need data from the other databases, and linked servers were created, making them all intertwined and creating a data path that is difficult to decode and maintain.
  4. There are about 10 core types of data mixed into this, surrounding 10 major applications.
  5. The published data is relatively static, needing monthly pushes to the release version of the data.


The following approach is currently being researched:

  1. Create a new database that will house all data in normalized structures.
  2. Create SSIS packages to move the data up to the new database.
  3. Create another set of SSIS packages to maintain the data by finding the best path for the data and doing incremental updates to the new database.
  4. Over time, rewrite applications using the new data model and retire the associated SSIS packages.
  5. The proposed structure of the new database (per the data architect) would create several tables that have upwards of 10 billion rows each.
  6. This proposed model would require many joins between these huge tables and others; one query could join 15-20 tables. These will all be delivered to a website front end with RESTful Web API services, where the business requires a 1-second response.

Questions

Will SQL Server possibly be able to live up to this, performance-wise? Those huge tables will have a small record size, consisting of about 8 integer fields and 2 text fields.

Should the company be looking at more of a big-data solution like Hadoop? While that architecture looks more appropriate, there is no internal knowledge of anything except SQL Server, which is version 2012.

Thank you for any insight you are able to provide.

Which string data types in MySQL support UTF-8?

Posted: 08 Oct 2013 03:35 PM PDT

I have a table that stores strings from a user, which are displayed in a web form. But when the user inputs text in, e.g., a language with a Cyrillic alphabet (which I assume is UTF-8), garbage is displayed back. I ran SHOW CREATE TABLE and saw that the table is defined as latin1 and the column that stores the string is defined as TEXT. I am not clear on what type of data TEXT stores. Is it only for ASCII? Should I be using a different data type? Which would be the most appropriate?
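For reference, TEXT is not ASCII-only; it stores text in the column's (or table's) character set, so a latin1 column simply cannot hold Cyrillic. A minimal sketch of a fix, assuming a hypothetical table t (back up first, since this rewrites the stored data):

ALTER TABLE t CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
-- the connection must talk UTF-8 as well:
SET NAMES utf8;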

Two possibilities of primary key

Posted: 08 Oct 2013 05:42 PM PDT

I have a table that holds the users' identification. This ID can be of two types, say int and varchar, but one table can't have two primary keys, and a composite one wouldn't solve my problem.

+--------------------+
| all_info           |
+--------------------+
| PK id1 varchar(50) |
| PK id2 int         |
| ...                |
+--------------------+

So I created two other tables with primary keys of the types that I needed, added some information for the specific types of users, and made relations with the only table that holds the rest of the users' information (that both types share):

+--------------------+    +--------------------+
| unique_info1       |    | unique_info2       |
+--------------------+    +--------------------+
| PK id varchar(50)  |    | PK id int(10)      |
| ...                |    | ...                |
+--------------------+    +--------------------+

In the table that holds the shared user information I created an artificial primary key, so I don't have to verify which column holds the user's ID every time I need to manipulate it, and also so I don't have to create two columns in every other table where I want a relation with the user ID:

+-------------------------+    +-------------------------+
| similar_info            |    | other_table             |
+-------------------------+    +-------------------------+
| PK id int(10) [AI]      |    | PK char(3)              |
| ...                     |    | ...                     |
| FK uniq1_id varchar(50) |    | ...                     |
| FK uniq2_id int(10)     |    | FK similar_id int(10)   |
+-------------------------+    +-------------------------+

The question is: is that the best approach, or should I change the plan?

For example:

  • Choose other information to be the ID of the users, where all will have the same type, and add all user-specific info to that whole table (which would result in a lot of null fields for each user)?
  • Create two completely different tables for the two types of users (which would result in redundant info)?

Oracle: how to set only some roles as not default for a user (no GUI)

Posted: 08 Oct 2013 12:29 PM PDT

Scenario:

  1. I have instanceA
  2. I have instanceB
  3. I have USER1 in instanceA
  4. I have USER1 in instanceB
  5. USER1 in instanceA has four roles granted to it:
    • ROLE1 default yes admin_option no
    • ROLE2 default no admin_option yes
    • ROLE3 default no admin_option no
    • ROLE4 default yes admin_option yes
  6. USER1 in instanceB has no roles granted
  7. All mentioned roles exist also in instanceB although not granted to USER1
  8. I have already written a script that generates the DDL needed to duplicate the role grants for USER1 in instanceB.

Problem:

  1. I haven't found a way to do the following programmatically, because there is no DEFAULT=NO part in the GRANT ROLE clause.

    • grant ROLE1 to USER1; -- admin option no
    • grant ROLE2 to USER1 DEFAULT NO with admin option;
    • grant ROLE3 to USER1 DEFAULT NO;
    • grant ROLE4 to USER1 with admin option;
  2. Note that I want to replicate in instanceB the same role set USER1 has in instanceA, meaning two of the roles are default and two aren't.

  3. I've looked at setting the default role to "no" for ROLE2 and ROLE3 after granting them, using ALTER USER ... DEFAULT ROLE, but that only works for setting ALL roles to default, or ALL roles to non-default with NONE.

  4. I cannot find a way to set only ROLE2 and ROLE3 as non-default and ROLE1 and ROLE4 as default.
  5. I know that I can set this using the visual console, but I need to automate it, so I need a syntax-based way to do it.
  6. I don't want to make the roles ask for a password. None of the roles in the databases are password-protected roles.
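For what it's worth, Oracle's ALTER USER ... DEFAULT ROLE clause also accepts an explicit role list and an ALL EXCEPT form, which may be the missing syntax (a sketch against the role set above):

grant ROLE1 to USER1;
grant ROLE2 to USER1 with admin option;
grant ROLE3 to USER1;
grant ROLE4 to USER1 with admin option;

-- make only ROLE1 and ROLE4 default:
alter user USER1 default role ROLE1, ROLE4;
-- or, equivalently:
alter user USER1 default role all except ROLE2, ROLE3;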

Curious About SQL Server Registry Entries

Posted: 08 Oct 2013 12:08 PM PDT

I'm working on building some documentation across my systems and planned on simply using PowerShell to script out the legwork of what I wanted to do. While that's all well and good, I ran into an issue with my SQL Server registry entries: some of my SQL servers have all of their registry values, while some don't.

For example, I'm pulling down the SQL Shared Features entry so I know where that directory is on all servers. Some of them return data (E:\yada\yada) while others return nothing. After further investigation, I noticed that several of my SQL servers don't have the expected registration data saved in the registry. This happens with several different registry entries that should be there.

Any reason it's like this?

Dropped index from view still referenced in execution plans

Posted: 08 Oct 2013 05:51 PM PDT

On a SQL Server 2005 Enterprise Edition server, an index has been dropped from a view. When running a SELECT * query that includes this view, no results are shown. In addition, the execution plan references this index on the view (which no longer exists). By adding the EXPAND VIEWS option to the query, the correct results are returned.

We have tried to clear the plan cache with DBCC FREEPROCCACHE, but the problem persists (the fresh cached plan still refers to this non-existent index). We have seen this problem before, and a restart of the SQL Server service corrected it, but I would like to find the actual cause before we attempt that again.

Edit: I have tried sp_refreshview, with no change. The SQL build version is 9.00.3042 (SP2).
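For context, these are the two hints involved, shown against a hypothetical dbo.MyView (a sketch; NOEXPAND is the opposite hint, which forces use of the view's index and should fail once that index is gone):

-- force the optimizer to expand the view to its base tables:
SELECT * FROM dbo.MyView OPTION (EXPAND VIEWS);

-- force use of the view's index instead:
SELECT * FROM dbo.MyView WITH (NOEXPAND);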

Review my simple database tables

Posted: 08 Oct 2013 01:49 PM PDT

Hey, I'm trying to create a simple database table for a small beer review project. It's been a while since I created DBs, so could anyone tell me if I am totally wrong here?

Especially my use of unique identifiers; as I see it, they would always be unique?

Project description: a simple ASP.NET site containing beer data and information on specific key elements like the brewery, etc. People should be able to search by name and read about the beer.

Picture of simple database

UPDATE SET REPLACE() matches but does not change

Posted: 08 Oct 2013 01:05 PM PDT

I've seen another post about this exact thing, but it's 1.5 years old, so I thought I'd make my own... besides, the other post did not help me. What I am trying to do is pretty straightforward.

To be on the safe side, I tried a simple UPDATE statement on a different table, which was successful. So, here's my problem:

mysql> UPDATE bugs_fulltext SET short_desc =
    REPLACE(short_desc, 'I don\'t know why my previous comment means.',
    'I don\'t know what my previous comment means.') WHERE bug_id=452;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

I'm not using wildcards, nor am I using a LIKE clause. I am telling it which record to update. Why isn't my text changed?
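"Rows matched: 1" only means the WHERE clause found the row; "Changed: 0" means the new value equaled the old one, which is exactly what happens when REPLACE() finds no occurrence of the search string. A quick diagnostic (a sketch using the table from the question):

SELECT bug_id,
       LOCATE('I don\'t know why my previous comment means.', short_desc) AS pos
FROM bugs_fulltext
WHERE bug_id = 452;
-- pos = 0 means the search string is not present exactly as written
-- (look for extra spaces, a different apostrophe character, etc.)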

Max Connection Pool capped at 100

Posted: 08 Oct 2013 10:20 AM PDT

I'm running SQL Server 2008 R2 SP1 on a Windows Server 2008 box. I have a .NET script running from Visual Studio 2010 that does the following:

  • Reaches into the database
  • Makes a change
  • Iterates

The total number of times it will iterate is 150; however, it stops at 100 connections and I can't figure out why. I could adjust my script to use a single thread, but I'd prefer to know where I'm missing a max connection setting, as that will be more useful to know for future reference.

Here's where I've checked so far:

  • SQL Connection String in Visual Studio 2010 (it's set to 1000)
  • SSMS Database instance connection properties (it's set to 0 [infinity] user connections)
  • Googled some information on Server 2008, it looks like it can handle more than 100 connections
  • Stepped through my code alongside sp_who2, which gives more information on logical connections, and saw that the number of connections starts at 52 and the script errors out with the "Max Pooled Connections reached" error at 152 logical connections.
  • Changed the connection string to use Data Source=PerfSQL02;Initial Catalog=Masked;Integrated Security=True;Max Pool Size=1000

I'm not sure where else to check; I know I have a lot of moving parts here, but I'm getting the feeling I'm just missing a max pool setting somewhere.

How to script all permissions on a schema

Posted: 08 Oct 2013 09:14 AM PDT

SQL Server Management Studio allows you to create scripts for all DB objects; however, so far I couldn't find a way to correctly script a schema or user. The permissions of a user on a schema are not included in the script that is created. Did I do something wrong, or is MSFT here a bit sloppy?
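There may be a cleaner way, but one workaround is to generate the GRANT statements from the catalog views (a sketch; class = 3 filters sys.database_permissions to schema-scoped permissions):

SELECT p.state_desc + ' ' + p.permission_name
       + ' ON SCHEMA::' + QUOTENAME(s.name)
       + ' TO ' + QUOTENAME(dp.name) + ';' AS grant_stmt
FROM sys.database_permissions AS p
JOIN sys.schemas AS s
  ON p.class = 3 AND p.major_id = s.schema_id
JOIN sys.database_principals AS dp
  ON p.grantee_principal_id = dp.principal_id;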

Data Model: Parent, Child, Grandchild with Child Being Optional

Posted: 08 Oct 2013 08:42 AM PDT

I posted the question in another forum and I was suggested to re-post it here which may be more appropriate. Thank you in advance.

My organization structure:
   Department -- has many sections ( and employees)
      Section -- has many employees
         Employee -- most of the employees belong to Section directly.

However, there are two departments which have no intermediate Sections; employees of those departments report to the Department Director directly.

I do not know what can be done to better reflect the business structure using an Entity Relationship model.

I found a similar posting, which is still different from what I need:
  http://stackoverflow.com/questions/2530215/model-a-zero-or-one-to-many-relationship?rq=1
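One common way to model an optional middle level is to keep a mandatory department on the employee and make the section nullable (a minimal sketch with hypothetical names, in generic SQL):

CREATE TABLE Department (
    DeptId    INT PRIMARY KEY,
    DeptName  VARCHAR(100) NOT NULL
);

CREATE TABLE Section (
    SectionId INT PRIMARY KEY,
    DeptId    INT NOT NULL REFERENCES Department (DeptId),
    SectName  VARCHAR(100) NOT NULL
);

CREATE TABLE Employee (
    EmpId     INT PRIMARY KEY,
    DeptId    INT NOT NULL REFERENCES Department (DeptId),
    SectionId INT NULL REFERENCES Section (SectionId)  -- NULL = reports to the director
);

Keeping Employee.DeptId consistent with the section's department would additionally need a composite foreign key over a UNIQUE (DeptId, SectionId) on Section.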

John

Importing Multiple Trace Files To A SQL Server Table

Posted: 08 Oct 2013 10:22 AM PDT

I am in the process of importing 200+ trace files (which are massive), and my current approach is to loop and insert the trace data (see the script below). I looked around to see if there was a faster way to do this, whether through SSIS or C#, and it appears they still call the same function used in the script below.

Anyone have any other methods that they use to import multiple traces? Don't get me wrong, the below code works, but I'm curious if there's something faster that I'm not considering.

DECLARE @start INT = 1, @stop INT = 223, @sql NVARCHAR(MAX), @path VARCHAR(1000)

WHILE @start <= @stop
BEGIN
    SET @path = 'N:\Traces\TraceFile_' + CAST(@start AS VARCHAR(5)) + '.trc'
    SET @sql = 'INSERT INTO SavedTraces
                SELECT *
                FROM ::fn_trace_gettable(''' + @path + ''', default)'
    EXECUTE sp_executesql @sql
    SET @start = @start + 1
    SET @path = ''
    SET @sql = ''
END

Data notes: a 490 MB (~0.5 GB) file, which holds 11,700,000+ rows, takes 13:11 to import.

Can't stop MySQL server on Raspberry pi

Posted: 08 Oct 2013 09:48 AM PDT

I can't stop the MySQL server on my Raspberry Pi. Using MySQL Workbench, I have been able to start and stop the server perfectly fine for months. However, it now refuses to stop! How do I force it to stop?

I am using the Raspbian OS.

I am trying to import a file, but it gives me the exception ORA-20001: Invalid Operation while calling a stored procedure. How can I solve this? [on hold]

Posted: 08 Oct 2013 05:04 PM PDT

I am trying to import a file, but while calling the procedure I get the exception ORA-20001: Invalid Operation.

Stored Procedure:

begin
  -- Call the procedure
  imp_pkg.isinmstimp(cpath => :cpath,
                     cfilename => :cfilename,
                     cusername => :cusername,
                     ndp_list_id => :ndp_list_id);
end;

if (imp_module == "m_importisin")
{
    //Call Procedure
    s_dsnstr = o_Cls_Utility.utl_fnGetDSNStr(SessionCheck.s_sessiondpid);
    OracleConnect o_Cls_OracleConnect = new OracleConnect(s_dsnstr);
    o_Cls_OracleConnect.CallStoredProcedure_PreInit();
    o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cpath", parameter2);
    o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cfilename", parameter3);
    o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cusername", parameter4);
    o_Cls_OracleConnect.CallStoredProcedure_Double_AddParameters("ndp_list_id", 1);
    int recordsInserted = o_Cls_OracleConnect.CallStoredProcedure("IMP_PKG.ISINMSTIMP");
    o_Cls_OracleConnect.Dispose();
}

On the line below I get the exception:

int recordsInserted = o_Cls_OracleConnect.CallStoredProcedure("IMP_PKG.ISINMSTIMP");   

I have another bit of code which executes correctly, but the stored procedure raises an exception:

value too large for column "CDSIL_MIGR"."ADDRESS"."ZIP" (actual: 11, maximum: 10)Row-no1

if (imp_module == "m_impbankmaster")
{
    //Call Procedure
    try
    {
        parameter2 = "d:\\imports";
        parameter3 = "CD100621U.002";
        parameter4 = "611";

        s_dsnstr = o_Cls_Utility.utl_fnGetDSNStr(SessionCheck.s_sessiondpid);
        OracleConnect o_Cls_OracleConnect = new OracleConnect(s_dsnstr);
        o_Cls_OracleConnect.CallStoredProcedure_PreInit();
        o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cpath", parameter2);
        o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cfilename", parameter3);
        o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cusername", parameter4);
        o_Cls_OracleConnect.CallStoredProcedure_Double_AddParameters("ndp_list_id", 1);
        int recordsInserted = o_Cls_OracleConnect.CallStoredProcedure("IMP_PKG.BANKMSTIMP");
        o_Cls_OracleConnect.Dispose();
    }
    catch (Exception ex)
    {
        throw ex;
    }
}

Cross Join with Filter?

Posted: 08 Oct 2013 09:19 AM PDT

I need to make an SP to distribute students to their sections. The procedure takes 2 string parameters, StuID and SecID.

In case I've sent '1,2,3,4,5' as StuID and 'a,b' as SecID: I'm using a splitting function which will return the tables

Tb1 |   Tb2
1   |    a
2   |    b
3   |
4   |
5   |

How can I get the following result?

1 a
2 b
3 a
4 b
5 a
....

I've tried to do it via a CROSS JOIN, but it did not give the result I want:

select US.vItem as UserID, SE.vItem as Section
from split(@pUserID,',') us
cross join split(@pSectionID,',') se
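A cross join pairs every student with every section; a round-robin assignment instead matches row numbers modulo the section count (a sketch, assuming the split function from the question and that ordering by vItem is acceptable):

WITH stu AS (
    SELECT vItem, ROW_NUMBER() OVER (ORDER BY vItem) - 1 AS rn
    FROM split(@pUserID, ',')
), sec AS (
    SELECT vItem, ROW_NUMBER() OVER (ORDER BY vItem) - 1 AS rn,
           COUNT(*) OVER () AS cnt
    FROM split(@pSectionID, ',')
)
SELECT stu.vItem AS UserID, sec.vItem AS Section
FROM stu
JOIN sec ON stu.rn % sec.cnt = sec.rn;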

Newly installed Postgresql 9.2 on same box as 9.1

Posted: 08 Oct 2013 12:46 PM PDT

I have a new project at work that is using PostgreSQL 9.2, but I'm still having to support a project that uses 9.1, so I'm trying to configure my local dev box to have both installed.

I have gotten 9.2 installed and confirmed it runs fine.

However, I can't connect to it.

xxxxxxx@xxxxxxx-desktop-ubuntu:~$ sudo /etc/init.d/postgresql restart
[sudo] password for xxxxxxx:
 * Restarting PostgreSQL 9.1 database server    [ OK ]
 * Restarting PostgreSQL 9.2 database server    [ OK ]
xxxxxxx@xxxxxxx-desktop-ubuntu:~$ su postgres
Password:
postgres@xxxxxxx-desktop-ubuntu:/home/xxxxxxx$ psql -h localhost -p 5432
Password:
psql (9.2.4, server 9.1.9)
WARNING: psql version 9.2, server version 9.1.
         Some psql features might not work.
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

postgres=# \q
postgres@xxxxxxx-desktop-ubuntu:/home/xxxxxxx$ psql -h localhost -p 5433
Password:
psql: FATAL:  password authentication failed for user "postgres"
FATAL:  password authentication failed for user "postgres"
postgres@xxxxxxx-desktop-ubuntu:/home/xxxxxxx$

So, how do I connect to this new instance? I thought that the OS-level postgres user on Ubuntu would allow me to connect, but it doesn't.

Other info:

  • 9.1 is on the default port: 5432
  • 9.2 is on port: 5433
  • Using Ubuntu 12.04

MYSQL LOAD DATA INFILE taking a long time

Posted: 08 Oct 2013 02:04 PM PDT

I have a MySQL DB running on a Raspberry Pi. Under normal circumstances MySQL actually runs slightly quicker than it did on my much more powerful desktop. However, I am trying to insert 60 million records into the database using LOAD DATA INFILE. I tried it all in one go (a 1.2 GB file) and it was still trying to load the data 1.5 days later. So I tried loading in 100,000-row chunks, which was fine for the first 3 million records but soon started to grind to a halt. I then removed the indexes from the table and it seems to run a bit quicker, but I noticed that for each 100,000 rows I insert, the time increases by about 20 seconds.

What is strange is that when I did a database restore from my original desktop machine's database (an identical DB with 60 million rows in the main table), the restore only took about 1 hour.

What is causing the slowdown with LOAD DATA INFILE?

I should point out that I am using InnoDB

EDIT:

I reduced the chunks to 1,000 records and left it running, which did appear to speed things up: after about 1 hour it had inserted 24 million records, though each insert of 1,000 was taking about 30 seconds. I then decided to stop it and restarted the Raspberry Pi. When I ran the import again, lo and behold, the initial inserts were back to less than one second again.

So my question is: do I need to clear a cache or something? MySQL appears to be getting bogged down, rather than LOAD DATA INFILE itself being slow. It is almost as if it is filling up memory and not releasing it, or something much more technical to do with MySQL.

EDIT:

Just to give an idea of how much it is slowing down: it inserts around 27 million rows in the first 40 minutes (the majority of which are inserted in the first 15 minutes). Then I have estimated that it will take around 48+ hours to insert the next 30 million rows! However, if I restart the Raspberry Pi it goes crazy quick again (despite the 27 million rows still being in the table).
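Not an answer to the restart mystery, but the documented InnoDB bulk-load switches are worth ruling out first (a sketch; the file path and table are placeholders, and the checks should only be disabled when the data is known to be clean):

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

LOAD DATA INFILE '/path/to/chunk.csv' INTO TABLE mytable;

COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;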

Backup / Export data from MySQL 5.5 attachments table keeps failing!

Posted: 08 Oct 2013 11:25 AM PDT

Can anyone please help? I have a large table in a MySQL 5.5 database. It is a table which holds a mixture of blobs/binary data and plain data rows with links to file paths. It has just over a million rows.

I am having desperate problems in getting the data out of this table to migrate it to another server.

I have tried all sorts: mysqldump (with and without --quick), dumping the results of a query via the command line, using a MySQL admin tool (Navicat) to open and export the data to file or CSV, and doing a data transfer (line by line) to another DB and/or another server, but all to no avail.

When trying to use the DB admin tool (Navicat), it gets to approx 250k records and then fails with an "Out of memory" error. I am not able to get any error messages from the other processes I have tried, but they seem to fall over at approximately the same number of records.

I have tried playing with the MySQL memory variables (buffer size, log file size, etc) and this does seem to have an effect on where the export stops (currently I have actually made it worse).

Also, max_allowed_packet is set to something ridiculously large, as I am aware this can be a problem too.

I am really shooting in the dark, and I keep going round and round trying the same things and getting no further. Can anyone give me any specific guidance, or recommend any tools which I might be able to use to extract this data?

Thanks in hope and advance!

A little more information below - following some questions and advice:

The size of the table I am trying to dump is difficult to say, but the SQL dump gets to 27 GB when mysqldump dies. It could be approximately 4 times that in total.

I have tried running the following mysqldump command:

mysqldump --single-transaction --quick mydatabase attachments --password=abc123 -u root > d:\attachments.sql   

And this gives the error:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table attachments at row: 251249

The server has 8 GB RAM; some of the relevant settings are copied below. It is an InnoDB database/table.

innodb_buffer_pool_size=3000M
innodb_log_file_size=1113M
max_allowed_packet=2024M
query_cache_size=52M
tmp_table_size=500M
myisam_sort_buffer_size=50M
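Error 2013 during a dump often means the server closed the connection mid-query rather than the client running out of memory, so the network timeouts are worth checking alongside the memory settings (a sketch; the values are arbitrary examples):

-- on the server being dumped:
SET GLOBAL net_write_timeout = 600;  -- seconds the server will wait while writing to the client
SET GLOBAL net_read_timeout = 600;

-- then retry, keeping --quick so rows are streamed rather than buffered:
-- mysqldump --single-transaction --quick mydatabase attachments ...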

Constraint to one of the primary keys as foreign key

Posted: 08 Oct 2013 02:25 PM PDT

Table1: grid_col (col_id, f_id, f_value)
    (col_id, f_id) is the primary key.

Table2: grid (grid_id, col_id, text)
    (grid_id) is the primary key.

I want a constraint on grid such that col_id must be present in grid_col. I can't have a foreign key constraint here. I could create a function-based constraint which scans grid_col while inserting into grid, but that increases the chance of deadlocks. How do I add a constraint here?
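One structural workaround is to give col_id its own parent table, so both grid_col and grid can use ordinary foreign keys (a sketch with hypothetical types):

CREATE TABLE col (
    col_id INT PRIMARY KEY
);

ALTER TABLE grid_col
    ADD CONSTRAINT fk_grid_col_col FOREIGN KEY (col_id) REFERENCES col (col_id);

ALTER TABLE grid
    ADD CONSTRAINT fk_grid_col FOREIGN KEY (col_id) REFERENCES col (col_id);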

disk I/O error in SQLite

Posted: 08 Oct 2013 03:25 PM PDT

What are the possible things that would trigger a "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL INSERT statements.

Sample data in the file:

insert into files (filesize, filedate, md5, fullpath, origin) values (5795096, 1370159412, "e846355215bbb9bf5f30102a49304ef1", "SDs/16G-1/DSC00144.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5435597, 1370159422, "1a7bcf3a4aaee3e8fdb304ab995ff80f", "SDs/16G-1/DSC00145.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5121224, 1370159432, "16d28e83599c731657a6cd7ff97a4903", "SDs/16G-1/DSC00146.JPG", "SDs");

I tried inserting that in the db file with the following command:

$ sqlite3 allfiles.db < insert.sql  

See below the error that I get:

Error: near line 27: disk I/O error
Error: near line 28: disk I/O error
Error: near line 34: disk I/O error
Error: near line 39: disk I/O error
Error: near line 47: disk I/O error
Error: near line 129: disk I/O error

The input lines that don't generate an error are successfully included, but I don't understand why some lines have errors and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors on different lines, which means it's random (not related to the data itself). I tried adding pragma synchronous = off; and pragma temp_store = memory;, with no success. I'm running this on Lubuntu, which runs in a VirtualBox virtual machine. The host machine is Windows 7. The working directory of the files is a shared folder, i.e., a folder on the host machine. If I run it in a "local folder" on the guest machine, the error doesn't happen, although for some reason it's much slower... In any case, I'd like to know about the I/O error.

how to find similar word with more similarities

Posted: 08 Oct 2013 03:43 PM PDT

How do I find words that are similar to a search term (differing by only a letter or so)?

declare @inp nvarchar(max), @data nvarchar(max)

set @inp = 'You can dance, you can jive, having .... jove... jve, ...'
set @data = 'jeve'

select @inp as results where @inp like '%' + @data + '%'

This does not return any data, but I want to show:

results
=====
jive
jove
jve
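T-SQL's SOUNDEX/DIFFERENCE functions give a crude phonetic similarity score (4 = most similar) that catches exactly this kind of one-letter variation. A sketch, assuming the words have already been split out of @inp into rows:

DECLARE @data nvarchar(50) = 'jeve';

SELECT w.word AS results
FROM (VALUES ('You'), ('can'), ('dance'), ('jive'),
             ('having'), ('jove'), ('jve')) AS w(word)
WHERE DIFFERENCE(w.word, @data) = 4;   -- returns jive, jove, jve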

best way to copy data from SQL Server db to MYSQL DB(remote)

Posted: 08 Oct 2013 10:33 AM PDT

I currently copy data between my local SQL Server and a remote MySQL database by linking in SQL and running a data transformation task, but now that I'm sending lots of data it is getting very slow.

I'm just after some advice, please, on how anyone would recommend I go about getting large chunks of data from one to the other. I know there are various ways of doing this; I am just unsure whether I should:

  • Keep my link via SQL.
  • Export the data to a CSV file then run a script on the remote end.
  • Something else?

Just need a pointer in the right direction.

Creating the MySQL slow query log file

Posted: 08 Oct 2013 12:25 PM PDT

What do I need to do to generate the slow query log file in MySQL?

I did:

log_slow_queries = C:\Program Files\MySQL\MySQL Server 5.1\mysql-slow.log
long_query_time  = 1

What more do I need to do?
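In MySQL 5.1 the slow query log can also be switched on at runtime, which is an easy way to confirm the settings take effect without a restart (a sketch; the file path is an example):

SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = 'C:/mysql-slow.log';
SET GLOBAL long_query_time = 1;

SHOW VARIABLES LIKE 'slow_query%';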

SSIS Script to split string into columns

Posted: 08 Oct 2013 04:25 PM PDT

I have a dataset (a log file) with a number of columns; one of them is "Other-Data" below (an unordered string), and I need to parse that string to create derived columns according to the u values (u1, u2, u3, etc.). The output columns should be something like:

U1   U2   U3                   U4    U5    etc.
null odw  odw : CH : de : hom  null  null
EUR  sss  DE:de:hom            null  null
EUR  crm  crm                  null  null

Other-Data:

u3=odw : CH : de : hom;u2=odw : Product : DSC-HX20V;~oref=http://www.bidl.ch/lang/de/product/dsc-h-series/dsc-hx20v
u1=EUR;u2=sss:Checkout-Step4:Orderacknowledgement;u3=DE:de:hom;u11=1;u12=302338533;u13=SVE1511C5E;u14=575.67;~oref=https://shop.bidl.de/shop/bibit/success.do
u15=1187;u13=SVE14A1C5E~VAIOEWY401;u11=1~1;u10=843.9~121.14;u9=1038~149;u3=crm : FI : fi : hom;u1=EUR;u2=crm : Checkout : Order acknowledgement;~oref=https://shop.bidl.fi/shop/bibit/success.do

Can anyone help with this?

Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100

Posted: 08 Oct 2013 10:25 AM PDT

I have a SQL Server 2008 R2 server with a bunch of published databases that are currently operating under compatibility level 90 (2005).

The subscription databases are also SQL Server 2008 R2, however the destination databases are set to compatibility level 100 and replication is working fine.

If I change the compatibility level for the Published databases, will it affect replication in any way, or will it just be a case of reinitializing all the subscriptions and restarting replication?

I suspect that changing the published database compatibility level may change how the replication stored procedures function slightly, but I'm not 100% sure.

Is this the case?

[MS SQL Server] TEMPDB is growing to 400GB

TEMPDB is growing to 400GB

Posted: 08 Oct 2013 05:15 AM PDT

In our production server, tempdb is suddenly growing to 400 GB (earlier, it would reach a maximum of 210 GB). There have been no changes at the server level or database level. It is not happening on another server whose configuration is the same as the production server's. Some more information: the server is SQL Server 2008 R2, and it is always executing some stored procedures (15 sessions simultaneously) which push data to a web server. Can someone help?

MSSQL Vulnerabilities

Posted: 08 Oct 2013 04:32 AM PDT

Do we have to review the monthly Microsoft security bulletin to identify vulnerabilities affecting SQL Server? If so, do we have to apply the patches every month on the server? Please clarify. My current SQL edition: SQL Server 2008 R2 SP2 Enterprise. Windows: Windows Server 2008 R2 SP1.

Backup users and permissions before database restore

Posted: 07 Oct 2013 05:46 AM PDT

Hi. We are refreshing our test environment daily from our production backup, but the users are losing their permissions in the dev environment. Can anyone suggest how to automate the process of backing up users and their permissions, and executing the permission script post-refresh? I have a query that extracts the user and role permissions for a particular database; I need to automate this process. Please post your suggestions. Thanks in advance.

Backup of msdb stuck in rollback

Posted: 08 Oct 2013 03:02 AM PDT

A native full backup of msdb and a Tlog backup on a SQL Server 2005 Enterprise instance (9.0.4053) deployed on a two-node Win2K3 cluster (Microsoft Windows NT 5.2 (3790)) are currently stuck in rollback. The msdb DB is in simple recovery mode but the Tlog is full. The user DB is in full recovery mode. Attempts to add secondary Tlogs are failing due to a 3023 error (regarding the action needing to be serialized). Options I'm considering are moving non-DB files off the volume to allow for autogrow of the Tlog, and an instance restart. I've tried changing the state of the DB from multi-user to single-user, but the SPID stuck in rollback is blocking. SPID 126 is the session in question below. Are there other options I can consider?

[code]
session_id  status     blocking_session_id  wait_type       wait_time   wait_resource                     transaction_id
1098        suspended  0                    TRACEWRITE      828                                           25828712297
100         suspended  0                    WAITFOR         687                                           0
116         suspended  0                    WAITFOR         1031                                          0
126         suspended  0                    BACKUPTHREAD    1611623843                                    0
201         suspended  441                  LCK_M_U         736093      DATABASE: 15 [BULKOP_BACKUP_LOG]  0
249         suspended  0                    TRACEWRITE      62                                            41659048209
323         suspended  0                    WAITFOR         1625                                          0
418         suspended  0                    PAGEIOLATCH_SH  0           18:1:2809411                      43527231806
441         suspended  0                    BACKUPTHREAD    1307928843                                    0
500         suspended  0                    TRACEWRITE      390                                           43040835468
[/code]

Checkpoint process

Posted: 07 Oct 2013 08:44 PM PDT

Hi All. I've seen it posted all over the web that the checkpoint pages/sec counter is one of the counters used to check for memory pressure. This makes no sense to me, because the checkpoint process never actually removes pages from memory; it takes dirty pages, writes the changes to disk, and marks the pages as clean. As far as I understand, those pages still exist in memory. Is my understanding correct? Thanks

[Articles] The 2013 SQL in the City Tour

The 2013 SQL in the City Tour

Posted: 07 Oct 2013 11:00 PM PDT

Steve Jones starts the 2013 SQL in the City tour today. At least for him. He's on an airplane in order to be ready for the first US event tomorrow.

[SQL 2012] I didn't understand how this query works?

I didn't understand how this query works?

Posted: 08 Oct 2013 02:47 AM PDT

select * from empolyee where name > 'ax'

ID  NAME        SALARY  DEPT  PREVIOUS_SAL
1   amol singh  2000    10    2200
2   bill gates  3000    30    3300
3   hasan mir   7000    20    7700
4   robin       9000    50    9600

How does this comparison work?
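If it helps, string comparison in SQL Server is lexicographic under the column's collation: the values are compared character by character, so a name qualifies as soon as it sorts after 'ax'. A small sketch of the rule:

SELECT CASE WHEN 'amol singh' > 'ax' THEN 'yes' ELSE 'no' END,  -- no:  'am' sorts before 'ax'
       CASE WHEN 'bill gates' > 'ax' THEN 'yes' ELSE 'no' END;  -- yes: 'b' sorts after 'a'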

SSIS SharePointList Adapters with Visual Studio/SQL Server 2012 - Destination Connector Error

Posted: 08 Oct 2013 03:06 AM PDT

Has anyone successfully used the newest SharePointList Adapters for 2012 at [url=https://sqlsrvintegrationsrv.codeplex.com/downloads/get/607593]SharePointListAdaptersSetup-Beta-2012-01-28.msi[/url], specifically the Destination connection? I have a working 2008 package, and there, when you edit the SharePointList Destination and click on the Refresh button, the columns on the SharePoint list are imported so that you can then map whichever columns you may want to update. In this more recent SharePoint adapter, however, this produces the error shown in the attached screen snippet. And since I cannot get the columns, I am unable to update the two that my package should write back to SharePoint. Incidentally, this beta version of the SharePointList adapters [u]only installs a 32-bit[/u] version, and not a 64-bit one, so you will find them only in the Program Files (x86) tree. Any help, comments, suggestions, etc. are greatly appreciated - thanks! Larry

trying to create a table - unpivot

Posted: 08 Oct 2013 01:18 AM PDT

Hi :-), I am having problems understanding the UNPIVOT function in SQL. I have a query:

[code="sql"]
select RecordDay, ClickCount
into #myclicks
from dbo.clicks
where RecordDay >= getdate()-38
[/code]

whose outcome will be (with the hours, minutes and seconds):

RecordDay  ClickCount
8/31/2013  1
8/31/2013  1
8/31/2013  6
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  1
8/31/2013  2

I want to build from my temp table something that will look like this, where the measure is the sum of ClickCount:

RecordDay  8/31/2013  9/7/2013  9/14/2013  9/21/2013
0
1
2
...
23

(one row per hour of the day, 0 through 23). I know I will need to define on the columns

[code="sql"]
datepart(weekday, getdate()) = datepart(weekday, datecreated)
[/code]

in order to get the same day of the week, and I imagine I will need to do a DATEADD to group it by minutes on the rows. But I am getting extremely lost here. Help please :w00t:

split performance

Posted: 07 Oct 2013 10:46 PM PDT

Hi all, I have tried the two functions below to split a string, but both are working slowly.

[b]Function 1, using SUBSTRING and a WHILE loop:[/b]

[code="sql"]
ALTER function [dbo].[uf_split]
(
    @String nvarchar(MAX),
    @Delimiter nvarchar(10)
)
returns @ValueTable table ([Value] nvarchar(MAX))
begin
    declare @NextString nvarchar(MAX)
    declare @Pos int
    declare @NextPos int
    declare @CommaCheck nvarchar(1)

    --Initialize
    set @NextString = ''
    set @CommaCheck = right(@String,1)

    --Check for trailing Comma, if not exists, INSERT
    --if (@CommaCheck <> @Delimiter )
    set @String = @String + @Delimiter

    --Get position of first Comma
    set @Pos = charindex(@Delimiter,@String)
    set @NextPos = 1

    --Loop while there is still a comma in the String of levels
    while (@pos <> 0)
    begin
        set @NextString = substring(@String,1,@Pos - 1)
        insert into @ValueTable ([Value]) values (@NextString)
        set @String = substring(@String,@pos + 1,len(@String))
        set @NextPos = @Pos
        set @pos = charindex(@Delimiter,@String)
    end
    return
end
[/code]

[b]Function 2, with XML:[/b]

[code="sql"]
ALTER function [dbo].[split]
(
    @qustion_option_id varchar(max),
    @separator char
)
RETURNS @list TABLE (item varchar(max))
BEGIN
    DECLARE @xml XML
    SELECT @xml = CAST('<A>' + REPLACE(@qustion_option_id, @separator, '</A><A>') + '</A>' AS XML)

    INSERT INTO @list
    SELECT t.value('.', 'varchar(max)')
    FROM @xml.nodes('/A') AS xml(t)

    RETURN
END
[/code]

Is there any other way to do the split function efficiently?
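One widely used alternative on these forums is an inline table-valued function built on a tally (numbers) list, in the style of Jeff Moden's DelimitedSplit8K; being inline, it avoids the multi-statement table-variable overhead of both functions above. A simplified sketch (single-character delimiter, varchar(8000) input):

[code="sql"]
CREATE FUNCTION dbo.uf_split_tally (@String varchar(8000), @Delimiter char(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t(N)),
     E4(N) AS (SELECT 1 FROM E1 a, E1 b, E1 c, E1 d),  -- up to 10,000 rows
     Tally(N) AS (SELECT TOP (DATALENGTH(ISNULL(@String, 1)))
                         ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4),
     Starts(N) AS (SELECT 1 UNION ALL
                   SELECT N + 1 FROM Tally WHERE SUBSTRING(@String, N, 1) = @Delimiter)
-- each start position runs to the character before the next delimiter (or end of string)
SELECT Item = SUBSTRING(@String, N,
                        ISNULL(NULLIF(CHARINDEX(@Delimiter, @String, N), 0) - N, 8000))
FROM Starts;
[/code]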

Migrating SQL Jobs from SQL 2005 to SQL 2012

Posted: 08 Oct 2013 01:17 AM PDT

This week we migrated from SQL Server 2005 to SQL Server 2012. I just looked in SSMS 2012 and found none of our SQL Jobs there. How do I migrate them? Do I generate their SQL scripts and do it that way?

dbWarden HealthReport won't run on SQL 2012 standard

Posted: 07 Oct 2013 05:35 PM PDT

Hi everybody, I am using dbWarden on 8 SQL Servers. Everything runs fine, except my SQL 2012 Standard won't execute the health report. Error from the job history: "Null value is eliminated by an aggregate or other SET ...". If I try to execute the SP directly, I get the following errors: "Warning: Null value is eliminated by an aggregate or other SET operation." and "Msg 213, Level 16, State 7, Procedure sp_replmonitorhelppublication, Line 320: Column name or number of supplied values does not match table definition." Has anybody an idea? I get emails when there are blockings etc.; only the health report won't run. SQL 2012 Standard SP1 64-bit, Windows Cluster 2008. It is our distribution cluster, nothing else. Thanks. Regards, Andreas

SQL Tools 2012

Posted: 07 Oct 2013 06:43 AM PDT

Hello Everyone. I have some SSIS packages that I converted to SQL 2012 using the SQL tools (what was once BIDS). I would like to edit one, but none of the tools are active. Has anyone else experienced this? Thank you in advance for your assistance, comments and suggestions.

Setting database OFFLINE and ONLINE does not "appear" to clear the cache.

Posted: 07 Oct 2013 05:48 AM PDT

Per BOL: [url]http://msdn.microsoft.com/en-us/library/bb522682.aspx[/url] [quote]The plan cache for the instance of SQL Server is cleared by setting one of the following options[/quote], which include OFFLINE and ONLINE. I don't doubt it, but we like to test things. So far, setting the database offline, even for 10 minutes, and bringing it back online did not appear to clear the cache, as evidenced by no differences before and after in sys.dm_exec_query_stats, sys.dm_exec_cached_plans, and:

[code="sql"]
SELECT OBJECT_NAME(p.object_id) AS [ObjectName]
     , p.object_id
     , p.index_id
     , COUNT(*) / 128 AS [buffer size(MB)]
     , COUNT(*) AS [buffer_count]
FROM sys.allocation_units AS a
INNER JOIN sys.dm_os_buffer_descriptors AS b
    ON a.allocation_unit_id = b.allocation_unit_id
INNER JOIN sys.partitions AS p
    ON a.container_id = p.hobt_id
GROUP BY p.object_id, p.index_id
ORDER BY buffer_count DESC;
[/code]

Any thoughts on what we are doing wrong to test this? We tested on 2008 R2 and 2012 instances.

Additional Log Shipping secondary server - how to

Posted: 07 Oct 2013 07:15 AM PDT

I have an existing 2008 R2 primary server shipping logs from ODS to a third server. I want to add my data warehouse server to the existing log shipping scheme as a secondary server. I know this is likely an idiotic question to those who have done it a bazillion times, but how can this be done? :ermm:

[T-SQL] Filter Countries, States and Cities which are in the same row in one table.

Filter Countries, States and Cities which are in the same row in one table.

Posted: 07 Oct 2013 08:04 PM PDT

I have a table which contains the data of users with different country, state and city, in which state and city may or may not be null. All these data are in one table. The table is like:

UserId Not Null
Username Not Null
Fullname Not Null
CountryId Not Null
StateId Null
CityId Null

I need data based on:

1. CountryId
2. CountryId and StateId
3. CountryId, StateId and CityId

Please suggest the possible queries.
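A common way to cover all three cases in one statement is the optional-parameter pattern, where a NULL parameter disables its filter (a sketch against a hypothetical Users table; OPTION (RECOMPILE) lets the optimizer plan for the actual values):

[code="sql"]
SELECT UserId, Username, Fullname
FROM dbo.Users
WHERE CountryId = @CountryId
  AND (@StateId IS NULL OR StateId = @StateId)
  AND (@CityId IS NULL OR CityId = @CityId)
OPTION (RECOMPILE);
[/code]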

LOCK/Wait on Begin Tran and Commit

Posted: 08 Oct 2013 12:54 AM PDT

Hi. We have a script with insert statements between BEGIN TRANSACTION and COMMIT. The data from those insert statements should populate a few tables. When two of us run the script at the same time, some of my data and some of the other person's data get populated. I would like to know the best option to get rid of this issue. Should we use NOLOCK, or WAIT, or is there another approach that is better? Can somebody suggest? Thanks
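If the requirement is that only one copy of the script runs at a time, an application lock serializes the runs without changing the tables' locking behavior (a sketch; the resource name is arbitrary):

[code="sql"]
BEGIN TRANSACTION;

EXEC sp_getapplock @Resource = 'MyPopulateScript',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction';  -- released automatically at COMMIT/ROLLBACK

-- INSERT statements here ...

COMMIT TRANSACTION;
[/code]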

Out of memory error

Posted: 20 Sep 2013 01:01 AM PDT

I have a procedure which returns many rows in the grid (over 1 million), and when I run it I get an out-of-memory error. I tried running the script with the "discard results after execution" box checked, and not only was I able to process many more records, but the execution time was cut down from about an hour to 5 minutes. The checkbox I am referring to is located under Tools > Options > Query Results > SQL Server > Results to Grid > Discard results after execution. Is there an equivalent T-SQL command that can perform the same configuration? Having this box checked still processes the script but doesn't produce any results in the grid. The script now runs so much faster, and memory isn't getting used up by the massive number of rows that were previously being returned. Since this will be run in production, I would like to include this option in the procedure/script instead of selecting it from the GUI.

SELECT for UPDATE....

Posted: 08 Oct 2013 12:23 AM PDT

Hi. We have several tables (sales, orders, ...) that, when they're updated, insert a record in a pool so that the customer balance can be updated by a scheduled procedure. The scheduled procedure currently has a cursor that goes through all the records in the pool, runs a big update query, and then deletes the record from the pool (no risk of doing the same record twice, since SQL Agent doesn't start a new schedule until the old one has finished). A scheduler log entry is also inserted in a table with the number of records processed.

[code="sql"]
SET @cnt = 0

DECLARE cur CURSOR FOR SELECT Id, Customer FROM UpdatePool
OPEN cur
FETCH NEXT FROM cur INTO @id, @customer
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE ...... WHERE customerid = @customer
    SET @cnt = @cnt + 1
    DELETE FROM UpdatePool WHERE id = @id
    FETCH NEXT FROM cur INTO @id, @customer
END
CLOSE cur
DEALLOCATE cur

INSERT INTO PoolLog (Date, NumProcs) VALUES (GETDATE(), @cnt)
[/code]

Is there any way of doing this with an UPDATE .. SET ... WHERE customerid IN (SELECT customerId FROM UpdatePool) and then deleting the processed records? I could try to put the INSERTED.customerId from the UPDATE in a table variable and then delete from UpdatePool where customerid IN ..., but I could delete more recent processing requests (while I'm processing a 2nd customerid, a request for the 1st can be inserted again...). Thanks, Pedro
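One set-based pattern that avoids the "newer request sneaks in" race is to delete the pool rows first, capturing the deleted snapshot with an OUTPUT clause, and then update only from that snapshot (a sketch; the Customers table and its SET clause stand in for the big update elided in the question):

[code="sql"]
DECLARE @processed TABLE (Id int, Customer int);

-- empty the pool and capture its contents in one atomic statement
DELETE FROM UpdatePool
OUTPUT DELETED.Id, DELETED.Customer INTO @processed (Id, Customer);

UPDATE c
SET    c.Balance = c.Balance  -- the original big UPDATE's SET clause goes here
FROM   Customers AS c
JOIN   (SELECT DISTINCT Customer FROM @processed) AS p
  ON   c.customerid = p.Customer;

INSERT INTO PoolLog (Date, NumProcs)
VALUES (GETDATE(), (SELECT COUNT(*) FROM @processed));
[/code]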

tsql script check existing data on creation

Posted: 07 Oct 2013 08:55 AM PDT

I am missing something here. I need to create a script to do this. To disable, I use the following command:

[code="sql"]
EXECUTE sp_msforeachtable @command1 = "Print 'Test'", @Command2 = "ALTER TABLE Foo.tblFoo NOCHECK CONSTRAINT all"
[/code]

To enable the FK I do this:

[code="sql"]
EXECUTE sp_msforeachtable @command1 = "Print 'Test'", @Command2 = "ALTER TABLE Foo.tblFoo CHECK CONSTRAINT all"
[/code]

The "check existing data on creation" property on the table is still set to "No". I can't seem to Google a solution. Do any of you know if this can be done without using the UI?
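If I understand the property correctly, it maps to whether SQL Server trusts the constraint, and re-enabling with WITH CHECK (note the doubled keyword) revalidates the existing rows and flips it back (a sketch using the same pattern as above):

[code="sql"]
EXECUTE sp_msforeachtable @command1 = "Print 'Test'",
    @Command2 = "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"

-- verify: is_not_trusted should now be 0
SELECT name, is_not_trusted FROM sys.foreign_keys;
[/code]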

import string with array into table

Posted: 07 Oct 2013 02:31 AM PDT

Hello again. I am running into a problem importing a string containing an array into a table. My string looks like:

'[{mark,peters,mr}{jane,fonda,mrs}{john,doo,mr}{james,bond,mr}]'

I want to import this into a temp table that looks like:

ID, Fname, Lname, Sex
1, mark, peters, mr
2, jane, fonda, mrs
3, john, doo, mr
4, james, bond, mr

Can someone help me with this? I really have no clue how to solve this. Thanks a lot! Mike
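A sketch of one approach, assuming SQL Server and any splitter function that returns the items as rows (like the ones discussed elsewhere in this digest): strip the brackets, turn the '}{' boundaries into a single delimiter, split into rows, then let PARSENAME pull the three parts apart after swapping commas for dots.

[code="sql"]
DECLARE @s varchar(8000) = '[{mark,peters,mr}{jane,fonda,mrs}{john,doo,mr}{james,bond,mr}]';

-- '[{a,b,c}{d,e,f}]' -> 'a,b,c|d,e,f'
SET @s = REPLACE(REPLACE(REPLACE(REPLACE(@s, '[', ''), ']', ''), '}{', '|'), '{', '');
SET @s = REPLACE(@s, '}', '');

SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS ID,
       PARSENAME(REPLACE(Item, ',', '.'), 3)      AS Fname,
       PARSENAME(REPLACE(Item, ',', '.'), 2)      AS Lname,
       PARSENAME(REPLACE(Item, ',', '.'), 1)      AS Sex
FROM dbo.uf_split_tally(@s, '|');  -- substitute your splitter of choice
[/code]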

lookup table for minutes of a day

Posted: 07 Oct 2013 09:01 AM PDT

Does anyone know of a website, or have a query, that generates a table with a row for each minute of the day, starting 2 years ago and going 10 years into the future? I need it to have values in one-minute increments (1440 rows/minutes for each day):

2013-10-07 04:32
2013-10-07 04:33
2013-10-07 04:34
2013-10-07 04:35
2013-10-07 04:36
2013-10-07 04:37

Thanks!
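A cross-joined row source plus DATEADD can generate the whole series in one statement; 12 years is roughly 6.3 million minutes (a sketch; adjust the start date and span as needed):

[code="sql"]
DECLARE @start datetime = DATEADD(YEAR, -2, CONVERT(date, GETDATE())),
        @minutes int;
SET @minutes = DATEDIFF(MINUTE, @start, DATEADD(YEAR, 12, @start));

WITH n AS (
    SELECT TOP (@minutes)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS i
    FROM sys.all_columns a CROSS JOIN sys.all_columns b  -- plenty of rows
)
SELECT DATEADD(MINUTE, i, @start) AS MinuteStamp
FROM n;
[/code]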

Unpivot SQL Query

Posted: 07 Oct 2013 08:23 AM PDT

I have a table structure (tblMapping) like the following:

TableName  ColumnName   ColumnValue
Product    ProductID    1
Product    ProductID    2
Product    ProductName  Keyboard
Product    ProductName  Mouse

I want to convert from column-based data to row-based data. I tried the following query, but got syntax errors:

[code="sql"]
SELECT col, value
FROM
(
    SELECT DISTINCT ColumnName
    FROM tblMapping
    WHERE TableName = 'Product'
) p
UNPIVOT
(
    ColumnValue FOR ColumnName IN (SELECT DISTINCT ColumnName
                                   FROM tblMapping
                                   WHERE TableName = 'Product')
) AS unpvt;
[/code]

Msg 156, Level 15, State 1, Line 7: Incorrect syntax near the keyword 'SELECT'.
Msg 102, Level 15, State 1, Line 9: Incorrect syntax near ')'.

Not sure what I'm doing wrong. Thanks for the help.

Order with unpivot

Posted: 07 Oct 2013 06:12 AM PDT

I have an unpivoted table that seems ordered the way I want, but I am not sure if it is guaranteed to be that way. If I have a table:

[code]
Year  TotalNew   TotalLost  NetGain
2013  29544.48   -10832.00  18712.48
2012  35549.67   -20252.65  15297.02
2011  136816.89  -22860.54  113956.35
2010  45795.48   -54933.17  -9137.69
2009  61113.35   -19419.3   41694.05
[/code]

And I want to unpivot it:

[code]
SELECT ActivityYear, Activities, Activity
FROM
(
    SELECT BoundOrLostYear AS ActivityYear
         , Totalnew
         , TotalLost
         , NetGain
    FROM @FinalTable2
) p
UNPIVOT
(
    Activity FOR Activities IN (Totalnew, TotalLost, NetGain)
) b
[/code]

This comes out perfect:

[code]
Year  Activities  Activity
2013  Totalnew    29544.48
2013  TotalLost   -10832
2013  NetGain     18712.48
2012  Totalnew    35549.67
2012  TotalLost   -20252.65
2012  NetGain     15297.02
2011  Totalnew    136816.89
2011  TotalLost   -22860.54
2011  NetGain     113956.35
2010  Totalnew    45795.48
2010  TotalLost   -54933.17
2010  NetGain     -9137.69
2009  Totalnew    61113.35
2009  TotalLost   -19419.3
2009  NetGain     41694.05
[/code]

I need to make sure that the years are ordered as they are, and that the 2nd column is also ordered as it is. If it isn't, I could use an ORDER BY on the year in descending order, but I couldn't do the same for the second column. How would I change the unpivot statement to fix this, if it only just happens to go this way? Thanks, Tom
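Without an ORDER BY the order is indeed not guaranteed; a CASE expression gives the second column a deterministic sort key (a sketch to append to the query above):

[code]
ORDER BY ActivityYear DESC,
         CASE Activities
             WHEN 'Totalnew'  THEN 1
             WHEN 'TotalLost' THEN 2
             WHEN 'NetGain'   THEN 3
         END
[/code]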

How to rename a column in all tables and views

Posted: 07 Oct 2013 05:35 AM PDT

Hi. I want to change the name of a column in all tables and views (rename the field "Phone" to "TelePhone"). sp_rename can change only one table at a time. Please help me.
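One way is to generate the sp_rename calls from sys.columns and run the output; note that views don't get renamed this way and would need to be refreshed or re-created afterwards (a sketch):

[code="sql"]
SELECT 'EXEC sp_rename '''
       + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
       + '.Phone'', ''TelePhone'', ''COLUMN'';'
FROM sys.tables AS t
JOIN sys.columns AS c ON c.object_id = t.object_id
WHERE c.name = 'Phone';

-- afterwards, refresh each dependent view, e.g.:
-- EXEC sp_refreshview 'dbo.MyView';
[/code]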

Need Help

Posted: 07 Oct 2013 02:12 AM PDT

Hi. I am trying to delete the files in a folder except for the last created file in that folder. Can you please help me with the logic? The files in the folder are SQL Server audit files with the extension .sqlaudit. I cannot choose the files based on date, because if no auditing data is present on today's date, the old files could all be deleted. I am looking for logic which looks at the files in the folder and deletes all of them except the last created file. I am using SQL Server 2008 R2 Enterprise Edition RTM 64-bit on Windows Server 2008 R2 Enterprise Edition SP1 64-bit. Thank You
