Tuesday, October 8, 2013

Advice on SQL fundamentals online training [on hold]

Posted: 08 Oct 2013 07:17 PM PDT

Sorry for the very noob question here, but can anyone tell me the best crash course in SQL fundamentals I could study for table maintenance (it's an Oracle shop, by the way) and minor config changes? I have the chance at a promotion and I need these skills (I have all the other skills required, but SQL is where I fall over, sadly). I will have a month to learn.

Cheers

Import CSV file into MySQL with Multiple Delimiters/Field Separators

Posted: 08 Oct 2013 05:59 PM PDT

I'm trying to import a large CSV file into MySQL. Unfortunately, the data within the file is separated by both spaces and tabs.

As a result, whenever I load the data into my table, I end up with countless empty cells (because MySQL only recognizes one field separator). Modifying the data before importing it is not an option.

Here is an example of the data:

# 1574    1 1 1
$ 1587    6 6 2
$115 1878    8 9 23

(Where the second and third value of every row are separated by a tab)

Any ideas?
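
A rough sketch of one possible workaround, with hypothetical file, table, and column names: load each whole line into a single-column staging table, then normalise the separators and split in SQL (the nested REPLACE trick collapses runs of spaces; SUBSTRING_INDEX pulls out the nth field). Five columns per row is assumed purely for illustration.

-- 1) stage every whole line in one column
CREATE TABLE csv_staging (raw_line TEXT);

LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE csv_staging
FIELDS TERMINATED BY '|'      -- any character that never occurs in the data
LINES TERMINATED BY '\n'
(raw_line);

-- 2) turn tabs into spaces, collapse repeated spaces, then split out the fields
INSERT INTO my_table (col1, col2, col3, col4, col5)
SELECT SUBSTRING_INDEX(norm, ' ', 1),
       SUBSTRING_INDEX(SUBSTRING_INDEX(norm, ' ', 2), ' ', -1),
       SUBSTRING_INDEX(SUBSTRING_INDEX(norm, ' ', 3), ' ', -1),
       SUBSTRING_INDEX(SUBSTRING_INDEX(norm, ' ', 4), ' ', -1),
       SUBSTRING_INDEX(SUBSTRING_INDEX(norm, ' ', 5), ' ', -1)
FROM (SELECT REPLACE(REPLACE(REPLACE(REPLACE(raw_line, '\t', ' '),
             ' ', '<>'), '><', ''), '<>', ' ') AS norm
      FROM csv_staging) AS t;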

Bidirectional synchronization between a local SQL Server 2005 database and a SQL Azure database using SQL Data Sync

Posted: 08 Oct 2013 08:20 PM PDT

I need to synchronize a SQL Server 2005 database with a SQL Azure database. In other posts I've read that I can do this using SQL Data Sync, but I don't know whether SQL Data Sync can perform the synchronizations I need without exceeding its limitations. This is the exact process I need: first, synchronize one table from Azure to SQL Server; second, execute some stored procedures on the SQL Server 2005 instance; and third, synchronize various tables from SQL Server to Azure, sequentially.

Thanks!

How to move a CSV file into an Oracle database using SQL Developer?

Posted: 08 Oct 2013 09:02 PM PDT

How can I import a CSV file into an Oracle database using SQL Developer (rather than SQL*Loader), without having to create the table first?

Creating PostGIS extension in single-user mode

Posted: 08 Oct 2013 03:55 PM PDT

I am trying to create the PostGIS extension for my PostgreSQL database while running in single-user mode, using the following command:

echo "CREATE EXTENSION postgis;" | su postgres sh -c "/usr/lib/postgresql/9.2/bin/postgres --single -D /var/lib/postgresql/9.2/main --config_file=/etc/postgresql/9.2/main/postgresql.conf dbname"

which returns

UTC ERROR: type reclassarg[] does not exist

I am doing this while deploying a Docker container, so I cannot use psql, since the database is not running at that moment. After the Docker deployment is finished and the database is started, running psql -d dbname -c "CREATE EXTENSION postgis;" works like a charm, which means that PostGIS is installed.

What does this type reclassarg[] does not exist message mean and how can I create the PostGIS extension from the single-user mode?

Database design question

Posted: 08 Oct 2013 06:18 PM PDT

Hello and thank you for reading my post. I work for a company that houses many gigabytes of data in SQL Server. The company is in the process of taking a huge step forward to re-architect and re-organize this data.

The following facts can be summarized about the environment.

  1. There are approximately 30 SQL Server instances that contain a total of around 250 databases.
  2. The system as a whole consists of hundreds of one-off programs that created individual databases, UIs, reports, etc.
  3. As time went along, each new database started to need data from the other databases, and linked servers were created, making them all intertwined and creating a data path that is difficult to decode and maintain.
  4. There are about 10 core types of data mixed into this, spread across roughly 10 major applications.
  5. The published data is relatively static, needing only monthly pushes to the release version of the data.


The following approach is currently being researched:

  1. Create a new database that will house all data in normalized structures.
  2. Create SSIS packages to move the data up to the new database.
  3. Create another set of SSIS packages to maintain the data by finding the best path for the data and doing incremental updates to the new database.
  4. Over time, rewrite applications against the new data model and retire the associated SSIS packages.
  5. The proposed structure of the new database (per the data architect) would create several tables that have upwards of 10 billion rows each.
  6. With this proposed model, many joins would be required between these huge tables and others; one query could join up to 15-20 tables. The results will all be delivered to a website front end via RESTful Web API services, where the business requires a 1-second response.

Questions

Will SQL Server realistically be able to live up to this performance-wise? Those huge tables will have a small record size, consisting of about 8 integer fields and 2 text fields.

Should the company be looking at more of a big-data solution like Hadoop? Although that architecture may look more appropriate, there is no internal knowledge of anything except SQL Server, which is version 2012.

Thank you for any insight you are able to provide.

Which string data types in MySQL support UTF-8?

Posted: 08 Oct 2013 03:35 PM PDT

I have a table that stores a string from a user, which is displayed in a web form. But as I see, when the user inputs text in, e.g., a Cyrillic-alphabet language (which I assume is UTF-8), garbage is displayed back. I did a SHOW CREATE TABLE and saw that the table is defined as latin1 and the column that stores the string is defined as TEXT. I am not clear on what type of data TEXT stores. Is it only for ASCII? Should I be using a different data type? Which would be the most appropriate?
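
For what it's worth, TEXT is not ASCII-only; it stores whatever the column's character set allows, and latin1 simply cannot hold Cyrillic, which is why garbage comes back. The usual fix is to convert the column (or the whole table) to a UTF-8 character set and make sure the connection uses UTF-8 as well. A sketch with a hypothetical table/column name, using utf8mb4 (use utf8 on servers older than 5.5.3):

-- convert every character column in the table
ALTER TABLE my_table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

-- or convert just the one column, keeping it a TEXT type
ALTER TABLE my_table MODIFY my_text_column TEXT CHARACTER SET utf8mb4;

-- the client connection must talk UTF-8 too, otherwise data is mangled on the way in
SET NAMES utf8mb4;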

Two possibilities of primary key

Posted: 08 Oct 2013 05:42 PM PDT

I have a table that holds the users' identification. This ID can be of two types, say int and varchar, but one table can't have two primary keys, and a composite key wouldn't solve my problem.

+--------------------+
| all_info           |
+--------------------+
| PK id1 varchar(50) |
| PK id2 int         |
| ...                |
+--------------------+

So I created two other tables with primary keys of the types I needed, added some information specific to each type of user, and related them to the single table that holds the rest of the user information (which both types share):

+--------------------+    +--------------------+
| unique_info1       |    | unique_info2       |
+--------------------+    +--------------------+
| PK id varchar(50)  |    | PK id int(10)      |
| ...                |    | ...                |
+--------------------+    +--------------------+

In the table that holds the shared user information I created an artificial primary key, so I don't have to check which column holds the user's ID every time I need to manipulate it, and also so I don't have to create two columns in every other table that relates to the user ID:

+-------------------------+    +-------------------------+
| similar_info            |    | other_table             |
+-------------------------+    +-------------------------+
| PK id int(10) [AI]      |    | PK char(3)              |
| ...                     |    | ...                     |
| FK uniq1_id varchar(50) |    | ...                     |
| FK uniq2_id int(10)     |    | FK similar_id int(10)   |
+-------------------------+    +-------------------------+

The thing is: is that the best approach, or should I change the plan?

For example:

  • Choose some other piece of information to be the users' ID, so that all IDs have the same type, and add all of the type-specific info to that whole table (which would result in a lot of null fields for each user)?
  • Create two completely different tables for the two types of users (which would result in redundant info)?
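
For what it's worth, a minimal DDL sketch of the supertype/subtype layout described above (names and types taken from the diagrams, the rest hypothetical); the CHECK that exactly one of the two external IDs is filled is only enforced on engines that support CHECK constraints:

CREATE TABLE similar_info (
    id       INT AUTO_INCREMENT PRIMARY KEY,   -- artificial key used by all other tables
    uniq1_id VARCHAR(50) NULL,
    uniq2_id INT NULL,
    -- ... shared user columns ...
    CONSTRAINT fk_uniq1 FOREIGN KEY (uniq1_id) REFERENCES unique_info1 (id),
    CONSTRAINT fk_uniq2 FOREIGN KEY (uniq2_id) REFERENCES unique_info2 (id),
    CHECK ((uniq1_id IS NULL) <> (uniq2_id IS NULL))
);

CREATE TABLE other_table (
    code       CHAR(3) PRIMARY KEY,
    similar_id INT NOT NULL,
    -- ... other columns ...
    CONSTRAINT fk_similar FOREIGN KEY (similar_id) REFERENCES similar_info (id)
);

This keeps a single surrogate key for all relationships while still allowing the two different natural ID types.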

Oracle: how to set only some roles as not default for a user (no GUI)

Posted: 08 Oct 2013 12:29 PM PDT

Scenario:

  1. I have instanceA
  2. I have instanceB
  3. I have USER1 in instanceA
  4. I have USER1 in instanceB
  5. USER1 in instanceA has four roles granted to it:
    • ROLE1 default yes admin_option no
    • ROLE2 default no admin_option yes
    • ROLE3 default no admin_option no
    • ROLE4 default yes admin_option yes
  6. USER1 in instanceB has no roles granted
  7. All mentioned roles exist also in instanceB although not granted to USER1
  8. I have already writen a script that generates the DDL in order to duplicate the role grants for USER1 in instanceB.

Problem:

  1. I haven't found a way to do the following programmatically, because a DEFAULT=NO option doesn't exist in the GRANT role syntax:

    • grant ROLE1 to USER1; -- admin option no
    • grant ROLE2 to USER1 DEFAULT NO with admin option;
    • grant ROLE3 to USER1 DEFAULT NO;
    • grant ROLE4 to USER1 with admin option;
  2. Notice that I want to replicate in instanceB the same role set USER1 has in instanceA, meaning two of the roles are default and two aren't.

  3. I've studied setting the default role off for ROLE2 and ROLE3 after granting them, using ALTER USER ... DEFAULT ROLE, but that only works for setting ALL roles to default, or setting ALL roles to non-default with NONE.

  4. I cannot find a way to set only ROLE2 and ROLE3 as non-default and ROLE1 and ROLE4 as default.
  5. I know that I can set this using the visual console, but I need to automate it, so I need a syntax-based way to do it.
  6. I don't want to make the roles ask for a password. None of the roles in these databases are password-protected roles.
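
One piece of syntax that appears to cover this (worth checking against your Oracle version): ALTER USER ... DEFAULT ROLE accepts an ALL EXCEPT list, so the grants can stay plain and the default flags can be set in a second, scriptable statement. A sketch:

-- grant the roles as in instanceA
GRANT role1 TO user1;
GRANT role2 TO user1 WITH ADMIN OPTION;
GRANT role3 TO user1;
GRANT role4 TO user1 WITH ADMIN OPTION;

-- then mark only ROLE2 and ROLE3 as non-default
ALTER USER user1 DEFAULT ROLE ALL EXCEPT role2, role3;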

Curious About SQL Server Registry Entries

Posted: 08 Oct 2013 12:08 PM PDT

I'm working on building some documentation across my systems and planned on simply using PowerShell to script out the legwork of what I wanted to do. While that's going well, I ran into an issue with my SQL Server registry entries: some of my SQL Servers have all of their registry values while some don't.

For example, I'm pulling down the SQL Shared Features entry so I know where that directory is on all servers. Some of them are pulling data back (E:\yada\yada) while others aren't pulling anything back. After further investigation, I have noticed that several of my SQL Servers don't have the expected registration data saved in the registry. This happens with several different registry entries that should be there.

Any reason it's like this?

Dropped index from view still referenced in execution plans

Posted: 08 Oct 2013 05:51 PM PDT

On a SQL Server 2005 Enterprise Edition server, an index has been dropped from a view. When running a SELECT * query that includes this view, no results are shown. In addition, the execution plan references this index on the view (which no longer exists). By adding the EXPAND VIEWS option to the query, the correct results are returned.

We have tried to clear the plan cache with DBCC FREEPROCCACHE, but the problem persists (the fresh cached plan still refers to this non-existent index). We have seen this problem before, and a restart of the SQL service corrected it, but I would like to find out the actual cause before we attempt that again.

Edit: I have tried sp_refreshview, with no change. The SQL build version is 9.00.3042 (SP2).

Review my simple database tables

Posted: 08 Oct 2013 01:49 PM PDT

Hey, I'm trying to create a simple database table for a small beer review project. It's been a while since I created DBs, so could anyone just tell me if I am totally wrong here?

Especially my use of unique identifiers; as I see it, they would always be unique?

Project description: a simple ASP.NET site containing beer data and information on specific key elements like the brewery, etc. People should be able to search by name and read about the beer.

Picture of simple database

UPDATE SET REPLACE() matches but does not change

Posted: 08 Oct 2013 01:05 PM PDT

I've seen another post about this exact thing, but it's 1-1/2 years old, so I thought I'd make my own... besides, the other post did not help me. What I am trying to do is pretty straightforward.

To be on the safe side, I tried a simple UPDATE statement on a different table, which was successful. So, here's my problem:

mysql> UPDATE bugs_fulltext SET short_desc =
       REPLACE(short_desc,'I don\'t know why my previous comment means.',
       'I don\'t know what my previous comment means.') WHERE bug_id=452;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

I'm not using wildcards nor am I using a LIKE statement. I am telling it which record to update. Why isn't my text changed?
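
"Rows matched: 1" only means the WHERE clause found the row; "Changed: 0" means REPLACE() returned the same string, i.e. the search text is probably not present exactly as typed (different quote character, extra whitespace, and so on). A quick diagnostic, sketched against the same table and column:

-- LOCATE returns 0 if the exact search string does not occur in the column
SELECT bug_id,
       LOCATE('I don''t know why my previous comment means.', short_desc) AS pos,
       short_desc
FROM bugs_fulltext
WHERE bug_id = 452;

If pos comes back as 0, the stored text differs from the search string, which would explain why nothing changes.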

Max Connection Pool capped at 100

Posted: 08 Oct 2013 10:20 AM PDT

I'm running SQL Server 2008 R2 SP1, on a Windows Server 2008 box. I have a .NET script running from Visual Studio 2010 that does the following:

  • Reaches into the database
  • Makes a change
  • Iterates

The total number of times it will iterate is 150; however, it stops at 100 connections and I can't figure out why. I could adjust my script to just use a single thread, but I'd prefer to know where I'm missing a max connection setting, as that will be more useful to know for future reference.

Here's where I've checked so far:

  • SQL Connection String in Visual Studio 2010 (it's set to 1000)
  • SSMS Database instance connection properties (it's set to 0 [infinity] user connections)
  • Googled some information on Server 2008, it looks like it can handle more than 100 connections
  • Stepped through my code alongside SP_WHO2 which gives more information on logical connections, seeing that the # of connections starts at 52 and the script errors with the "Max Pooled Connections reached" error at 152 logical connections.
  • Changed the connection string to use Data Source=PerfSQL02;Initial Catalog=Masked;Integrated Security=True;Max Pool Size=1000

I'm not sure where else to check, I know I have a lot of moving parts here but I'm getting the feeling I'm just missing a max pool setting somewhere.

How to script all permissions on a schema

Posted: 08 Oct 2013 09:14 AM PDT

SQL Server Management Studio allows you to create scripts for all DB objects; however, so far I couldn't find a way to correctly script a schema or user. The permissions of a user on a schema are not included in the script that is created. Did I do something wrong, or is MSFT here a bit sloppy?
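
If scripting it yourself is acceptable, the schema-level permissions can be generated from the catalog views; a rough T-SQL sketch (rows with state GRANT_WITH_GRANT_OPTION would need extra handling):

SELECT p.state_desc + ' ' + p.permission_name
       + ' ON SCHEMA::' + QUOTENAME(s.name)
       + ' TO ' + QUOTENAME(dp.name) + ';' AS grant_stmt
FROM sys.database_permissions p
JOIN sys.schemas s              ON p.major_id = s.schema_id
JOIN sys.database_principals dp ON p.grantee_principal_id = dp.principal_id
WHERE p.class_desc = 'SCHEMA';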

Data Model: Parent, Child, Grandchild with Child Being Optional

Posted: 08 Oct 2013 08:42 AM PDT

I posted the question in another forum and I was suggested to re-post it here which may be more appropriate. Thank you in advance.

My organization structure:
   Department -- has many sections ( and employees)
      Section -- has many employees
         Employee -- most of the employees belong to Section directly.

However, there are two departments which have no intermediate Sections, and the employees of those departments report to the Department Director directly.

I do not know what can be done to better reflect the business structure using an Entity Relationship model.
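
One common way to model an optional middle level, sketched below with hypothetical names in generic SQL: Employee always references its Department, and references a Section only when one exists (a nullable foreign key), so the two director-led departments simply leave SectionID empty.

CREATE TABLE Department (
    DepartmentID INT PRIMARY KEY,
    Name         VARCHAR(100) NOT NULL
);

CREATE TABLE Section (
    SectionID    INT PRIMARY KEY,
    DepartmentID INT NOT NULL,
    Name         VARCHAR(100) NOT NULL,
    FOREIGN KEY (DepartmentID) REFERENCES Department (DepartmentID)
);

CREATE TABLE Employee (
    EmployeeID   INT PRIMARY KEY,
    DepartmentID INT NOT NULL,
    SectionID    INT NULL,        -- NULL = reports to the Department Director directly
    Name         VARCHAR(100) NOT NULL,
    FOREIGN KEY (DepartmentID) REFERENCES Department (DepartmentID),
    FOREIGN KEY (SectionID)    REFERENCES Section (SectionID)
);

If it matters that an employee's section belongs to the same department, a composite foreign key on (DepartmentID, SectionID) against a unique key on Section can enforce that as well.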

I learned of a similar posting, but it is still different from what I need:
  http://stackoverflow.com/questions/2530215/model-a-zero-or-one-to-many-relationship?rq=1

John

Importing Multiple Trace Files To A SQL Server Table

Posted: 08 Oct 2013 10:22 AM PDT

I am in the process of importing 200+ trace files (which are massive), and my current approach is to loop and insert the trace data (see the script below). I looked around to see if there was a faster way to do this, whether through SSIS or C#, and it appears that they still call the same function used in the script below.

Anyone have any other methods that they use to import multiple traces? Don't get me wrong, the below code works, but I'm curious if there's something faster that I'm not considering.

DECLARE @start INT = 1, @stop INT = 223, @sql NVARCHAR(MAX), @path VARCHAR(1000)

WHILE @start <= @stop
BEGIN
    SET @path = 'N:\Traces\TraceFile_' + CAST(@start AS VARCHAR(5)) + '.trc'
    SET @sql = 'INSERT INTO SavedTraces
                SELECT *
                FROM ::fn_trace_gettable(''' + @path + ''', default)'
    EXECUTE sp_executesql @sql
    SET @start = @start + 1
    SET @path = ''
    SET @sql = ''
END

Data notes: a 490 MB (~0.5 GB) trace file, which holds 11,700,000+ rows, requires 13:11 minutes to import.

Can't stop MySQL server on Raspberry Pi

Posted: 08 Oct 2013 09:48 AM PDT

I can't stop the MySQL server on my Raspberry Pi. Using MySQL Workbench I have been able to start and stop the server perfectly fine for months; however, it now refuses to stop! How do I force it to stop?

I am using the Raspbian OS.

I am trying to import a file but it gives me the exception ORA-20001: Invalid Operation while calling a stored procedure; how can I solve this? [on hold]

Posted: 08 Oct 2013 05:04 PM PDT

I am trying to import a file, but while calling the procedure I get the exception ORA-20001: Invalid Operation.

Stored Procedure:

begin
  -- Call the procedure
  imp_pkg.isinmstimp(cpath => :cpath,
                     cfilename => :cfilename,
                     cusername => :cusername,
                     ndp_list_id => :ndp_list_id);
end;

if (imp_module == "m_importisin")
{
    //Call Procedure
    s_dsnstr = o_Cls_Utility.utl_fnGetDSNStr(SessionCheck.s_sessiondpid);
    OracleConnect o_Cls_OracleConnect = new OracleConnect(s_dsnstr);
    o_Cls_OracleConnect.CallStoredProcedure_PreInit();
    o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cpath", parameter2);
    o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cfilename", parameter3);
    o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cusername", parameter4);
    o_Cls_OracleConnect.CallStoredProcedure_Double_AddParameters("ndp_list_id", 1);
    int recordsInserted = o_Cls_OracleConnect.CallStoredProcedure("IMP_PKG.ISINMSTIMP");
    o_Cls_OracleConnect.Dispose();
}

I get the exception on the line below:

int recordsInserted = o_Cls_OracleConnect.CallStoredProcedure("IMP_PKG.ISINMSTIMP");   

I have another bit of code which executes correctly, but the stored procedure gives an exception:

value too large for column "CDSIL_MIGR"."ADDRESS"."ZIP" (actual: 11, maximum: 10)Row-no1

 if (imp_module == "m_impbankmaster")              {                  //Call Procedure                                 try                  {                      parameter2 = "d:\\imports";                      parameter3 = "CD100621U.002";                      parameter4 = "611";                        s_dsnstr = o_Cls_Utility.utl_fnGetDSNStr(SessionCheck.s_sessiondpid);                      OracleConnect o_Cls_OracleConnect = new OracleConnect(s_dsnstr);                      o_Cls_OracleConnect.CallStoredProcedure_PreInit();                      o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cpath", parameter2);                      o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cfilename", parameter3);                      o_Cls_OracleConnect.CallStoredProcedure_Varchar2_AddParameters("cusername", parameter4);                      o_Cls_OracleConnect.CallStoredProcedure_Double_AddParameters("ndp_list_id", 1);                      int recordsInserted = o_Cls_OracleConnect.CallStoredProcedure("IMP_PKG.BANKMSTIMP");                      o_Cls_OracleConnect.Dispose();                  }                  catch (Exception ex)                  {                      throw ex;                  }                           }  

Cross Join with Filter?

Posted: 08 Oct 2013 09:19 AM PDT

I need to make an SP to distribute students to their sections. The procedure takes two string parameters, StuID and SecID.

In the case where I send '1,2,3,4,5' as StuID and 'a,b' as SecID, I'm using a splitting function which will return these tables:

Tb1 |   Tb2
1   |    a
2   |    b
3   |
4   |
5   |

How can I get the following result?

1 a
2 b
3 a
4 b
5 a
....

I've tried to do it via a cross join, but it did not give the result I want:

select US.vItem as UserID, SE.vItem as Section
from split(@pUserID, ',') us
cross join split(@pSectionID, ',') se
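
A cross join pairs every student with every section, which is why it doesn't give that result; a round-robin assignment instead numbers both lists and joins on row number modulo the section count. A sketch reusing the split() function and vItem column from the query above (the ORDER BY used for numbering is an assumption):

;WITH stu AS (
    SELECT vItem,
           ROW_NUMBER() OVER (ORDER BY vItem) - 1 AS rn
    FROM split(@pUserID, ',')
),
sec AS (
    SELECT vItem,
           ROW_NUMBER() OVER (ORDER BY vItem) - 1 AS rn,
           COUNT(*) OVER () AS sec_count
    FROM split(@pSectionID, ',')
)
SELECT stu.vItem AS UserID,
       sec.vItem AS Section
FROM stu
JOIN sec ON sec.rn = stu.rn % sec.sec_count;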

Newly installed Postgresql 9.2 on same box as 9.1

Posted: 08 Oct 2013 12:46 PM PDT

I have a new project at work that is using a PostgreSQL 9.2. But, I'm still having to support a project that uses 9.1. So, I'm trying to configure my local dev box to have both installed.

I have gotten 9.2 installed and confirmed it runs fine.

However, I can't connect to it.

xxxxxxx@xxxxxxx-desktop-ubuntu:~$ sudo /etc/init.d/postgresql restart
[sudo] password for xxxxxxx:
 * Restarting PostgreSQL 9.1 database server          [ OK ]
 * Restarting PostgreSQL 9.2 database server          [ OK ]
xxxxxxx@xxxxxxx-desktop-ubuntu:~$ su postgres
Password:
postgres@xxxxxxx-desktop-ubuntu:/home/xxxxxxx$ psql -h localhost -p 5432
Password:
psql (9.2.4, server 9.1.9)
WARNING: psql version 9.2, server version 9.1.
         Some psql features might not work.
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

postgres=# \q
postgres@xxxxxxx-desktop-ubuntu:/home/xxxxxxx$ psql -h localhost -p 5433
Password:
psql: FATAL:  password authentication failed for user "postgres"
FATAL:  password authentication failed for user "postgres"
postgres@xxxxxxx-desktop-ubuntu:/home/xxxxxxx$

So, how do I connect to this new instance? I thought that the Ubuntu/OS postgres user would allow me to connect, but it doesn't.

Other info:

  • 9.1 is on the default port: 5432
  • 9.2 is on port: 5433
  • Using Ubuntu 12.04

MySQL LOAD DATA INFILE taking a long time

Posted: 08 Oct 2013 02:04 PM PDT

I have a MySQL DB running on a Raspberry Pi. Under normal circumstances MySQL actually runs slightly quicker than it did on my much more powerful desktop. However, I am trying to insert 60 million records into the database using LOAD DATA INFILE. I tried it all in one go (a 1.2 GB file) and it was still trying to load the data 1.5 days later. So I tried loading it in chunks of 100,000 rows, which was fine for the first 3 million records but soon started to grind to a halt. I then removed the indexes from the table and it seems to run a bit quicker, but I noticed that for each 100,000 rows I insert, the time increases by about 20 seconds.

What is strange is that when I did a database restore from my original desktop machine's database (an identical DB with 60 million rows in the main table), the restore only took about 1 hour.

What is causing the slowdown for LOAD DATA INFILE?

I should point out that I am using InnoDB.

EDIT:

I reduced the chunks to 1,000 records and left it running, which did appear to speed things up: after about 1 hour it had inserted 24 million records, although each insert of 1,000 was by then taking about 30 seconds. I then decided to stop it and restarted the Raspberry Pi. When I ran the import again, lo and behold, the initial inserts were back to less than one second.

So my question is: do I need to clear a cache or something? MySQL appears to be getting bogged down, rather than the actual LOAD DATA INFILE being slow. It is almost as if it is filling up memory and not releasing it, or something much more technical to do with MySQL.

EDIT:

Just to give an idea of just how much it is slowing down: it inserts around 27 million rows in the first 40 minutes (the majority of which are inserted in the first 15 minutes). Then I have estimated that it will take around 48+ hours to insert the next 30 million rows! However, if I restart the Raspberry Pi it goes crazy quick again (despite the 27 million rows still being in the table).
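
Not an answer to the caching question, but for what it's worth: the pattern of fast-then-grinding loads may be a sign that the working set (particularly the indexes being maintained) has outgrown the very small InnoDB buffer pool a Raspberry Pi can afford. The usual session settings for bulk InnoDB loads look roughly like this (file/table names are hypothetical; verify the variables against your version):

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

LOAD DATA INFILE '/path/to/chunk.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

COMMIT;

SET unique_checks = 1;
SET foreign_key_checks = 1;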

Backup / Export data from MySQL 5.5 attachments table keeps failing!

Posted: 08 Oct 2013 11:25 AM PDT

Can anyone please help! I have a large table in a MySQL 5.5 database. It holds a mixture of blobs/binary data and plain data rows with links to file paths. It has just over a million rows.

I am having desperate problems in getting the data out of this table to migrate it to another server.

I have tried all sorts: mysqldump (with and without --quick), dumping the results of a query via the command line, and using a MySQL admin tool (Navicat) to open and export the data to a file or CSV, or to do a data transfer (line by line) to another DB and/or another server, but all to no avail.

When trying to use the DB admin tool (Navicat), it gets to approx 250k records and then fails with an "Out of memory" error. I am not able to get any error messages from the other processes I have tried, but they seem to fall over at approximately the same number of records.

I have tried playing with the MySQL memory variables (buffer size, log file size, etc) and this does seem to have an effect on where the export stops (currently I have actually made it worse).

Also - max_allowed_packet is set to something ridiculously large as I am aware this can be a problem too.

I am really shooting in the dark, and I keep going round and round trying the same things and getting no further. Can anyone give me any specific guidance, or perhaps recommend any tools which I might be able to use to extract this data?

Thanks in hope and advance!

A little more information below - following some questions and advice:

The size of the table I am trying to dump is difficult to say, but the SQL dump gets to 27 GB when mysqldump dies. It could be approximately 4 times that in total.

I have tried running the following mysqldump command:

mysqldump --single-transaction --quick mydatabase attachments --password=abc123 -u root > d:\attachments.sql   

And this gives the error:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table attachments at row: 251249

The server has 8 GB of RAM; some of the relevant settings are copied below. It is an InnoDB database/table.

innodb_buffer_pool_size=3000M
innodb_log_file_size=1113M
max_allowed_packet=2024M
query_cache_size=52M
tmp_table_size=500M
myisam_sort_buffer_size=50M
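
Error 2013 while dumping a blob-heavy table is often a timeout or packet-size problem rather than memory on the dumping side. One thing that may be worth trying before the next run (values are only examples; mysqldump also has its own --max_allowed_packet client option):

-- raise server-side limits for long-running, large-row dumps
SET GLOBAL net_write_timeout  = 600;
SET GLOBAL net_read_timeout   = 600;
SET GLOBAL max_allowed_packet = 1073741824;   -- 1 GB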

Constraint to one of the primary keys as foreign key

Posted: 08 Oct 2013 02:25 PM PDT

Table1: grid_col (col_id, f_id, f_value)
    (col_id, f_id) is primary key.

Table2: grid (grid_id, col_id, text)
    (grid_id) is primary key.

I want a constraint on grid such that col_id must be present in grid_col. I can't have a foreign key constraint here, since col_id on its own is not unique in grid_col. I could create a function-based constraint that scans grid_col while inserting into grid, but that increases the chance of deadlocks. How can I add a constraint here?
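
One standard workaround, sketched with a hypothetical parent table and data type: give col_id a small table of its own and have both grid_col and grid reference it, so no trigger or function-based check is needed.

CREATE TABLE grid_column (
    col_id INT PRIMARY KEY          -- use whatever data type col_id already has
);

ALTER TABLE grid_col
    ADD CONSTRAINT fk_grid_col_column
    FOREIGN KEY (col_id) REFERENCES grid_column (col_id);

ALTER TABLE grid
    ADD CONSTRAINT fk_grid_column
    FOREIGN KEY (col_id) REFERENCES grid_column (col_id);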

disk I/O error in SQLite

Posted: 08 Oct 2013 03:25 PM PDT

What are the possible things that would trigger a "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL INSERT statements.

Sample data in the file:

insert into files (filesize, filedate, md5, fullpath, origin) values (5795096,1370159412, "e846355215bbb9bf5f30102a49304ef1", "SDs/16G-1/DSC00144.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5435597,1370159422, "1a7bcf3a4aaee3e8fdb304ab995ff80f", "SDs/16G-1/DSC00145.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5121224,1370159432, "16d28e83599c731657a6cd7ff97a4903", "SDs/16G-1/DSC00146.JPG", "SDs");

I tried inserting that in the db file with the following command:

$ sqlite3 allfiles.db < insert.sql  

See below the error that I get:

Error: near line 27: disk I/O error
Error: near line 28: disk I/O error
Error: near line 34: disk I/O error
Error: near line 39: disk I/O error
Error: near line 47: disk I/O error
Error: near line 129: disk I/O error

The input lines that don't generate errors are successfully inserted, but I don't understand why some lines have errors and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors on different lines, which means it's random (not related to the data itself). I tried adding pragma synchronous = off; and pragma temp_store = memory;, with no success.

I'm running this on Lubuntu, which runs in a VirtualBox virtual machine. The host machine is Windows 7. The working directory of the files is a shared folder, i.e., it's a folder on the host machine. If I run it in a "local folder" on the guest machine, the error doesn't happen, although for some reason it's much slower... In any case, I'd like to know more about the I/O error.

How to find similar words with the most similarity

Posted: 08 Oct 2013 03:43 PM PDT

How to find words with a length less than or equal...

declare @inp nvarchar(max), @data nvarchar(max)

set @inp = 'You can dance, you can jive, having .... jove... jve, ...'
set @data = 'jeve'

select @inp as results where @inp like '%' + @data + '%'

The query does not return any data, but I want it to show:

results
=====
jive
jove
jve
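
LIKE only finds exact substrings, so 'jeve' will never match 'jive'. A rough T-SQL sketch of a fuzzy alternative: split @inp into words with a simple space-based recursive CTE (written just for this example), strip punctuation, and keep the words whose SOUNDEX is closest to @data (DIFFERENCE returns 0-4, with 4 being the closest match):

declare @inp nvarchar(max), @data nvarchar(max);
set @inp  = 'You can dance, you can jive, having .... jove... jve, ...';
set @data = 'jeve';

;WITH words AS (
    SELECT CAST(LEFT(@inp, CHARINDEX(' ', @inp + ' ') - 1) AS nvarchar(max)) AS word,
           CAST(STUFF(@inp, 1, CHARINDEX(' ', @inp + ' '), '') AS nvarchar(max)) AS rest
    UNION ALL
    SELECT CAST(LEFT(rest, CHARINDEX(' ', rest + ' ') - 1) AS nvarchar(max)),
           CAST(STUFF(rest, 1, CHARINDEX(' ', rest + ' '), '') AS nvarchar(max))
    FROM words
    WHERE LEN(rest) > 0
)
SELECT DISTINCT c.word AS results
FROM words
CROSS APPLY (SELECT REPLACE(REPLACE(word, ',', ''), '.', '') AS word) AS c
WHERE LEN(c.word) > 0
  AND DIFFERENCE(c.word, @data) = 4
OPTION (MAXRECURSION 0);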

Best way to copy data from a SQL Server DB to a MySQL DB (remote)

Posted: 08 Oct 2013 10:33 AM PDT

I currently copy data between my local SQL Server and a remote MySQL database by linking the server in SQL and running a data transformation task, but now that I'm sending lots of data it is getting very slow.

I'm just after some advice, please, on which way anyone would recommend I go about getting large chunks of data from one to the other. I know there are various ways of doing this; I am just unsure whether I should:

  • Keep my link via SQL.
  • Export the data to a CSV file then run a script on the remote end.
  • Something else?

Just need a pointer in the right direction.

Creating the MySQL slow query log file

Posted: 08 Oct 2013 12:25 PM PDT

What do I need to do to generate the slow query log file in MySQL?

I did:

log_slow_queries = C:\Program Files\MySQL\MySQL Server 5.1\mysql-slow.log
long_query_time  = 1

What more do I need to do?
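
For what it's worth, those settings only take effect if they sit under the [mysqld] section of my.ini and the server is restarted; log_slow_queries is also the older name of the option. On recent 5.1 builds the slow log can be switched on at runtime as well (variable names worth verifying against your exact build):

-- check what the server currently thinks
SHOW VARIABLES LIKE '%slow%';

-- enable at runtime on builds that support the dynamic variables
SET GLOBAL slow_query_log      = 'ON';
SET GLOBAL slow_query_log_file = 'C:/Program Files/MySQL/MySQL Server 5.1/mysql-slow.log';
SET GLOBAL long_query_time     = 1;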

SSIS Script to split string into columns

Posted: 08 Oct 2013 04:25 PM PDT

I have a dataset (log file) with a number of columns; one of them is "Other-Data" below (an unordered string), and I need to parse the string to create derived columns according to the u value (U1, U2, U3, etc.). The output columns should be something like:

U1    U2   U3                   U4    U5    etc.
null  odw  odw : CH : de : hom  null  null
EUR   sss  DE:de:hom            null  null
EUR   crm  crm                  null  null

Other-Data:

u3=odw : CH : de : hom;u2=odw : Product : DSC-HX20V;~oref=http://www.bidl.ch/lang/de/product/dsc-h-series/dsc-hx20v
u1=EUR;u2=sss:Checkout-Step4:Orderacknowledgement;u3=DE:de:hom;u11=1;u12=302338533;u13=SVE1511C5E;u14=575.67;~oref=https://shop.bidl.de/shop/bibit/success.do
u15=1187;u13=SVE14A1C5E~VAIOEWY401;u11=1~1;u10=843.9~121.14;u9=1038~149;u3=crm : FI : fi : hom;u1=EUR;u2=crm : Checkout : Order acknowledgement;~oref=https://shop.bidl.fi/shop/bibit/success.do

Can anyone help with this?
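
If doing it in SQL (rather than in an SSIS script) is acceptable, each uN value can be pulled out of the delimited string with plain string functions. A rough sketch for a single key (variable names are only for illustration); the same expression can be repeated per key, or wrapped in a scalar function and used in a derived-column step:

DECLARE @other nvarchar(max), @key nvarchar(10);
SET @other = 'u1=EUR;u2=sss:Checkout-Step4:Orderacknowledgement;u3=DE:de:hom;u11=1';
SET @key   = 'u3=';

SELECT CASE
           WHEN CHARINDEX(';' + @key, ';' + @other) = 0 THEN NULL
           ELSE SUBSTRING(@other + ';',
                          CHARINDEX(';' + @key, ';' + @other) + LEN(@key),
                          CHARINDEX(';', @other + ';',
                                    CHARINDEX(';' + @key, ';' + @other) + LEN(@key))
                          - (CHARINDEX(';' + @key, ';' + @other) + LEN(@key)))
       END AS u3_value;   -- returns 'DE:de:hom' for this sample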

Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100

Posted: 08 Oct 2013 10:25 AM PDT

I have a SQL Server 2008 R2 server with a bunch of published databases that are currently operating under compatibility level 90 (2005).

The subscription databases are also SQL Server 2008 R2, however the destination databases are set to compatibility level 100 and replication is working fine.

If I change the compatibility level for the Published databases, will it affect replication in any way, or will it just be a case of reinitializing all the subscriptions and restarting replication?

I suspect that changing the published database compatibility level may change how the replication stored procedures function slightly, but I'm not 100% sure.

Is this the case?
