Saturday, August 10, 2013

[how to] SQL Server nvarchar(n) where n affects performance


SQL Server nvarchar(n) where n affects performance

Posted: 10 Aug 2013 04:14 PM PDT

I have 3 tables with identical data. First table has the column defined as nvarchar(max). Second table has the column defined as nvarchar(4000). Third table has the column defined as nvarchar(800). This column is an included column on an index (on all 3 tables). If I run the same query on all three tables, here are the statistics:

Table 'table1'. Scan count 91676, logical reads 368637, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 2605 ms, elapsed time = 3030 ms.

Table 'table2'. Scan count 91676, logical reads 368668, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 1669 ms, elapsed time = 1897 ms.

Table 'table3'. Scan count 91676, logical reads 276220, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 1061 ms, elapsed time = 1306 ms.

First, why do the logical reads decrease as the declared maximum size gets smaller? Second, why does changing from max to a lower value speed anything up at all? Since the lob reads are all 0, I don't think any of the data lives in LOB storage.
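For context, here is a minimal sketch of the kind of schema the question describes; the table, column, and index names are hypothetical, not taken from the actual system:

-- Hypothetical reconstruction of the setup (names assumed, not from the question).
CREATE TABLE table1 (id int PRIMARY KEY, keycol int, payload nvarchar(max));
CREATE TABLE table2 (id int PRIMARY KEY, keycol int, payload nvarchar(4000));
CREATE TABLE table3 (id int PRIMARY KEY, keycol int, payload nvarchar(800));

-- The nvarchar column rides along as an included (non-key) column on each index;
-- included columns are not subject to the 900-byte key size limit.
CREATE INDEX ix_table1 ON table1 (keycol) INCLUDE (payload);
CREATE INDEX ix_table2 ON table2 (keycol) INCLUDE (payload);
CREATE INDEX ix_table3 ON table3 (keycol) INCLUDE (payload);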

Restoring old MySQL database settings

Posted: 10 Aug 2013 07:55 PM PDT

I know this question may be similar to some others posted, but I think it may be slightly different, so hopefully it's OK.

Basically, I just reinstalled Windows after buying a new hard drive, and I am in the process of copying all the MySQL databases back to the new installation directory; however, I'm not sure whether it's OK to copy the default MySQL databases back with them. Those being:

  • cdcol
  • mysql
  • performance_schema
  • phpmyadmin
  • test
  • webauth

I assume one of these contains details I want to keep, such as info about the users and other things like MySQL variables - unless that is found in a MySQL .ini file or something?

I'm not 100% sure but I'm pretty sure I'm still using the same version of MySQL. If it matters I am using XAMPP.

So basically my question is: can I copy the old folders above over the new ones, and what exactly are these folders/databases used for (other than the obvious phpMyAdmin one)?

MySQL MyISAM index causes query to match no rows; indexes disabled, rows match

Posted: 10 Aug 2013 08:25 AM PDT

I created a table and index as described in this SE post, but when I query the table for a particular ID, no matches are found. When I disable the index, matches are found.

Commands run:

CREATE TABLE mytable (id1 int, id2 int, score float) ENGINE=MyISAM;

LOAD DATA INFILE '50-billion-records.txt' INTO TABLE mytable (id1, id2, score);

ALTER TABLE mytable ADD INDEX id1_index (id1);

SELECT COUNT(*) FROM mytable;                    # returns 50 billion
SELECT COUNT(DISTINCT id1) FROM mytable;         # returns 50K, should be about 50M
SELECT COUNT(*) FROM mytable WHERE id1 = 49302;  # returns 0 results

ALTER TABLE mytable DISABLE KEYS;
SELECT * FROM mytable WHERE id1 = 49302 LIMIT 1; # returns 1 row

Is this a bug with MySQL, or does this behavior make sense for some reason?

Note: When I ran ALTER TABLE mytable ENABLE KEYS; just now, the command acted like it was building the index for the first time (it's still running after 30 minutes, and memory usage is at 80 GB, which matches my setting of myisam_sort_buffer_size=80G). I'll reply when this command finishes running (the original ALTER .. ADD INDEX .. took 7.5 hours, so it may be a while).

Update: Running SHOW PROCESSLIST indicates "Repair with keycache" is taking place with my ENABLE KEYS command.
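For what it's worth, "Repair with keycache" is the slow, row-by-row rebuild path; MySQL commonly falls back to it when it estimates the sort-based rebuild would exceed myisam_max_sort_file_size. A hedged sketch of the knobs involved (the variables are real MyISAM settings, the values purely illustrative):

-- Encourage the much faster "Repair by sorting" path before rebuilding.
SET GLOBAL myisam_max_sort_file_size = 2199023255552;  -- 2 TB of temp sort space; size to your disk
SET GLOBAL myisam_sort_buffer_size   = 85899345920;    -- 80G, matching the existing setting
ALTER TABLE mytable ENABLE KEYS;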

SQL Server transactional replication: default value in [msrepl_tran_version]

Posted: 10 Aug 2013 07:14 AM PDT

Replication type: transactional publication with updatable subscriptions. I am using Entity Framework 5 (database-first), and it generates the table with this column included, which is unnecessary, so every insert made through this tooling ends up with the default minimum value.

id_User  id_Album  date        location  msrepl_tran_version
1        2222      2013-06-14  NULL      9AB6B9A6-4BB4-416C-AFEB-5851410F545F
1        22427     2013-08-10  NULL      00000000-0000-0000-0000-000000000000
1        22428     2013-08-10  NULL      00000000-0000-0000-0000-000000000000

At the moment the replication is working fine, but how many problems could I run into during future maintenance if I keep this value in the system?

Anyway, I know how to create a backup without the replication info, import it into EF5, and change the IPs to the main SQL Server.

Percona Cluster Failed with HAProxy: Connection Limit Exceeded

Posted: 10 Aug 2013 04:23 AM PDT

We have a 3-node PXC cluster set up on Amazon EC2. It has been working fine for around 4-5 months. Since last week, every node has started to fail due to a MySQL connection-limit-exceeded error. We increased the connection limit, but it still doesn't help. We have had about 1-2 hours of downtime since last week.

Setup as follows.

App1 ---> Haproxy1 ---> PXC Node1
App2 ---> Haproxy2 ---> PXC Node2
App3 ---> Haproxy3 ---> PXC Node3

Each HAProxy is configured with 1 active MySQL node and 2 backup servers:

server SQL1 xx.xx.xx.xx:3306 check port 50000 inter 3000 maxconn 160 rise 3 fall 3 backup
server SQL2 xx.xx.xx.xx:3306 check port 50000 inter 3000 maxconn 160 rise 3 fall 3
server SQL3 xx.xx.xx.xx:3306 check port 50000 inter 3000 maxconn 160 rise 3 fall 3 backup

We also use the clustercheck script from Percona to monitor the MySQL nodes on port 50000 with the help of xinetd.

Any suggestions?
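Not an answer, but a quick way to check whether the ceiling is being hit by genuine client load or by stuck connections (standard MySQL commands; health-check connections count too):

-- Peak connection count since startup versus the configured ceiling.
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- What the current connections are actually doing.
SHOW FULL PROCESSLIST;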

How to transpose/convert rows to columns in MySQL

Posted: 10 Aug 2013 04:15 AM PDT

I have table structures like the ones below. Subjects table:

+----+------+------+--------+---------+
| id | C_Id | G_Id | status | subject |
+----+------+------+--------+---------+
|  1 |   13 |    4 |      1 | Telugu  |
|  2 |   13 |    4 |      1 | Hindi   |
|  3 |   13 |    4 |      1 | English |
|  4 |   13 |    4 |      1 | Maths   |
|  5 |   13 |    4 |      1 | Physics |
|  6 |   13 |    4 |      1 | Biology |
|  7 |   13 |    4 |      1 | Social  |
+----+------+------+--------+---------+

Marks table:

+----+-----+-----+--------+--------+-------+-------+-------+
| id | Cid | Gid | Examid | rollno | subId | Marks | paper |
+----+-----+-----+--------+--------+-------+-------+-------+
|  1 |  13 |   4 |      1 |      1 |     1 | 14.50 |     1 |
|  2 |  13 |   4 |      1 |      2 |     1 | 12.00 |     1 |
|  3 |  13 |   4 |      1 |      1 |     2 | 13.00 |     1 |
|  4 |  13 |   4 |      1 |      2 |     2 | 15.00 |     1 |
|  5 |  13 |   4 |      1 |      1 |     3 | 16.00 |     1 |
|  6 |  13 |   4 |      1 |      2 |     3 | 18.00 |     1 |
|  7 |  13 |   4 |      1 |      1 |     4 | 19.00 |     1 |
|  8 |  13 |   4 |      1 |      2 |     4 | 23.00 |     1 |
|  9 |  13 |   4 |      1 |      1 |     5 | 21.00 |     1 |
| 10 |  13 |   4 |      1 |      2 |     5 | 24.00 |     1 |
| 11 |  13 |   4 |      1 |      1 |     6 | 20.00 |     1 |
| 12 |  13 |   4 |      1 |      2 |     6 | 19.00 |     1 |
| 13 |  13 |   4 |      1 |      1 |     7 | 20.00 |     1 |
| 14 |  13 |   4 |      1 |      2 |     7 | 21.00 |     1 |
| 15 |  13 |   4 |      2 |      1 |     1 | 45.00 |     2 |
| 16 |  13 |   4 |      2 |      2 |     1 | 40.00 |     2 |
| 17 |  13 |   4 |      2 |      1 |     1 | 32.00 |     3 |
| 18 |  13 |   4 |      2 |      2 |     1 | 33.00 |     3 |
| 19 |  13 |   4 |      2 |      1 |     2 | 80.00 |     1 |
| 20 |  13 |   4 |      2 |      2 |     2 | 89.00 |     1 |
| 21 |  13 |   4 |      2 |      1 |     3 | 39.00 |     2 |
| 22 |  13 |   4 |      2 |      2 |     3 | 38.00 |     2 |
| 23 |  13 |   4 |      2 |      1 |     3 | 41.00 |     3 |
| 24 |  13 |   4 |      2 |      2 |     3 | 45.00 |     3 |
| 25 |  13 |   4 |      2 |      1 |     4 | 34.00 |     2 |
| 26 |  13 |   4 |      2 |      2 |     4 | 38.00 |     2 |
| 27 |  13 |   4 |      2 |      1 |     4 | 32.00 |     3 |
| 28 |  13 |   4 |      2 |      2 |     4 | 33.00 |     3 |
| 29 |  13 |   4 |      2 |      1 |     5 | 31.00 |     1 |
| 30 |  13 |   4 |      2 |      2 |     5 | 34.00 |     1 |
| 31 |  13 |   4 |      2 |      1 |     6 | 33.00 |     1 |
| 32 |  13 |   4 |      2 |      2 |     6 | 31.00 |     1 |
| 33 |  13 |   4 |      2 |      1 |     7 | 35.00 |     2 |
| 34 |  13 |   4 |      2 |      2 |     7 | 31.00 |     2 |
| 35 |  13 |   4 |      2 |      1 |     7 | 43.00 |     3 |
| 36 |  13 |   4 |      2 |      2 |     7 | 38.00 |     3 |
+----+-----+-----+--------+--------+-------+-------+-------+

So far I have written this query:

select ta.rollno,
       ta.StdNm,
       max(case when s.subject = 'Telugu'  AND tm.paper=1 then tm.Marks end) Telugu,
       max(case when s.subject = 'Telugu'  AND tm.paper=2 then tm.Marks end) Telugu1,
       max(case when s.subject = 'Telugu'  AND tm.paper=3 then tm.Marks end) Telugu2,
       max(case when s.subject = 'Hindi'   AND tm.paper=1 then tm.Marks end) Hindi,
       max(case when s.subject = 'Hindi'   AND tm.paper=2 then tm.Marks end) Hindi1,
       max(case when s.subject = 'Hindi'   AND tm.paper=3 then tm.Marks end) Hindi2,
       max(case when s.subject = 'English' AND tm.paper=1 then tm.Marks end) English,
       max(case when s.subject = 'English' AND tm.paper=2 then tm.Marks end) English1,
       max(case when s.subject = 'English' AND tm.paper=3 then tm.Marks end) English2,
       max(case when s.subject = 'Maths'   AND tm.paper=1 then tm.Marks end) Maths,
       max(case when s.subject = 'Maths'   AND tm.paper=2 then tm.Marks end) Maths1,
       max(case when s.subject = 'Maths'   AND tm.paper=3 then tm.Marks end) Maths2,
       max(case when s.subject = 'Physics' AND tm.paper=1 then tm.Marks end) Physics,
       max(case when s.subject = 'Physics' AND tm.paper=2 then tm.Marks end) Physics1,
       max(case when s.subject = 'Physics' AND tm.paper=3 then tm.Marks end) Physics2,
       max(case when s.subject = 'Biology' AND tm.paper=1 then tm.Marks end) Biology,
       max(case when s.subject = 'Biology' AND tm.paper=2 then tm.Marks end) Biology1,
       max(case when s.subject = 'Biology' AND tm.paper=3 then tm.Marks end) Biology2,
       max(case when s.subject = 'Social'  AND tm.paper=1 then tm.Marks end) Social,
       max(case when s.subject = 'Social'  AND tm.paper=2 then tm.Marks end) Social1,
       max(case when s.subject = 'Social'  AND tm.paper=3 then tm.Marks end) Social2
FROM tbl_cmarks tm
INNER JOIN tbl_classes tc ON tm.Cid = tc.C_Id
INNER JOIN tbl_admission ta ON ta.rollno = tm.rollno
INNER JOIN tbl_subjects s ON tm.subId = s.id
where tm.Cid = 13
  and tm.Examid = 2
group by ta.rollno

Here in my script I am passing the subject names hard-coded. But in the future we don't know how many subjects Class 13 will have, so where should I change my query to pass the subject IDs instead of the subject names, and without hard-coding them?

Can I take all the subject IDs for a particular class into a variable and pass them into the max(case) statements? Is that possible?

Please review my code and give me a clue/help to make this work; one possible direction is sketched below.

Thank you in advance.
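One possible direction, sketched as an untested assumption rather than a verified answer: build the max(case ...) column list dynamically with GROUP_CONCAT and execute it as a prepared statement. The table and column names below are taken from the query above; the assumption that papers run 1 to 3 comes from the sample data.

SET SESSION group_concat_max_len = 100000;

-- Build one "max(case ...)" expression per subject/paper combination.
SELECT GROUP_CONCAT(
         CONCAT('max(case when tm.subId = ', s.id,
                ' and tm.paper = ', p.paper,
                ' then tm.Marks end) AS `', s.subject, '_paper', p.paper, '`'))
  INTO @cols
FROM tbl_subjects s
CROSS JOIN (SELECT DISTINCT paper FROM tbl_cmarks) p
WHERE s.C_Id = 13;

-- Splice the generated columns into the outer query and run it.
SET @sql = CONCAT('SELECT ta.rollno, ta.StdNm, ', @cols,
                  ' FROM tbl_cmarks tm',
                  ' INNER JOIN tbl_admission ta ON ta.rollno = tm.rollno',
                  ' WHERE tm.Cid = 13 AND tm.Examid = 2',
                  ' GROUP BY ta.rollno, ta.StdNm');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;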

Complex constraint across all data in a table

Posted: 10 Aug 2013 01:25 PM PDT

We have a table to record processing that occurs on a system and we need to ensure that only a single row has an 'in process' status.

I want to ensure that the result of this is always either zero or one:

select count(id) from jobs where status in ('P', 'G');  

We're using explicit transactions so ideally this check would happen on commit and abort the transaction if the invariant doesn't hold. Handling any racy processing that throws the occasional error is much easier for us to deal with than suddenly ending up with more than one job that is 'in process'.

The solution only needs to work with Postgres so we're quite happy to take a non-standard solution to this. We're currently on 8.4, but we'll be upgrading at some point to 9.x if that makes any difference.
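One Postgres-specific idea, offered as a sketch under the assumption that failing at write time (rather than at commit) is acceptable: a partial unique index over a constant expression caps the number of 'in process' rows at one.

-- Every qualifying row indexes the same constant, so a second
-- 'in process' row violates uniqueness and the INSERT/UPDATE errors out.
CREATE UNIQUE INDEX jobs_one_in_process
    ON jobs ((true))
    WHERE status IN ('P', 'G');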

Oracle Text - Changing SYNC to MANUAL not working always

Posted: 10 Aug 2013 06:21 AM PDT

The Oracle version is 11g. When the Oracle Text index is created, the SYNC option is given as ON COMMIT. Before inserting a large number of records, SYNC is changed to MANUAL for performance reasons. The query below is run using a CallableStatement for this purpose:

{call CTX_DDL.REPLACE_INDEX_METADATA('IXFTS_ABC_TABLE','REPLACE METADATA SYNC (MANUAL)')}  

In some environments this works fine, but in the production environments the SYNC option is still ON COMMIT. No error is thrown when it is not updated. The index is not a partitioned index. Any pointers for troubleshooting further would be helpful.

[Update] I could reproduce this on my local machine, but only rarely. It looks like running the same SQL 2 or 3 times works. Not sure why it fails the first time.

Does the MySQL parameter "max_connections" apply to unix socket connections?

Posted: 10 Aug 2013 10:46 AM PDT

The documentation does not specify whether max_connections applies only to TCP connections.

The way it is worded, it looks like it applies to any kind of connection, but I'd like to be sure, because otherwise I could fall back to a socket connection when the "Too many connections" problem arises, even if the (last) SUPER connection slot is already in use.
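For reference, the configured ceiling and the live count can be compared directly; to my understanding Threads_connected includes socket and TCP clients alike:

SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';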

Calculated measure to get only the most current value from one dimension on a snapshot fact, but keep other filters

Posted: 10 Aug 2013 04:00 AM PDT

I'm working on a tabular cube in SSAS 2012 SP1 CU4. I have 3 dimensions (Requisition, Requisition Status, Date) and 1 fact (Requisition Counts). My fact table is at the grain of requisitionKEY, RequisitionStatusKEY, SnapshotDateKey.

I have calculated measures that essentially get the last non-empty value (like a semi-additive measure) for the given period, whether it is a year, a month, or a date:

Openings :=
CALCULATE (
    SUM ( 'Requisition Counts'[NumberOfOpeningsQT] ),
    FILTER (
        'Date',
        'Date'[DateKey] = MAX ( 'Requisition Counts'[SnapshotDateKEY] )
    )
)

This works well until you throw Requisition Status into the mix. I have rows for each requisition for every day in 2013. For one of the requisitions, the Requisition Status was Pending for the first 6 months; then it changed to Approved on all data from July 1 to date. When I summarize the number of openings for the requisition at the month level for July, users see two rows: the sum of the openings for the last populated day in July on which it was pending, and the sum of the openings for the last populated day in July on which it was approved.
[Pivot table screenshot]

Although the total of 2 is correct, I need to change this calculation so that I only get the most current requisition status for the selected date period (Approved) and show 0, null, or blank for the Pending approval line in the pivot table.

The Requisition Status table looks like this: [Requisition Status screenshot]

Update: Here is a link to a PowerPivot model I made with some scrubbed data in it to help answer this question. It should better explain the table schemas. The NumberOfOpeningsQT field is basically the number of open positions they have for that job. Sometimes it is 1, sometimes it is more than 1. It doesn't usually change over time, but I guess it could. I'm trying to make the Openings calculation give the desired answer. I threw some other calculations in there to show things I had tried that showed promise but that I couldn't get to work.

Restoring of subscriber database failed

Posted: 10 Aug 2013 07:58 AM PDT

We have replicated a database from live to test in SQL Server 2005. Now we need to restore the subscriber database, which is in test, to the same server, without any replication settings in the restored database. How can we achieve this?
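A hedged sketch of one common route (all file names and paths below are hypothetical): restore the backup under a new name, then strip the replication metadata from the restored copy with sp_removedbreplication.

RESTORE DATABASE SubscriberCopy
    FROM DISK = N'D:\backups\subscriber.bak'
    WITH MOVE N'Subscriber_Data' TO N'D:\data\SubscriberCopy.mdf',
         MOVE N'Subscriber_Log'  TO N'D:\data\SubscriberCopy_log.ldf',
         REPLACE;

-- Remove any replication settings carried over in the backup.
EXEC sp_removedbreplication N'SubscriberCopy';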

SqlPackage does not pick up variables from profile

Posted: 10 Aug 2013 05:58 PM PDT

I want to upgrade a database using a .dacpac and SqlPackage.exe.

Here is how I run SqlPackage:

SqlPackage.exe
    /Action:Publish
    /SourceFile:"my.dacpac"
    /Profile:"myprofile.publish.xml"

The error I get is:

* The following SqlCmd variables are not defined in the target scripts: foo.

I have verified that the myprofile.publish.xml file does contain that variable:

<ItemGroup>
  <SqlCmdVariable Include="foo">
    <Value>bc\local</Value>
  </SqlCmdVariable>
</ItemGroup>

I have also verified that the project that creates the dacpac publishes successfully from within Visual Studio using myprofile.publish.xml.

What else could I be missing?

(I'm using SQL Server 2012)

Speeding up mysqldump / reload

Posted: 10 Aug 2013 09:58 AM PDT

I am converting a large schema to file-per-table, and I will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and changed innodb_flush_log_at_trx_commit=2 to speed up the load. I am planning to SET GLOBAL innodb_max_dirty_pages_pct=0; at some point before the dump. I am curious to know which combination of settings will get me the fastest dump and reload times.

SCHEMA stats:

26 MyISAM tables, 413 InnoDB tables, ~240 GB of data

[--opt implies --disable-keys, --extended-insert, --quick, etc.] plus --no-autocommit?

versus prepending session variables like: SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;

Are the mysqldump options equivalent to the session variables, or not really?
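For what it's worth, --opt (on by default) already implies --disable-keys, --extended-insert, and --quick, while --no-autocommit wraps each table's inserts in a transaction. The hand-prepended variant would look roughly like this when replaying the dump from the mysql client (a sketch; dump.sql is a placeholder name):

-- Prepend before the reload, restore the defaults afterwards.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE dump.sql;
COMMIT;
SET foreign_key_checks = 1;
SET unique_checks = 1;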

Thanks for your advice!

How to disable SSMS table designer timeout?

Posted: 10 Aug 2013 05:37 AM PDT

How do I disable the Table Designer timeout in SQL Server Management Studio?

Background

This KB Article seems to indicate that if you stop overriding the Designer timeout:

To resolve this behavior, use one of the following methods:

  • Click to clear the Override connection string time-out value for table designer updates check box for the table designer and for the database designer in SQL Server Management Studio.

[screenshot]

it is supposed to revert to the Execution timeout:

[screenshot]

But rather than never timing out, it times out after 30 seconds:

[screenshot]

I've also tried setting the table designer timeout to the maximum value of 65,535. Unfortunately, that just causes the timeout to happen after 18 hours rather than 30 seconds.

How do I disable the table designer timeout in SQL Server Management Studio?

SQL Server Management Studio: 9.00.1399.00

Pre-emptive snarky comment: What kind of table change are you making that needs more than 18 hours to complete? I'm changing a column in a 130 GB table from text to nvarchar(max).
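As an aside, the designer scripts a change like this as a full table rebuild. The same change issued as a plain ALTER in a query window is governed by the execution timeout (where 0 really does mean unlimited), not the designer timeout; a sketch with made-up names:

-- Table and column names are hypothetical.
ALTER TABLE dbo.BigTable ALTER COLUMN TextColumn nvarchar(max) NULL;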

Database Mail sending functionality not working on local system

Posted: 10 Aug 2013 12:58 PM PDT

I am using the Database Mail functionality to send mail from a SQL Server 2008 database via the following stored procedure execution:

EXEC sp_send_dbmail
    @profile_name = 'MyProfile',
    @recipients = 'abc@companyname.com',
    @subject = 'Test message',
    @body = 'Congrats Database Mail Received By you Successfully.'

I have tried this with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile.

Error message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 . Exception Message: Could not connect to mail server. (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond abc.j.i.ooo:pp). )

Reference

http://blogs.msdn.com/b/suhde/archive/2009/07/12/how-to-configure-sql-server-database-mail-to-send-email-using-your-windows-live-mail-account-or-your-gmail-account.aspx  

What could the problem be?
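When chasing failures like this, the Database Mail log tables in msdb usually contain the underlying SMTP error; a quick check:

-- Most recent Database Mail events and sent/failed items, newest first.
SELECT TOP (20) log_date, event_type, description
FROM msdb.dbo.sysmail_event_log
ORDER BY log_date DESC;

SELECT TOP (20) send_request_date, recipients, sent_status
FROM msdb.dbo.sysmail_allitems
ORDER BY send_request_date DESC;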

Thanks

createdb: could not connect to database postgres: FATAL: could not write init file

Posted: 10 Aug 2013 07:58 PM PDT

RedHat Enterprise Server 3.0 32 Bits

psql (PostgreSQL) 8.2.3

user: postgres

server is running:

/soft/postgres/8.2.3/bin/pg_ctl start
pg_ctl: another server may be running; trying to start server anyway
2013-05-09 11:23:07 BRST---:FATAL:  lock file "postmaster.pid" already exists
2013-05-09 11:23:07 BRST---:HINT:  Is another postmaster (PID 12810) running in data directory "/opt/psql/dba/bdadms/data1/pstg"?
pg_ctl: could not start server
Examine the log output.

I had just created a new database cluster with initdb, but when I run createdb:

8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb pstg
createdb: could not connect to database postgres: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb postgres
createdb: could not connect to database template1: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb template1
createdb: could not connect to database postgres: FATAL:  could not write init file

Any clues as to the cause of, and possible solutions to, this problem?

Repeated values in group_concat

Posted: 10 Aug 2013 03:58 PM PDT

I have two tables: the first is the table food, and the second is Activity:

INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6');

CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME);

INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03');

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ', ') USING utf8) AS Categories
FROM
    food AS p
JOIN
    ( SELECT activity_type
           , prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type
             , prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC;

Could you please help me? I need output in the format below:

place  | food_id | Time1   | Time2   | Time3  | Date
-------+---------+---------+---------+--------+---------------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm | 2013-05-01 (MAX timestamp)
place2 | 1       | 10:30am | 12:30am | 4:30am | 2013-05-01 (MAX timestamp)
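A rough sketch of a query shaped like that output: GROUP_CONCAT with DISTINCT collapses the repeated timings into one column, and MAX picks the latest timestamp. Splitting the timings into fixed Time1/Time2/Time3 columns would additionally need the max(case ...) pivot technique shown in the transpose question above.

SELECT a.place,
       a.food_id,
       GROUP_CONCAT(DISTINCT a.timing ORDER BY a.timing SEPARATOR ' | ') AS timings,
       MAX(a.date_and_time) AS max_date
FROM Activity a
GROUP BY a.place, a.food_id;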

MySQL PDO Cannot assign requested address

Posted: 10 Aug 2013 01:58 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]
    Cannot assign requested address (trying to connect via tcp://****:3306) in
    /var/www/***
[08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception
    'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested
    address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.

DB2 critical alert "database partition availability"

Posted: 10 Aug 2013 12:58 AM PDT

I have an existing DB2 database that I'm just getting started with, and I have connected IBM Data Studio (web) to it. It shows the alert "Database Partition Availability", but it doesn't give a real description (the description field says "The availability of the database partitions"). Does this mean that the database partition is offline? I can't find anything related on Google.

How do I prevent deadlocks in my application?

Posted: 10 Aug 2013 05:58 AM PDT

I am developing an LMS application in a PHP framework (CodeIgniter 2.1.0). I am using a MySQL database. All the tables in the database use the InnoDB engine, and I have created indexes on each table. Now I am doing load testing locally with JMeter 2.9, simulating 200 concurrent users. During the load testing, a specific page action produced a "deadlock found" error. I changed my original query to a new one, but the same error keeps occurring.

I have written a save_interactions function that takes four parameters - an interaction array, module_id, course_id, and user_id - and it is called many times by an AJAX script. The following script inserts a record if the specific interaction_id is not present in the table; otherwise the update query fires.

public function save_interactions($interaction_array, $modid, $cid, $uid)
{
    foreach ($interaction_array as $key => $interact_value)
    {
        $select_query = $this->db->query("SELECT COUNT(*) AS total FROM `scorm_interactions` WHERE `mod_id`='".$modid."' AND `course_id`='".$cid."' AND `user_id`='".$uid."' AND `interaction_id`='".$interact_value[0]."'");
        $fetchRow = $select_query->row_array();

        if ($fetchRow['total'] == 1)
        {
            $update_data = array(
                "interaction_type" => $interact_value[1],
                "time" => $interact_value[2],
                "weighting" => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result" => $interact_value[6],
                "latency" => $interact_value[7],
                "objectives" => $interact_value[8],
                "description" => $interact_value[9]
            );
            $this->db->where('mod_id', $modid);
            $this->db->where('course_id', $cid);
            $this->db->where('user_id', $uid);
            $this->db->where('interaction_id', $interact_value[0]);
            $this->db->update('scorm_interactions', $update_data);
        }
        else
        {
            $insert_data = array(
                "user_id" => $uid,
                "course_id" => $cid,
                "mod_id" => $modid,
                "interaction_id" => $interact_value[0],
                "interaction_type" => $interact_value[1],
                "time" => $interact_value[2],
                "weighting" => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result" => $interact_value[6],
                "latency" => $interact_value[7],
                "objectives" => $interact_value[8],
                "description" => $interact_value[9]
            );
            $this->db->insert('scorm_interactions', $insert_data);
        }
    }
}

I get this type of error:

Deadlock found when trying to get lock; try restarting transaction

UPDATE `scorm_interactions` SET
    `interaction_type` = 'choice',
    `time` = '10:45:31',
    `weighting` = '1',
    `correct_response` = 'Knees*',
    `learner_response` = 'Knees*',
    `result` = 'correct',
    `latency` = '0000:00:02.11',
    `objectives` = 'Question2_1',
    `description` = ''
WHERE
    `mod_id` = '4' AND
    `course_id` = '5' AND
    `user_id` = '185' AND
    `interaction_id` = 'Question2_1';

Filename: application/models/user/scorm1_2_model.php
Line Number: 234

Can anyone please suggest how I can avoid the deadlock?
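One common way to remove this deadlock-prone check-then-write pattern is to collapse it into a single INSERT ... ON DUPLICATE KEY UPDATE. This is a sketch, not a verified fix: it assumes a unique key over (mod_id, course_id, user_id, interaction_id), which may not exist yet, and it reuses the values from the failing statement above.

-- One-time schema change (assumption: these four columns identify a row).
ALTER TABLE scorm_interactions
    ADD UNIQUE KEY uq_interaction (mod_id, course_id, user_id, interaction_id);

-- Insert-or-update in one statement; no SELECT COUNT(*) race, fewer lock waits.
INSERT INTO scorm_interactions
    (user_id, course_id, mod_id, interaction_id, interaction_type, `time`,
     weighting, correct_response, learner_response, result, latency,
     objectives, description)
VALUES
    (185, 5, 4, 'Question2_1', 'choice', '10:45:31', '1', 'Knees*',
     'Knees*', 'correct', '0000:00:02.11', 'Question2_1', '')
ON DUPLICATE KEY UPDATE
    interaction_type = VALUES(interaction_type),
    `time`           = VALUES(`time`),
    weighting        = VALUES(weighting),
    correct_response = VALUES(correct_response),
    learner_response = VALUES(learner_response),
    result           = VALUES(result),
    latency          = VALUES(latency),
    objectives       = VALUES(objectives),
    description      = VALUES(description);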

How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB

Posted: 10 Aug 2013 08:58 AM PDT

I am evaluating MySQL Cluster as a possible replacement for an InnoDB schema. So far, I have tested it with tens of MB of data and found MySQL Cluster slower than InnoDB; however, I have been told MySQL Cluster scales much better.

How much data does it take to show a performance benefit of MySQL Cluster over an InnoDB schema? Or is there a better way to demonstrate MySQL Cluster's merits?

EDIT

Perhaps an important note: my cluster is currently a heterogeneous cluster of 4 machines. On each machine I have given data memory and index memory equal amounts: 4 GB, 2 GB, 2 GB, and 1 GB respectively. The machines are running i7s and are connected over a gigabit LAN. NoOfReplicas is set to 2.

EDIT

This application is a low-usage analytics database, which has roughly 3 tables of >= 200M rows and 5 tables of <= 10K rows. When we use it, our aggregate functions take 15 seconds to run. My boss asked me to research MySQL Cluster to see if we could increase performance, since we thought the aggregate functions could run pretty well in parallel.

Named Pipe Provider Error code 40

Posted: 10 Aug 2013 08:58 PM PDT

I have literally tried everything, from enabling named pipes, to adding exceptions for the ports in the firewall, to everything possible in Surface Area Configuration. I can connect to the SQL instance (using TCP and named pipes) with SQL Server Management Studio, but sqlcmd throws an error:

Login timeout expired  

Help!

How to remove or reduce the size of redo logs in MySQL Cluster

Posted: 10 Aug 2013 04:58 AM PDT

Can anybody tell me how we can reduce the redo log file size, or remove the redo log files, in MySQL Cluster, and what the impact on cluster performance will be? Also, can we disable redo logging in MySQL Cluster?

What methods are available to create a snapshot of an oracle database for testing purposes?

Posted: 10 Aug 2013 10:47 AM PDT

I am new to databases and have just been assigned to work with Oracle. We want to create a snapshot of the database for testing purposes that will allow us to revert once we're done.

I'm a bit confused as to what my options are.

I've looked into creating a standby. It seems like that may be overkill for what we need, and the pages describing how to create one are confusing to someone with as little experience as me.

Questions

  • Is a standby the best fit, or is import/export a viable option?
  • Are there other methods?

Thanks!

SQL Server: permanently enabling arithmetic abort

Posted: 10 Aug 2013 10:48 AM PDT

I am using SQL Server 2008.

In my stored procedures, I often have to work with the XML data type and functions (like split).

Is it possible to permanently enable arithmetic abort for a database? I know it can be done using the properties window > Options, but is there any script I can use to turn it on?

The reason for wanting a script is obvious: I have too many databases to do it manually.

I assume SET ARITHABORT OFF and SET ARITHABORT ON turn arithabort on or off only for the current context.
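ARITHABORT does exist as a per-database option, and the loop over many databases can be scripted. A sketch (the database_id > 4 filter assumes only the four standard system databases should be skipped); note that an explicit SET ARITHABORT in a session still overrides the database default:

DECLARE @sql nvarchar(max) = N'';

-- Build one ALTER DATABASE statement per user database.
SELECT @sql = @sql + N'ALTER DATABASE ' + QUOTENAME(name)
            + N' SET ARITHABORT ON;' + CHAR(13)
FROM sys.databases
WHERE database_id > 4;  -- assumption: skip master, tempdb, model, msdb

EXEC sys.sp_executesql @sql;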

Why is 'Allow Snapshot Isolation' set to TRUE on the master and msdb databases?

Posted: 10 Aug 2013 10:45 AM PDT

Why is the database property Allow Snapshot Isolation set to TRUE on the master and msdb databases, but FALSE on the other system databases? The default for this option is also FALSE when creating user databases.
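For reference, the current state of the option across all databases can be read straight from the catalog:

SELECT name, snapshot_isolation_state_desc
FROM sys.databases
ORDER BY name;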

Can IMPLICIT_TRANSACTIONS be set at the database level?

Posted: 10 Aug 2013 10:45 AM PDT

When I have to investigate problems caused by IMPLICIT_TRANSACTIONS ON, can I focus on client tools setting IMPLICIT_TRANSACTIONS ON or ANSI_DEFAULTS ON, or can this be set for the whole database, affecting all clients?

I can avoid the problem by changing my standard script header to

SET ANSI_DEFAULTS ON
SET IMPLICIT_TRANSACTIONS OFF

but at which levels session, database, server can this setting be influenced?

Edit:

I found in MSDN that it seems to be possible to set this at the server level.

In the Server Properties dialog (Connections page) I found the option, changed it, and scripted the change:

implicit transactions on:

EXEC sys.sp_configure N'user options', N'2'
GO
RECONFIGURE WITH OVERRIDE
GO

implicit transactions off:

EXEC sys.sp_configure N'user options', N'0'
GO
RECONFIGURE WITH OVERRIDE
GO

But executing

exec('dbcc useroptions')  

in SSMS didn't reflect the change. The server setting seems to be somehow overridden for such connections.

Now I have to rephrase the question: are there clients that do not override the server setting, and that could break when the server setting is changed?
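One small aid when auditing individual connections: bit 2 of @@OPTIONS reflects IMPLICIT_TRANSACTIONS for the current session, wherever it was set:

-- Returns ON when the session is running with implicit transactions.
SELECT CASE WHEN @@OPTIONS & 2 = 2 THEN 'ON' ELSE 'OFF' END AS implicit_transactions;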

WAL database log on a separate disk: a good idea with RAID 10?

Posted: 10 Aug 2013 02:30 PM PDT

I have a RAID 10 server, and Postgres writes its log (pg_xlog, a.k.a. WAL) onto the same RAID 10 array.

I use the WAL in sync mode with a big buffer and do a lot of bulk inserts and updates, so that buffer is hopefully used.

Is it a good idea to split the log onto an extra disk outside the RAID 10? I could add a 2-disk RAID, but would that really improve anything?

I'm looking forward to someone naming a few pros and cons.

Thanks a lot
