Monday, April 29, 2013

[how to] De facto way of fixing constraint issues without downtime?



De facto way of fixing constraint issues without downtime?

Posted: 29 Apr 2013 09:22 PM PDT

I come across two constraint issues all the time (UNIQUE and referential integrity constraints) within InnoDB. My question is: when these issues arise reproducibly, what is the typical solution? I'm asking from the perspective of a web application attempting to do transactions on the database, so there isn't much control over what is being executed unless I risk taking down the database. Conceptually, the solutions I have found are below, and I am wondering if input can be provided.

UNIQUE constraint possible solutions:

  • Remove the 'old' value so the new one can be inserted.

Typical error: ERROR 1062 (23000): Duplicate entry '0' for key 1
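A pattern that avoids deleting the 'old' row first is to let the statement resolve the collision atomically. A minimal sketch, assuming a MySQL table t with a unique key on k (the table and column names here are made up for illustration):

    INSERT INTO t (k, val)
    VALUES (1, 'new')
    ON DUPLICATE KEY UPDATE val = VALUES(val);

Whether overwriting is acceptable depends on which row should win, but it keeps the check-and-write in a single atomic statement instead of a DELETE followed by an INSERT.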

FOREIGN KEY constraint failure possible solutions:

  • Insert expected value into parent.
  • Set foreign_key_checks to false so the transaction doesn't get aborted. I don't like this solution, but it works.

Typical error: SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails...
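If the foreign_key_checks route is taken, it can at least be scoped to a single session rather than the whole server. A sketch:

    SET SESSION foreign_key_checks = 0;  -- affects only this connection
    -- run the offending INSERT/UPDATE here
    SET SESSION foreign_key_checks = 1;

This narrows the window in which unchecked rows can be written, though orphaned child rows are still possible inside it.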

I've also thought that maybe there are inconsistencies between tables that cause generated queries to fail; this makes me wonder if a null operation on all the tables could resolve the issue, e.g. ALTER TABLE <tbl_name> ENGINE=InnoDB;

There have also been suggestions to dump the table and re-import it, but that just sounds like too much of a hassle for a 'non-guaranteed' fix.

So, I ask: what are the typical solutions for the above errors?

Strange I/O on a MySQL server

Posted: 29 Apr 2013 08:53 PM PDT

One of my MySQL servers has become slow recently, and I found that there is high I/O on the server:

$ sudo pt-ioprofile -cell sizes
Mon Apr 29 22:39:32 CDT 2013
Tracing process ID 16934
     total      pread       read     pwrite      fsync       open      lseek filename
2147483647    1735082      64640     369248          0          0 2147483647 /var/lib/mysql/db1/estate.MYD
 889808476     524176      19420     128436          0          0  889136444 /var/lib/mysql/db2/estate.MYD
 343733731       7231        600          0          0          0  343725900 /var/lib/mysql/db2/cmsuser.MYD
  18334349       3325        180          0          0          0   18330844 /var/lib/mysql/db1/cmsuser.MYD
    104076          0          0     104076          0          0          0 /var/lib/mysql/db1/estate.MYI
     98304          0          0      98304          0          0          0 /var/lib/mysql/ibdata1
     34300          0          0      34300          0          0          0 /var/lib/mysql/db2/estate.MYI
      1024          0          0       1024          0          0          0 /var/lib/mysql/ib_logfile1
       512          0          0        512          0          0          0 /var/lib/mysql/ib_logfile0
         0          0          0          0          0          0          0 /etc/hosts.deny
         0          0          0          0          0          0          0 /etc/hosts.allow

However, when I run SHOW PROCESSLIST on the server, I find no active processes!

What could possibly be going on here?

Thanks.

Deadlock: Behaviour of NOLOCK

Posted: 29 Apr 2013 08:27 PM PDT

We are running into deadlocks when we run concurrent instances of the following query:

BEGIN TRAN
  IF NOT EXISTS (SELECT TOP 1 * FROM Table1 WITH (NOLOCK) WHERE Col1 = Val1 AND Col2 = Val2)
  BEGIN
    INSERT INTO Table1 (ID, Col1, Col2)
    VALUES (-1, Val1, Val2)
  END
COMMIT TRAN

The deadlock results in the INSERT being aborted. I am trying to understand how the NOLOCK hint works here. This link says that NOLOCK acquires a table-level shared lock and a database-level schema lock. However, there is no page-level lock. So if there is no shared-to-exclusive lock conversion (i.e., from the SELECT to the INSERT), why would there be a deadlock? Is there lock escalation with NOLOCK if the SELECT query is run over a large table? Am I better off using UPDLOCK instead of NOLOCK for the SELECT, since there would then be no lock conversion conflicts?
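For comparison, the commonly cited shape for this check-then-insert pattern takes an update lock and holds it to the end of the transaction, so concurrent executions serialize on the existence check instead of deadlocking. A sketch using the same table and columns as above:

    BEGIN TRAN
      IF NOT EXISTS (SELECT 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
                     WHERE Col1 = Val1 AND Col2 = Val2)
      BEGIN
        INSERT INTO Table1 (ID, Col1, Col2)
        VALUES (-1, Val1, Val2)
      END
    COMMIT TRAN

This trades some concurrency for correctness; whether that trade is acceptable depends on how hot the table is.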

Monitor For Disabled Queue On RDS SQL Server

Posted: 29 Apr 2013 05:41 PM PDT

I would like to somehow get notified when a queue is disabled due to poison messages. (The issue is occasional resource contention; until we solve it, monitoring would help us stay aware.) In my case, SQL Server's Service Broker is running under Amazon's RDS environment, which limits what is available.

Suggestions?
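One low-footprint approach that stays within RDS limits is to poll the queue's status flag from a scheduled job or an external monitor; sys.service_queues exposes an is_receive_enabled flag that Service Broker clears when it disables a poisoned queue. A sketch:

    SELECT name, is_receive_enabled
    FROM sys.service_queues
    WHERE is_receive_enabled = 0;  -- any rows returned mean a disabled queue

A script that runs this every minute and alerts on a non-empty result would give the notification being asked for.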

Why would I use the MySQL "system" command?

Posted: 29 Apr 2013 01:40 PM PDT

What can I do with the MySQL client's system command? Why does it exist?

The most productive use I've seen is to look around the file system (e.g., to remember the file name you want to SOURCE or LOAD FILE).

mysql> system ls /tmp
backup.sql

mysql> source /tmp/backup.sql
Query OK, 1 row affected (0.02 sec)

mysql>

As near as I can tell, you can't pass a query result to the STDIN of a command, and you can't pass the STDOUT of a command into a MySQL query. It just doesn't seem widely useful.

Why does CREATE INDEX ... WITH ONLINE=ON block access to the table over a period of minutes?

Posted: 29 Apr 2013 12:20 PM PDT

I have an existing table:

CREATE TABLE dbo.ProofDetails
(
    ProofDetailsID int NOT NULL IDENTITY(1,1)
        CONSTRAINT PK_ProofDetails
        PRIMARY KEY CLUSTERED,
    ProofID int NULL,
    IDShownToUser int NULL,
    UserViewedDetails bit NOT NULL
        CONSTRAINT DF_ProofDetails_UserViewedDetails
        DEFAULT ((0))
);
GO

This table has 150,000,000 rows. The system is in operation 24x7x365, so there are no regularly occurring maintenance windows.

I want to add an index to the table, and with the Enterprise edition of SQL Server, I should be able to do that without blocking write access to the table. The command I used was:

CREATE INDEX IX_ProofDetails_ProofID_Etc
ON dbo.ProofDetails (ProofID, IDShownToUser)
INCLUDE (UserViewedDetails)
WITH (ONLINE=ON
    , ALLOW_ROW_LOCKS=ON
    , ALLOW_PAGE_LOCKS=ON
    , FILLFACTOR=100
    , MAXDOP=4
);

This ran for over a minute and then began blocking other sessions. I immediately cancelled the CREATE INDEX command, since I cannot block other sessions. During the first minute, nothing was blocking my CREATE INDEX command; sys.dm_exec_requests showed the process with a wait type of CXPACKET, which of course I don't think is a bad thing, since the operation was parallelized.

Am I misunderstanding the implementation of WITH (ONLINE=ON)? Or is there something else I need to be aware of?

The server is a fairly beefy machine with two quad-core Xeon E5-2643 3.3 GHz processors, 192 GB RAM, and SAN storage capable of 5,000+ IOPS. CPU is typically below 20%; RAM is 93% utilized, mostly by SQL Server. There is nothing else running on the box, just Windows Server 2012 and SQL Server 2012.

Oracle database created, what's next? Create schema or create tablespace?

Posted: 29 Apr 2013 12:01 PM PDT

I'm a programmer who's been thrown onto the DBA/sysadmin island all by myself, with a volleyball that I named Wilson. I'm trying to survive here.

I'm supposed to create a database for an application. The server where this will be running previously had a database for a pretty similar app. I don't know much about Oracle, so I reused the ZFS filesystems and left them how they were created (honestly, I didn't know why they were created that way, but I'm pretty sure it was for a good reason).

app/oradata_smart_ora1    858M  12.2G   858M  /oradata/SMART/ora1
app/oradata_smart_ora2   7.18M  18.0G  7.18M  /oradata/SMART/ora2
app/oradata_smart_ora3   7.18M  36.0G  7.18M  /oradata/SMART/ora3
app/oradata_smart_ora4   60.6K   400G  60.6K  /oradata/SMART/ora4
app/oradata_smart_redo1   400M  2.61G   400M  /oradata/SMART/redo1
app/oradata_smart_redo2   200M  2.80G   200M  /oradata/SMART/redo2
app/oradata_smart_redo3   200M  2.80G   200M  /oradata/SMART/redo3

Since I reused the filesystems, I created my database and placed the controlfiles in the same places where the old database files were (/oradata/SMART/ora1, /oradata/SMART/ora2, /oradata/SMART/ora3). Thinking the way MySQL works, I created app/oradata_smart_ora4 (mounted at /oradata/SMART/ora4, with 400G available) specifically to store the database there.

The database starts up and mounts with no problem. Now is where I'm stuck. I've read this, this, this, this, this and much more, but I still have doubts.

Note that this server will manage millions/billions of records throughout its lifetime.

  1. Now that my database is created, what's the next step? Create the schema or the tablespace?
  2. Tablespace questions: Are the tablespace datafile(s) where the actual data from tables is stored? How many are needed? Default or temporary? How much space will I need? Autoextend? (See the sketch after this list.)
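For orientation, here is a minimal sketch of the usual order of operations: create a tablespace with a datafile on the large filesystem, then create the application user (in Oracle a user and its schema are effectively the same thing) with that tablespace as its default. All names, sizes, and the password below are placeholders:

    CREATE TABLESPACE smart_data
      DATAFILE '/oradata/SMART/ora4/smart_data01.dbf' SIZE 10G
      AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;

    CREATE USER smart_app IDENTIFIED BY change_me
      DEFAULT TABLESPACE smart_data
      QUOTA UNLIMITED ON smart_data;

    GRANT CREATE SESSION, CREATE TABLE TO smart_app;

Tables created by smart_app then land in smart_data by default, and the tablespace grows as needed within the 400G filesystem.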

I'm really sorry for all these questions but I really want to do this right while following the best practices for Oracle. If you need more information for your answer let me know.

DB2 Load Failure due to timestampformat

Posted: 29 Apr 2013 11:41 AM PDT

I'm trying to load a file into a DB2 database table using DB2 LOAD. I'm able to load the file using the following statement from the Unix prompt:

db2 +p -x load from BOM_Sales_20130326.txt of DEL MODIFIED BY COLDEL\| timestampformat=\"YYYY-MM-DD hh:mm:ss\" insert into GEO_SALES.SDM_STL_VFRSK_SALES NONRECOVERABLE

But, when I try to call DB2 Load from a script after parameterising many of the options, it does not seem to work.

For example, I have re-written the statement as follows:

db2 +p -x "load client from $FILE_DIR/$filenm of DEL MODIFIED BY COLDEL\$FILE_DELMTR timestampformat=\"YYYY-MM-DD HH:MM:SS\" insert into $TGT_SCHEMA_NM.$FILE_STG_TBL_TGT NONRECOVERABLE"

All the variables are assigned correct values prior to calling the statement, but it seems to fail due to a wrong timestampformat. The error is as follows:

SQL3191N The field in row "F2-1", column "1" which begins with "2013-03-26|STL|5678|D|3212345" does not match the user specified DATEFORMAT, TIMEFORMAT, or TIMESTAMPFORMAT. The row will be rejected.

In fact, my need is to give the timestampformat value as a parameter as well, as follows:

db2 +p -x "load client from $FILE_DIR/$filenm of DEL MODIFIED BY COLDEL\$FILE_DELMTR timestampformat=$DTTIME_FMT insert into $TGT_SCHEMA_NM.$FILE_STG_TBL_TGT NONRECOVERABLE"

Since it was not working, just for testing purposes, the variable $DTTIME_FMT was replaced with \"YYYY-MM-DD HH:MM:SS\". As said earlier, that does not seem to work either.

I'd appreciate help fixing this; I'd also like to know whether I can parameterise all these load options as I've attempted.
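One debugging aid, sketched below with the same variables as above: build the whole command in a shell variable and echo it before running, so you can see exactly what the db2 CLP receives after the shell resolves the quoting. Two things worth checking along the way: inside double quotes, \$ suppresses expansion, so COLDEL\$FILE_DELMTR passes the literal text $FILE_DELMTR as the delimiter (and the rejected row, where the whole pipe-separated line lands in column 1, suggests the delimiter indeed never took effect); and the working invocation used lowercase hh:mm:ss while the failing one used HH:MM:SS, so keeping the format string byte-identical to the working one is safest.

    DTTIME_FMT='YYYY-MM-DD hh:mm:ss'
    CMD="load client from $FILE_DIR/$filenm of DEL MODIFIED BY COLDEL$FILE_DELMTR timestampformat=\"$DTTIME_FMT\" insert into $TGT_SCHEMA_NM.$FILE_STG_TBL_TGT NONRECOVERABLE"
    echo "$CMD"      # inspect the fully expanded command first
    db2 +p -x "$CMD"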

Thanks, Tom

Upgrade SQL Server 2005 Enterprise to SQL Server 2008 R2 Standard

Posted: 29 Apr 2013 01:34 PM PDT

I understand that this is not a supported path, but does anyone have any insight about how to accomplish this? I can't seem to find any documents on the subject. I need my QA environment (2005) to match my production (2008 R2). I just joined this company, and this is one of my first tasks...

Optimizing Query

Posted: 29 Apr 2013 10:58 AM PDT

Can anyone help me tune this query?

explain extended
SELECT DISTINCT a.msisdn FROM `std_msc_opr_wise` a
LEFT OUTER JOIN
  (SELECT msisdn
   FROM as_treat_pre_usage_30days
   GROUP BY msisdn
   HAVING SUM(std_total_og_mou)>0) b ON a.msisdn=b.msisdn
WHERE
  b.msisdn IS NULL
  AND a.call_dt BETWEEN '2013-03-27' AND '2013-04-28';

+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
| id | select_type | table                     | type  | possible_keys | key     | key_len | ref  | rows     | filtered | Extra                             |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
|  1 | PRIMARY     | a                         | ALL   | ix_cd         | NULL    | NULL    | NULL | 73985537 |    50.00 | Using where; Using temporary      |
|  1 | PRIMARY     | <derived2>                | ALL   | NULL          | NULL    | NULL    | NULL |  1121664 |   100.00 | Using where; Not exists; Distinct |
|  2 | DERIVED     | as_treat_pre_usage_30days | index | NULL          | PRIMARY | 14      | NULL |  3033621 |   100.00 |                                   |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
3 rows in set, 1 warning (1.70 sec)

________________________________________________________________________

explain extended
SELECT DISTINCT a.msisdn FROM `std_msc_opr_wise` a FORCE INDEX(ix_cd) LEFT OUTER JOIN
  (SELECT msisdn FROM as_treat_pre_usage_30days GROUP BY msisdn HAVING SUM(std_total_og_mou)>0) b
  ON a.msisdn=b.msisdn
WHERE b.msisdn IS NULL AND a.call_dt BETWEEN '2013-03-27' AND '2013-04-28';

+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
| id | select_type | table                     | type  | possible_keys | key     | key_len | ref  | rows     | filtered | Extra                             |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
|  1 | PRIMARY     | a                         | range | ix_cd         | ix_cd   | 4       | NULL | 36992756 |   100.00 | Using where; Using temporary      |
|  1 | PRIMARY     | <derived2>                | ALL   | NULL          | NULL    | NULL    | NULL |  1121664 |   100.00 | Using where; Not exists; Distinct |
|  2 | DERIVED     | as_treat_pre_usage_30days | index | NULL          | PRIMARY | 14      | NULL |  3033621 |   100.00 |                                   |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+

____________________________________________________

TABLE SCHEMA

____________________________________________________

CREATE TABLE `as_treat_pre_usage_30days` (
  `msisdn` varchar(12) NOT NULL COMMENT 'Subscriber number',
  `local_a2a_og_mou` bigint(20) DEFAULT NULL COMMENT 'Local A2A OG MOU of the subs',
  `local_a2o_og_mou` bigint(20) DEFAULT NULL COMMENT 'Local A2O OG MOU of the subs',
  `std_total_og_mou` bigint(20) DEFAULT NULL COMMENT 'STD OG MOU of the subs',
  `total_og_mou` bigint(20) DEFAULT NULL COMMENT 'Total OG MOU of the subs',
  PRIMARY KEY (`msisdn`),
  KEY `ix_std` (`std_total_og_mou`)
) ENGINE=InnoDB

____________________________________________________

CREATE TABLE `std_msc_opr_wise` (
  `msisdn` varchar(12) DEFAULT NULL COMMENT 'Mobile Number of the Subscriber',
  `call_dt` date DEFAULT NULL,
  `destination_type` varchar(5) DEFAULT NULL COMMENT 'Destination Type - A2A or A2O',
  `duration` int(6) DEFAULT NULL COMMENT 'Call Duration in Seconds',
  KEY `ix_ms` (`msisdn`),
  KEY `ix_cd` (`call_dt`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (to_days(call_dt))
(PARTITION p20130120 VALUES LESS THAN (735253) ENGINE = InnoDB,
 PARTITION p20130121 VALUES LESS THAN (735254) ENGINE = InnoDB,
 PARTITION p20130122 VALUES LESS THAN (735255) ENGINE = InnoDB,
 PARTITION p20130123 VALUES LESS THAN (735256) ENGINE = InnoDB,
 PARTITION p20130124 VALUES LESS THAN (735257) ENGINE = InnoDB,
 PARTITION p20130126 VALUES LESS THAN (735259) ENGINE = InnoDB,
 PARTITION p20130127 VALUES LESS THAN (735260) ENGINE = InnoDB,
 PARTITION p20130128 VALUES LESS THAN (735261) ENGINE = InnoDB,
 PARTITION p20130129 VALUES LESS THAN (735262) ENGINE = InnoDB,
 PARTITION p20130130 VALUES LESS THAN (735263) ENGINE = InnoDB,
 PARTITION p20130131 VALUES LESS THAN (735264) ENGINE = InnoDB,
 PARTITION p20130201 VALUES LESS THAN (735265) ENGINE = InnoDB,
 PARTITION p20130202 VALUES LESS THAN (735266) ENGINE = InnoDB,
 PARTITION p20130203 VALUES LESS THAN (735267) ENGINE = InnoDB,
 PARTITION p20130204 VALUES LESS THAN (735268) ENGINE = InnoDB,
 PARTITION p20130205 VALUES LESS THAN (735269) ENGINE = InnoDB,
 PARTITION p20130206 VALUES LESS THAN (735270) ENGINE = InnoDB,
 PARTITION p20130207 VALUES LESS THAN (735271) ENGINE = InnoDB,
 PARTITION p20130208 VALUES LESS THAN (735272) ENGINE = InnoDB,
 PARTITION p20130209 VALUES LESS THAN (735273) ENGINE = InnoDB,
 PARTITION p20130210 VALUES LESS THAN (735274) ENGINE = InnoDB,
 PARTITION p20130211 VALUES LESS THAN (735275) ENGINE = InnoDB,
 PARTITION p20130212 VALUES LESS THAN (735276) ENGINE = InnoDB,
 PARTITION p20130213 VALUES LESS THAN (735277) ENGINE = InnoDB,
 PARTITION p20130214 VALUES LESS THAN (735278) ENGINE = InnoDB,
 PARTITION p20130215 VALUES LESS THAN (735279) ENGINE = InnoDB,
 PARTITION p20130216 VALUES LESS THAN (735280) ENGINE = InnoDB,
 PARTITION p20130217 VALUES LESS THAN (735281) ENGINE = InnoDB,
 PARTITION p20130218 VALUES LESS THAN (735282) ENGINE = InnoDB,
 PARTITION p20130219 VALUES LESS THAN (735283) ENGINE = InnoDB,
 PARTITION p20130220 VALUES LESS THAN (735284) ENGINE = InnoDB,
 PARTITION p20130221 VALUES LESS THAN (735285) ENGINE = InnoDB,
 PARTITION p20130222 VALUES LESS THAN (735286) ENGINE = InnoDB,
 PARTITION p20130223 VALUES LESS THAN (735287) ENGINE = InnoDB,
 PARTITION p20130224 VALUES LESS THAN (735288) ENGINE = InnoDB,
 PARTITION p20130225 VALUES LESS THAN (735289) ENGINE = InnoDB,
 PARTITION p20130226 VALUES LESS THAN (735290) ENGINE = InnoDB,
 PARTITION p20130227 VALUES LESS THAN (735291) ENGINE = InnoDB,
 PARTITION p20130228 VALUES LESS THAN (735292) ENGINE = InnoDB,
 PARTITION p20130301 VALUES LESS THAN (735293) ENGINE = InnoDB,
 PARTITION p20130302 VALUES LESS THAN (735294) ENGINE = InnoDB,
 PARTITION p20130303 VALUES LESS THAN (735295) ENGINE = InnoDB,
 PARTITION p20130304 VALUES LESS THAN (735296) ENGINE = InnoDB,
 PARTITION p20130305 VALUES LESS THAN (735297) ENGINE = InnoDB,
 PARTITION p20130306 VALUES LESS THAN (735298) ENGINE = InnoDB,
 PARTITION p20130307 VALUES LESS THAN (735299) ENGINE = InnoDB,
 PARTITION p20130308 VALUES LESS THAN (735300) ENGINE = InnoDB,
 PARTITION p20130309 VALUES LESS THAN (735301) ENGINE = InnoDB,
 PARTITION p20130310 VALUES LESS THAN (735302) ENGINE = InnoDB,
 PARTITION p20130311 VALUES LESS THAN (735303) ENGINE = InnoDB,
 PARTITION p20130312 VALUES LESS THAN (735304) ENGINE = InnoDB,
 PARTITION p20130313 VALUES LESS THAN (735305) ENGINE = InnoDB,
 PARTITION p20130314 VALUES LESS THAN (735306) ENGINE = InnoDB,
 PARTITION p20130315 VALUES LESS THAN (735307) ENGINE = InnoDB,
 PARTITION p20130316 VALUES LESS THAN (735308) ENGINE = InnoDB,
 PARTITION p20130317 VALUES LESS THAN (735309) ENGINE = InnoDB,
 PARTITION p20130318 VALUES LESS THAN (735310) ENGINE = InnoDB,
 PARTITION p20130319 VALUES LESS THAN (735311) ENGINE = InnoDB,
 PARTITION p20130320 VALUES LESS THAN (735312) ENGINE = InnoDB,
 PARTITION p20130321 VALUES LESS THAN (735313) ENGINE = InnoDB,
 PARTITION p20130322 VALUES LESS THAN (735314) ENGINE = InnoDB,
 PARTITION p20130323 VALUES LESS THAN (735315) ENGINE = InnoDB,
 PARTITION p20130324 VALUES LESS THAN (735316) ENGINE = InnoDB,
 PARTITION p20130325 VALUES LESS THAN (735317) ENGINE = InnoDB,
 PARTITION p20130326 VALUES LESS THAN (735318) ENGINE = InnoDB,
 PARTITION p20130327 VALUES LESS THAN (735319) ENGINE = InnoDB,
 PARTITION p20130328 VALUES LESS THAN (735320) ENGINE = InnoDB,
 PARTITION p20130329 VALUES LESS THAN (735321) ENGINE = InnoDB,
 PARTITION p20130330 VALUES LESS THAN (735322) ENGINE = InnoDB,
 PARTITION p20130331 VALUES LESS THAN (735323) ENGINE = InnoDB,
 PARTITION p20130401 VALUES LESS THAN (735324) ENGINE = InnoDB,
 PARTITION p20130402 VALUES LESS THAN (735325) ENGINE = InnoDB,
 PARTITION p20130403 VALUES LESS THAN (735326) ENGINE = InnoDB,
 PARTITION p20130404 VALUES LESS THAN (735327) ENGINE = InnoDB,
 PARTITION p20130405 VALUES LESS THAN (735328) ENGINE = InnoDB,
 PARTITION p20130406 VALUES LESS THAN (735329) ENGINE = InnoDB,
 PARTITION p20130407 VALUES LESS THAN (735330) ENGINE = InnoDB,
 PARTITION p20130408 VALUES LESS THAN (735331) ENGINE = InnoDB,
 PARTITION p20130409 VALUES LESS THAN (735332) ENGINE = InnoDB,
 PARTITION p20130410 VALUES LESS THAN (735333) ENGINE = InnoDB,
 PARTITION p20130411 VALUES LESS THAN (735334) ENGINE = InnoDB,
 PARTITION p20130413 VALUES LESS THAN (735336) ENGINE = InnoDB,
 PARTITION p20130414 VALUES LESS THAN (735337) ENGINE = InnoDB,
 PARTITION p20130415 VALUES LESS THAN (735338) ENGINE = InnoDB,
 PARTITION p20130416 VALUES LESS THAN (735339) ENGINE = InnoDB,
 PARTITION p20130417 VALUES LESS THAN (735340) ENGINE = InnoDB,
 PARTITION p20130418 VALUES LESS THAN (735341) ENGINE = InnoDB,
 PARTITION p20130419 VALUES LESS THAN (735342) ENGINE = InnoDB,
 PARTITION p20130420 VALUES LESS THAN (735343) ENGINE = InnoDB,
 PARTITION p20130421 VALUES LESS THAN (735344) ENGINE = InnoDB,
 PARTITION p20130422 VALUES LESS THAN (735345) ENGINE = InnoDB,
 PARTITION p20130423 VALUES LESS THAN (735346) ENGINE = InnoDB,
 PARTITION p20130424 VALUES LESS THAN (735347) ENGINE = InnoDB,
 PARTITION p20130425 VALUES LESS THAN (735348) ENGINE = InnoDB,
 PARTITION p20130426 VALUES LESS THAN (735349) ENGINE = InnoDB,
 PARTITION p20130427 VALUES LESS THAN (735350) ENGINE = InnoDB,
 PARTITION p20130428 VALUES LESS THAN (735351) ENGINE = InnoDB,
 PARTITION p20130429 VALUES LESS THAN (735352) ENGINE = InnoDB,
 PARTITION p20130430 VALUES LESS THAN (735353) ENGINE = InnoDB,
 PARTITION p20130501 VALUES LESS THAN (735354) ENGINE = InnoDB) */
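Worth noting: since msisdn is the primary key of as_treat_pre_usage_30days, the GROUP BY msisdn ... HAVING SUM(std_total_og_mou) > 0 derived table reduces to a simple per-row test, so the anti-join can be written without materializing a derived table at all. A sketch of the equivalent form (same tables and columns as above; whether it is faster depends on the MySQL version and optimizer):

    SELECT DISTINCT a.msisdn
    FROM std_msc_opr_wise a
    WHERE a.call_dt BETWEEN '2013-03-27' AND '2013-04-28'
      AND NOT EXISTS (SELECT 1
                      FROM as_treat_pre_usage_30days b
                      WHERE b.msisdn = a.msisdn
                        AND b.std_total_og_mou > 0);

The subquery can then probe the primary key on msisdn once per outer row, instead of the outer query scanning a 3-million-row derived table that has no index.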

I can't install db-mysql in node.js

Posted: 29 Apr 2013 05:45 PM PDT

When I execute the command npm install db-mysql, I get the following result. What's the solution to this problem?

C:\Users\Mouad>npm install db-mysql
npm http GET https://registry.npmjs.org/db-mysql
npm http GET https://registry.npmjs.org/db-mysql
npm http GET https://registry.npmjs.org/db-mysql
npm ERR! Error: SELF_SIGNED_CERT_IN_CHAIN
npm ERR!     at SecurePair.<anonymous> (tls.js:1283:32)
npm ERR!     at SecurePair.EventEmitter.emit (events.js:92:17)
npm ERR!     at SecurePair.maybeInitFinished (tls.js:896:10)
npm ERR!     at CleartextStream.read [as _read] (tls.js:430:15)
npm ERR!     at CleartextStream.Readable.read (_stream_readable.js:294:10)
npm ERR!     at EncryptedStream.write [as _write] (tls.js:344:25)
npm ERR!     at doWrite (_stream_writable.js:211:10)
npm ERR!     at writeOrBuffer (_stream_writable.js:201:5)
npm ERR!     at EncryptedStream.Writable.write (_stream_writable.js:172:11)
npm ERR!     at write (_stream_readable.js:547:24)
npm ERR! If you need help, you may report this log at:
npm ERR!     <http://github.com/isaacs/npm/issues>
npm ERR! or email it to:
npm ERR!     "npm-@googlegroups.com"

npm ERR! System Windows_NT 6.1.7601
npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "db-mysql"
npm ERR! cwd C:\Users\Mouad
npm ERR! node -v v0.10.4
npm ERR! npm -v 1.2.18
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR!     C:\Users\Mouad\npm-debug.log
npm ERR! not ok code 0
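For what it's worth, SELF_SIGNED_CERT_IN_CHAIN is a TLS verification failure against the npm registry (often caused by a corporate proxy injecting its own certificate), not a problem in db-mysql itself. One commonly suggested workaround is below; note that it weakens certificate checking, so installing the proxy's CA certificate properly is the better long-term fix:

    npm config set strict-ssl false
    npm install db-mysql

Setting strict-ssl back to true afterwards is advisable.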

Table for optional parent/child relationship

Posted: 29 Apr 2013 12:09 PM PDT

Assume we have the following tables: Item, Parent, and Child, where Parent is the parent of Child.

An item can belong to either a parent or a child, but not both.

I have other tables that are similar to Item, and they too can belong to either a Parent or a Child.

Should I simply add two nullable FKs to them?

Can I enforce in the database that exactly one of Parent or Child must exist?
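This is commonly enforced with two nullable foreign keys plus a CHECK constraint. A sketch in generic SQL (table and column names are illustrative; note that MySQL before 8.0.16 parses but ignores CHECK constraints):

    CREATE TABLE Item (
        ItemID   int PRIMARY KEY,
        ParentID int NULL REFERENCES Parent (ParentID),
        ChildID  int NULL REFERENCES Child (ChildID),
        -- exactly one of the two references must be set
        CHECK (
            (ParentID IS NOT NULL AND ChildID IS NULL) OR
            (ParentID IS NULL AND ChildID IS NOT NULL)
        )
    );

The same pattern repeats for each of the other Item-like tables.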

What is the correct model for related tables with millions of rows?

Posted: 29 Apr 2013 10:49 AM PDT

I need to create question and answer tables that will have millions (maybe billions) of rows.

The current model is:

Question table:
  id_question (PK, auto increment)
  id_user
  question_content
  question_date

Answer table:
  id_answer (PK, auto increment)
  id_question
  id_user
  answer_content
  answer_date

Is this a correct model (considering query performance)? Should I add the id_question and id_user columns to the primary key? (A sketch of the model with supporting indexes follows below.)

Thanks
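A sketch of the model above with the secondary indexes that the common lookups (answers by question, posts by user) would need; the MySQL/InnoDB syntax and exact types are assumptions:

    CREATE TABLE question (
      id_question      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      id_user          BIGINT UNSIGNED NOT NULL,
      question_content TEXT NOT NULL,
      question_date    DATETIME NOT NULL,
      KEY ix_user (id_user)
    ) ENGINE=InnoDB;

    CREATE TABLE answer (
      id_answer      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      id_question    BIGINT UNSIGNED NOT NULL,
      id_user        BIGINT UNSIGNED NOT NULL,
      answer_content TEXT NOT NULL,
      answer_date    DATETIME NOT NULL,
      KEY ix_question (id_question),
      KEY ix_user (id_user),
      CONSTRAINT fk_answer_question FOREIGN KEY (id_question)
        REFERENCES question (id_question)
    ) ENGINE=InnoDB;

Keeping the narrow auto-increment column as the primary (clustered) key and adding secondary indexes on id_question and id_user is generally preferable to widening the primary key itself.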

Is it ever a good idea to denormalize for integrity?

Posted: 29 Apr 2013 12:13 PM PDT

I'm using Postgres 9.2 to develop a quiz app, in which I present the user with a series of problems and record their answers.

These problems can take a variety of forms - they might be multiple choice (What's 2 + 2? A: 2. B: 3. C: 4), or they might require the user to calculate an answer of their own, in which case I need to constrain their input to be something like '440' or '1/2' or '.333'. Some of the problems might prompt the user to type in an essay. And, of course, I may need to add more types of problems later.

The tables I'm envisioning, in a simplified form, are something like this:

CREATE TABLE problems
(
  problem_id serial NOT NULL PRIMARY KEY,
  problem_type text NOT NULL, -- Refers to a lookup table
  answer_letter text, -- If not null, refers to the correct answer in the answers table below.
  response text -- If not null, represents a correct answer to be input, like '0.4'
);

CREATE TABLE answers
(
  problem_id integer, -- Foreign key
  answer_letter text,
  content text,

  CONSTRAINT answers_pkey PRIMARY KEY (problem_id, answer_letter)
);

CREATE TABLE questions
(
  user_id integer,
  created_at timestamptz,
  problem_id integer, -- Foreign key
  answer_letter text,
  response text,

  CONSTRAINT questions_pkey PRIMARY KEY (user_id, created_at)
);

So, the problems table would have a variety of constraints to ensure that:

  • When problem_type is 'multiple_choice', answer_letter must not be null and response must be null.
  • When problem_type is 'user_input', answer_letter must be null and response must not be null. Response must also consist of only a few characters.
  • When problem_type is 'essay', both answer_letter and response must be null, since I can't really have a correct answer for an essay question.

This is clean enough, and constrains the problems table just fine. I might use an enum instead of a lookup table for problem_type, since all of its possible values will already be baked into the schema anyway.

My difficulty is, how to constrain the questions table? Its constraints will be very similar (I don't want an answer_letter supplied for a question that references an essay problem, and so on). I can think of a few options:

  1. Create a unique index on problems (problem_id, problem_type), add a problem_type field to questions and include it in the reference to problems, then use it in check constraints in a way similar to problems (see the sketch after this list). This is the way I'm leaning right now, since it seems cleanest overall, but then I'm denormalizing to achieve the proper constraints, which feels wrong to me.
  2. Create three problem tables, one for each type, and constrain them separately. Do the same with three question tables. This feels like the pure relational way to me, which is typically what I'd like to go for, but it also feels way too complex. I don't want to have to deal with unioning three tables (or more, later on) to get a user's question history.
  3. Go with #2, but use Postgres' inheritance support to try to keep the logic simple. But since you can't point a foreign key at a hierarchy of tables, again it's not a very clean solution, it's a solution that would again need to be hacked around.
  4. Use triggers to ensure that questions data fits the corresponding problem. Maybe it's because I don't have much experience with triggers, but I'm concerned that cramming that sort of imperative logic into the DB will eventually become unmanageable.
  5. Forget the question constraints, handle it in app logic, and hope for the best. You can't constrain everything all the time. Of course, I don't really like this idea either.
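For reference, a minimal sketch of option 1 in Postgres syntax (the constraint names are made up, and the CHECK shown covers only the essay case; the full set would mirror whatever shape a question row must have per type):

    ALTER TABLE problems
      ADD CONSTRAINT problems_id_type_key UNIQUE (problem_id, problem_type);

    ALTER TABLE questions
      ADD COLUMN problem_type text NOT NULL,  -- on a populated table, backfill first
      ADD CONSTRAINT questions_problem_fkey
        FOREIGN KEY (problem_id, problem_type)
        REFERENCES problems (problem_id, problem_type),
      ADD CONSTRAINT questions_essay_check
        CHECK (problem_type <> 'essay' OR answer_letter IS NULL);

The composite foreign key guarantees the copied problem_type can never disagree with the parent row, which is what makes the local CHECK constraints trustworthy.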

I feel like there's a problem with my approach to modeling that's leading me to these difficulties, especially since I'm running into a couple of cases very similar to this elsewhere in my schema (this one was just the easiest to describe). Maybe it's just a domain that's difficult to model, but I feel like there must be a better way, and I'm just not normalizing correctly.

Help? Thanks!

Separating tables vs having one table

Posted: 29 Apr 2013 01:42 PM PDT

At the moment I have a table setup that looks somewhat like this:

create table tbl_locationcollections
(
   id int(11) PRIMARY KEY IDENTITY, -- (primary key)
   name varchar(100) not null,
   cleanname varchar(100) not null,
   typeid int(11) not null -- (foreign key)
)

create table tbl_locationcollectiontypes
(
   id int(11) PRIMARY KEY IDENTITY, -- (primary key)
   type varchar(100)
)

A type would be something like country, state, city, etc.

I would then join to the various type tables like this:

create table tbl_states2locationcollection
(
   id int(11) PRIMARY KEY IDENTITY, -- (primary key)
   stateid int(11) not null, -- (foreign key)
   locationcollectionid int(11) not null -- (foreign key)
)

with other tables like tbl_cities2locationcollection, etc.

Is this style of separation better for speed and readability than having a table such as:

create table tbl_locations2collection
(
   id int(11) PRIMARY KEY IDENTITY, -- (primary key)
   locationid int(11) not null, -- (foreign key to the type of: state, country, city)
   typeid int(11) not null -- (foreign key)
)

where all the different types are mixed in together.

The only downside I can see of not having a mixed table is having to create a specific table for each type that is added in the future.

Easier way to handle so many isnull() situations

Posted: 29 Apr 2013 09:13 PM PDT

Is there a good way to avoid writing the isnull() function so many times inside a stored procedure?
I have a stored procedure that uses the isnull() function almost 30 times. I think I am missing a concept, but until I find a better way to get my data I'd love to clean my code of so many isnull() calls.
Can I temporarily set SQL Server 2008 R2 to treat null values as float 0?
Most of my isnull() calls just substitute a zero value where there is no data, so that I can do math operations.

EDIT: I am not lazy; I am just trying to clean up my code and avoid SELECT parts that look like this:

select
   vrsta.NAZIV
  ,isnull(sum(prod.prod_mpci),0) as prod_MPCI
  ,isnull(sum(isnull(mal_MPCI,0) + (vel_kol * isnull(mal_MPC,vel_VPC))),0) as lager_mpci
  ,isnull(sum(vel_NCI),0)+isnull(sum(mal_NCI),0) as lager_nci
  ,sum( case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) > isnull(prod.prod_kol,0) and isnull(dzn.dzn,'2010-01-01') < @dzu
               then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_MPC,vel_VPC)
               else 0 end
      ) as visak_MPC
  ,sum( case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) > isnull(prod.prod_kol,0) and isnull(dzn.dzn,'2010-01-01') < @dzu
               then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_NC,vel_NC)
               else 0 end
      ) as visak_NC
  ,sum( case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) <= isnull(prod.prod_kol,0)
               then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_MPC,vel_VPC)
               else 0 end
      ) as manjak_MPC
  ,sum( case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) <= isnull(prod.prod_kol,0)
               then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_NC,vel_NC)
               else 0 end
      ) as manjak_NC
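One way to write each isnull() only once is to default the nullable columns in a CROSS APPLY (or a derived table) and then do all the arithmetic against the cleaned names. A sketch using a few of the columns above (the FROM/JOIN part is elided here, as in the original):

    select vrsta.NAZIV
          ,sum(x.vel_kol + x.mal_kol - x.prod_kol) as primjer
    from ... -- same FROM/JOINs as the original query
    cross apply (select isnull(vel_KOL, 0)       as vel_kol
                       ,isnull(mal_KOL, 0)       as mal_kol
                       ,isnull(prod.prod_kol, 0) as prod_kol) as x
    group by vrsta.NAZIV;

Each nullable column is defaulted exactly once, and every later expression reads the already-cleaned alias.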

T-SQL Issues With Defining 'AS'

Posted: 29 Apr 2013 12:04 PM PDT

I am creating a fully dynamic application but have run into a bit of a hiccup. There are multiple 'undefined' fields that can be defined by the users. The only problem is redisplaying them. Currently they show as [UDF_0] to [UDF_7], i.e.:

SELECT [TABLE].[UDF_0],
       [TABLE].[UDF_1],
       [TABLE].[UDF_2],
       [TABLE].[UDF_3],
       [TABLE].[UDF_4],
       [TABLE].[UDF_5],
       [TABLE].[UDF_6],
       [TABLE].[UDF_7]
FROM [TABLE]

Would obviously display as:

UDF_0 || UDF_1 || etc...  

What I would like to do is display them to something like the effect below, but I can't for the life of me figure it out.

EXECUTE PROCEDURE [dbo].[GetProjectInfo] @Project varchar(100)

AS
BEGIN

SELECT [TABLE].[UDF_0] AS (SELECT [TBL_PROJECT_DESC].[UDF_0]
                           FROM [TBL_PROJECT_DESC]
                           WHERE [TBL_PROJECT_DESC].[PROJECT_NAME]=@Project),
       --etc....
FROM [TABLE]

Desired display would be:

Field Name 0 || Field Name 1 || etc...  
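Column aliases can't be computed inline like that in T-SQL; they have to be literals, so the usual workaround is dynamic SQL. A sketch (it assumes TBL_PROJECT_DESC stores the display names as described; only UDF_0 is shown, and the pattern extends to the other seven columns):

    DECLARE @udf0 sysname, @sql nvarchar(max);

    SELECT @udf0 = [UDF_0]
    FROM [TBL_PROJECT_DESC]
    WHERE [PROJECT_NAME] = @Project;

    SET @sql = N'SELECT [TABLE].[UDF_0] AS ' + QUOTENAME(@udf0)
             + N' FROM [TABLE];';

    EXEC sys.sp_executesql @sql;

QUOTENAME guards the generated alias against stray brackets or quotes in the user-defined names.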

SQL Server update query on linked server causing remote scan

Posted: 29 Apr 2013 08:04 PM PDT

I have a SQL Server 2012 instance set up as a linked server on a SQL Server 2008 server.

The following query executes in less than 1 second:

SELECT kg.IdGarment
FROM Products p
INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
WHERE log.ActionType = 'I'

t_ProcessIT_Garment contains 37,000 rows, the query returns two records, and the IdGarment column is the primary key. No problem here.

However, if I run this query to do a remote update, it takes 24 seconds, and 2 rows are affected:

UPDATE [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment SET
   IdGarment = IdGarment
FROM Products p
INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
WHERE log.ActionType = 'I';

I tested using IdGarment = IdGarment to keep things simple. The execution plan shows Remote Query for the first query, but Remote Scan for the second query, which carries 100% of the cost.

The table joins are identical in both queries, so why is it using Remote Scan for the second query, and how do I fix this?

MySQL - How to optimize retrieval time in a table

Posted: 29 Apr 2013 06:04 PM PDT

I have a query like the one below, run against a single table with 200 million records. I am using BTree indexes on the table.

mysql> select COUNT(DISTINCT id) from [tablename] where [columname] >= 3;
+--------------------+
| COUNT(DISTINCT id) |
+--------------------+
|            8242063 |
+--------------------+
1 row in set (3 min 23.53 sec)

I am not satisfied with this timing! How can I reduce the query time to below 30 seconds? Any suggestions would be very helpful; one possibility is sketched below.

Thank you!
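A sketch of that possibility (tablename, columname, and id are the placeholders from the query above): a composite index on the filter column plus id lets the whole count be answered from the index, avoiding a full table scan:

    ALTER TABLE tablename ADD INDEX ix_col_id (columname, id);

    SELECT COUNT(DISTINCT id)
    FROM tablename
    WHERE columname >= 3;

Whether this reaches the 30-second goal on 200 million rows depends on how selective columname >= 3 is and on how much of the index fits in the buffer pool.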

MySQL specific database configuration file

Posted: 29 Apr 2013 02:04 PM PDT

In MySQL's configuration file, I've globally disabled autocommit like so:

[mysqld]
autocommit=0

However, I need to turn MySQL's autocommit on for one specific Ruby on Rails database. It could be per user or per database (it doesn't matter). I'm thinking it would look something like this:

[mysqld]
autocommit=0
execute_sql="Custom SQL to set autocommit for a database"
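There is no per-database autocommit in MySQL, but there is init_connect, which runs a statement for every connection made by a non-SUPER account; a sketch of that direction (note it applies to all such connections, not just one database, so it only helps if the Rails app's user is the only non-SUPER account that needs autocommit on):

    [mysqld]
    autocommit=0
    init_connect="SET autocommit=1"

Having Rails itself issue SET autocommit=1 right after connecting is the more targeted alternative.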

SQL Server 2005 Replication

Posted: 29 Apr 2013 05:04 PM PDT

I am in the process of creating replication between 2 remote servers: server 1 is the Distributor and Publisher, and server 2 is the Subscriber.

server 1
windows 2003 server
192.168.10.1 connected by vpn
SQL Server 2005
domain1.local

server 2
windows 2003 server
192.168.10.6 connected by vpn
SQL Server 2005
domain2.local

When I set up replication, everything looked fine until I looked at the sync status, which said:

The Agent could not be started.
An exception occurred while executing a Transact-SQL statement or batch.
SQLServerAgent error: request to run job server1-username blah blah blah
from user sa refused because the job is already running from a request by user sa.
Changed database context to ... (error 22022).

I have cleared the jobs in SQL Server Agent and restarted the service.

Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SQL Server Management Studio, just not set up replication.

Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.

Mysqldump tables excluding some fields

Posted: 29 Apr 2013 01:04 PM PDT

Is there a way to mysqldump a table without some of its fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO and TATA. I just want to mysqldump some fields of the USER table, excluding fields like mail, ip_login, etc.

How can I do this?
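mysqldump has no column filter, so one common workaround (a sketch; the kept column names are illustrative) is to copy the wanted columns into a scratch table, dump just that table, and drop it:

    CREATE TABLE USER_dump AS
      SELECT id, login, created_at   -- only the fields to export
      FROM USER;

    -- then, from the shell:
    --   mysqldump tests USER_dump > user_partial.sql

    DROP TABLE USER_dump;

A view works for the SELECT side too, but mysqldump exports a view as its definition rather than its data, so a real table is the simpler route.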

How to do something like UPDATE DELAYED in MySQL

Posted: 29 Apr 2013 03:42 PM PDT

I have an averages table that should keep track of an average value over time. I don't want a row for each value, just a single row that continuously updates the average. What I've come up with is this:

set @value = 4;
set @name = "myAverageValue";
UPDATE `timing` SET
  `max` = greatest(`max`, @value),
  `average` = `average` + ((@value - `average`) / (`count` + 1)),
  `count` = `count` + 1
WHERE `name` = @name

Many clients may be doing this at the same time, and I don't want there to be any locking issues. I don't care what order the updates run in, since in the end it will all come out the same. I just want to send the UPDATE to the database and have it processed eventually, similar to an INSERT DELAYED. Does UPDATE LOW_PRIORITY do this, or does it cause the client to wait until the table is available?

How does SQL Server AlwaysOn help with scheduled jobs and SSIS packages?

Posted: 29 Apr 2013 02:12 PM PDT

I have 2 SQL Server instances set up with a database within an AlwaysOn availability group.

I also have jobs set up which run against the database. During a failover, how do I ensure the jobs will continue to run on the secondary server? Do I need to install the jobs and SSIS packages on both machines, manually disable them on the secondary machine, and then manually enable them in the case of a failover? Or is there built-in functionality to handle this?

Replication master binlog rotation when network is unavailable

Posted: 29 Apr 2013 04:04 PM PDT

I recently experienced an issue where the binlog file on the master rotated because network connectivity between the slave and master was unavailable.

After the network issue was resolved, the slave was not able to follow the master from the binlog and position it had been reading.

It was solved by purging the binlogs on the master up to the last one, and pointing the slave at that last binlog with the position carried over from the binlog it had been following.

I am wondering: is this behaviour normal?

Take individual MySQL database offline

Posted: 29 Apr 2013 07:04 PM PDT

MySQL does not have any SQL commands or internal mechanisms for

  • making an individual database unavailable / offline
  • moving an individual database

This being the case, how can you take an individual database offline?
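The usual approximation is to make the database unreachable rather than truly offline, by revoking the application accounts' privileges on it. A sketch (the account and database names are placeholders):

    REVOKE ALL PRIVILEGES ON mydb.* FROM 'appuser'@'%';
    FLUSH PRIVILEGES;

    -- to bring it back 'online':
    -- GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%';

Accounts holding global privileges can still reach the database, so this only works cleanly when application users are granted per-database.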

Choosing shard key and friendly URL Ids for my MongoDB

Posted: 29 Apr 2013 09:04 PM PDT

I have decided to use MongoDB as the database for a web application. However, I am having some difficulty getting started, and I hope you can help me with a few questions.

I am developing my application in ASP.NET with MongoDB as the back-end. I intend to start with a single server plus one replica, but I want to build it right so I won't have problems sharding the database in the future if I have to.

One of my biggest problems is choosing the right shard key and friendly URLs for my website.

I have a folders collection, with files as an embedded collection inside it. Each user can create any number of folders and add files to them. Each folder belongs to one user. I want to know the best shard key for this type of collection. Many queries will query by user, getting a folder and its items by querying the folders collection by its unique id. I will also use the id in the URL to fetch the folder and its files, e.g. mywebsite.com/folder/[the_id_of_the_folder]

I will also use paging in my application, so I need to query the data (also in a sharded environment) and get, for example, the last 10 records, or page 2 with 10 records, all ordered by the last time they were inserted/updated.

  • So my first question is: what is the best shard key to use on a single machine, considering that I will have to shard in the future?
  • Does the shard key have to be the primary unique id of the document in MongoDB?
  • How can I generate more user-friendly URLs? I'd prefer a numerical value instead of a GUID (is there an option to convert it?)

Help will be very much appreciated, as I am stuck and can't continue until I solve this.

SQL Server 2008 R2 Transactional Replication "Cannot insert explicit value for identity column..."

Posted: 29 Apr 2013 12:32 PM PDT

Right now I'm having an "identity crisis" with transactional replication in SQL Server 2008 R2. The database is running at compatibility level 90. There's a table that has an identity column and won't replicate over. The error is: "Cannot insert explicit value for identity column in table '' when IDENTITY_INSERT is set to OFF. (Source: MSSQLServer, Error number: 544)".

The "not for replication" is set to true for that table. I can't find any settings for the articles to specify this as well.

Any ideas are appreciated.

Is there a repository for pgadmin plugins?

Posted: 29 Apr 2013 05:24 PM PDT

I've been using PostgreSQL with pgAdmin III for a while now, and it's been bugging me that the Plugins menu option is empty. I've Googled a bit and found a plugin here and there, but I want to know whether there is a repository I'm missing out on.

If you don't know of a repo but are aware of additional plugins, please let me know about them as well.
