
Wednesday, March 20, 2013

[how to] sql server sum and count



sql server sum and count

Posted: 20 Mar 2013 08:52 PM PDT

Sample data :

LINE_NO  E_FIELD  F_FIELD  G_FIELD  HSA_STATUS  FAMILY  CACHE_FAMILY  Count  Percentage
23053B   00000    00000    00000    S           SUMMIT  WER           43     ??
23053B   00000    00000    00000    T           SUMMIT  WER           144    ??
23053B   00000    00000    00684    T           SUMMIT  WER           2      ??
23053B   00353    00418    00684    T           SUMMIT  WER           1      ??
23053B   00353    00418    00763    T           SUMMIT  WER           1      ??
23053B   00353    00418    01512    T           SUMMIT  WER           1      ??
23053B   00353    00418    06797    T           SUMMIT  WER           1      ??
23053B   00353    00418    30228    T           SUMMIT  WER           1      ??
23053B   00353    00418    31935    T           SUMMIT  WER           2      ??
23053B   05601    01402    00758    T           SUMMIT  WER           1      ??
23053B   05601    01402    09091    T           SUMMIT  WER           1      ??
23053B   05601    01402    65053    T           SUMMIT  WER           1      ??

This is my query:

SELECT LINE_NO,
       E_FIELD,
       F_FIELD,
       G_FIELD,
       HSA_STATUS,
       FAMILY,
       CACHE_FAMILY,
       Count = SUM(TOTAL)
FROM (
    SELECT LINE_NO,
           E_FIELD,
           F_FIELD,
           G_FIELD,
           HSA_STATUS,
           FAMILY,
           CACHE_FAMILY,
           COUNT(LINE_NO) AS Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN ('2013-03-07 10:10:00') AND ('2013-03-08 10:20:00')
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) AS a
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total

This is my sample data. I have already computed the count, as you can see from the column header. My problem is that I need to sum the counts, and then for each row compute its percentage of that total. For example, the total for the records above is 199, and the first record's count is 43, so the calculation should be 43/199 * 100. How can I display the percentage? Please help me; I need this urgently.

Is there any suggestion on how to get the count first, then sum those counts, and finally divide each count by the sum and multiply by 100 to get the percentage? Thanks
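
A minimal sketch of one way to do this, assuming SQL Server 2005 or later and reusing the inner query above: a windowed SUM() OVER () puts the grand total on every row, so each row's percentage can be computed directly.

SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
       Total AS [Count],
       -- 100.0 forces decimal arithmetic so integer division does not truncate to 0
       Total * 100.0 / SUM(Total) OVER () AS Percentage
FROM (
    SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
           COUNT(LINE_NO) AS Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN '2013-03-07 10:10:00' AND '2013-03-08 10:20:00'
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) AS a
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY;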

sql server - percentage calculation

Posted: 20 Mar 2013 06:39 PM PDT

Sample data :

LINE_NO  E_FIELD  F_FIELD  G_FIELD  HSA_STATUS  FAMILY  CACHE_FAMILY  Count
23053B   00000    00000    00000    S           SUMMIT  WER           43
23053B   00000    00000    00000    T           SUMMIT  WER           144
23053B   00000    00000    00684    T           SUMMIT  WER           2
23053B   00353    00418    00684    T           SUMMIT  WER           1
23053B   00353    00418    00763    T           SUMMIT  WER           1
23053B   00353    00418    01512    T           SUMMIT  WER           1
23053B   00353    00418    06797    T           SUMMIT  WER           1
23053B   00353    00418    30228    T           SUMMIT  WER           1
23053B   00353    00418    31935    T           SUMMIT  WER           2
23053B   05601    01402    00758    T           SUMMIT  WER           1
23053B   05601    01402    09091    T           SUMMIT  WER           1
23053B   05601    01402    65053    T           SUMMIT  WER           1

This is my query:

SELECT LINE_NO,
       E_FIELD,
       F_FIELD,
       G_FIELD,
       HSA_STATUS,
       FAMILY,
       CACHE_FAMILY,
       Count = SUM(TOTAL)
FROM (
    SELECT LINE_NO,
           E_FIELD,
           F_FIELD,
           G_FIELD,
           HSA_STATUS,
           FAMILY,
           CACHE_FAMILY,
           COUNT(LINE_NO) AS Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN ('2013-03-07 10:10:00') AND ('2013-03-08 10:20:00')
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) AS a
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total

This is my sample data. I have already computed the count, as you can see from the column header. My problem is that I need to sum the counts, and then for each row compute its percentage of that total. For example, the total for the records above is 199, and the first record's count is 43, so the calculation should be 43/199 * 100. How can I display the percentage? Please help me; I need this urgently.

Communication link failure for some queries to linked server

Posted: 20 Mar 2013 04:28 PM PDT

I am seeing the following error in SSMS (the server has Windows Server 2008 and Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 installed) when running some queries against the linked server, in particular long-running queries.

Simple selects from tables across the linked server work fine. This is a new issue that was noticed when SPs that have worked for years started failing.

I have run a Wireshark capture on the server, capturing for packets to port 1433 on the linked server host. At the tail of the capture, I see many (10) TCP Keep-Alives being issued (after a message regarding bad checksum) and then an RST packet. The RST packet is correlated with the error below being returned to the client.

There are other database servers on our network, where the linked server is configured identically, that don't exhibit this issue.

I have found some articles such as this and this. We are using the implicated Broadcom NICs. The Chimney Offload State setting is enabled on the server.

We will try disabling. Other thoughts on troubleshooting would be much appreciated.

OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Protocol error in TDS stream".
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 64, Level 16, State 1, Line 0
TCP Provider: The specified network name is no longer available.
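
For reference, a hedged sketch of disabling TCP Chimney Offload on Windows Server 2008 from an elevated prompt (verify against your NIC vendor's guidance; a reboot or NIC reset may be needed for it to fully take effect):

netsh int tcp set global chimney=disabled
netsh int tcp show global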

When is data moved during an Oracle partition split?

Posted: 20 Mar 2013 06:31 PM PDT

I need to split an Oracle partition, and I'm confused about whether or not Oracle is going to physically relocate the data when I execute the split. Here's what I intend to do:

alter table SCHEMA.DATATABLE
    split partition MAXVAL_PART
    at (TO_DATE(' 2013-04-01 00:00:00',
                'SYYYY-MM-DD HH24:MI:SS',
                'NLS_CALENDAR=GREGORIAN'))
    into (partition PARTFOREXISTINGROWS
          tablespace EXISTINGMAXVALTABLESPACE,
          partition MAXVAL_PART
          tablespace NEWMAXVALTABLESPACE);

The problem is that my current MAXVAL_PART has 320 GB of data in it, and I don't want to physically move this data on disk. If I cut off the partition at 2013-04-01, there will be no data in the new partition, but I'm getting conflicting information about whether the split will still necessitate moving all the data. Ideally, Oracle sees that my new MAXVAL partition will be empty, defines it in the new tablespace, and I'm all done.

Will this command move the data that's already on the disk, or will it leave it in place and just create a new partition?

This article says Oracle will detect the empty partition and do a fast split: http://docs.oracle.com/cd/B19306_01/server.102/b14231/partiti.htm

But I see in a number of places that a split means all data gets moved to new storage. Which is it?
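
A hedged way to verify afterwards whether the existing rows actually moved (requires access to DBA views): compare the partition's tablespace and size before and after the split.

SELECT segment_name, partition_name, tablespace_name, bytes
FROM dba_segments
WHERE owner = 'SCHEMA'
  AND segment_name = 'DATATABLE';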

SQL Server 2008R2 DBCC CheckDB never finishing

Posted: 20 Mar 2013 02:20 PM PDT

I'm running SQL Server 2008 R2 on Windows Server 2008 R2, and until recently we were having no problems. After recent Windows updates (no SQL Server updates), step 1 of our maintenance plan, DBCC CHECKDB, no longer completes. It does not error out; it just runs indefinitely, the longest so far being 36 hours. The DB is only about 500 MB, so I don't think size is an issue.

I was just wondering if anyone has seen this sort of issue in the past.
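
One hedged way to see whether the check is progressing at all (SQL Server 2005+): DBCC commands report completion estimates in sys.dm_exec_requests.

SELECT session_id, command, percent_complete,
       estimated_completion_time / 1000 / 60 AS est_minutes_remaining,
       wait_type, wait_time
FROM sys.dm_exec_requests
WHERE command LIKE 'DBCC%';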

Why does that query cause lock wait timeouts?

Posted: 20 Mar 2013 06:24 PM PDT

From time to time, I find a lot of these errors in my PHP error log:

MYSQL.1213: Deadlock found when trying to get lock; try restarting transaction

The problem persists for about 2 or 3 minutes. Thanks to stackoverflow, the reason was quite easy to find:

------------------------
LATEST DETECTED DEADLOCK
------------------------
130320 15:53:37
*** (1) TRANSACTION:
TRANSACTION 0 83395751, ACTIVE 35 sec, process no 26405, OS thread id 140507872417536 starting index read
mysql tables in use 3, locked 3
LOCK WAIT 3 lock struct(s), heap size 1216, 2 row lock(s)
MySQL thread id 1163191, query id 199629038 localhost sosci Updating
UPDATE `database`.`table` SET `invalidate`='2013-03-21 03:53:02' WHERE ((token='C7G8X3H9ABCDEEFGT') AND (invalidate IS NULL)) AND (project=26118) LIMIT 1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395751 lock_mode X locks rec but not gap waiting
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

What I do not understand is: why? The locked table is very small, only 61 entries (about 30 new and 30 deleted per day; the auto-incremented primary key is near 800,000). No column is especially large.

I use the InnoDB engine for this table (one key refers to another table with approx. 20,000 entries) and the problem occurs from time to time. RAM should not be an issue. The web server and MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) in larger tables (1-2 million entries) did not cause any trouble.

Thanks for your hints!
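
A hedged thought, assuming no secondary index currently covers the WHERE clause: the UPDATE above has to locate its row via the PRIMARY index, locking every row it examines along the way, and two such statements arriving in different orders can deadlock. An index matching the predicate (sketch below, names taken from the deadlock output) narrows each statement to the single row it needs:

ALTER TABLE `database`.`table`
    ADD INDEX idx_token_project (token, project);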

PostgreSQL: Get single attribute of UDT in SELECT statement

Posted: 20 Mar 2013 03:16 PM PDT

I created a user-defined type in a PostgreSQL 9.2 database and am trying, in a SELECT statement, to get only one attribute of the UDT. However, I can't seem to get my code to work.

Creation of type and table:

CREATE TYPE ttp AS (f1 int, f2 int);
CREATE TABLE tbl (ctyp ttp);

The SELECT statement:

-- does not work
SELECT ctyp.f1 FROM tbl
SELECT ctyp(f1) FROM testtable

Both times I get an error. Is it possible to access a single attribute defined in a UDT in a SELECT?
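
For what it's worth, PostgreSQL needs parentheses around the column so the parser does not read ctyp as a table name; a minimal sketch against the table above:

SELECT (ctyp).f1 FROM tbl;
-- or fully qualified:
SELECT (tbl.ctyp).f1 FROM tbl;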

Optimal drive configuration for SQL Server 2008R2

Posted: 20 Mar 2013 04:22 PM PDT

I have a fairly busy database server running SQL Server 2008 R2 that has the following setup:

  • SATA RAID 1 (2 Drives) - OS / Programs
  • SAS RAID 10 (4 Drives) - Sql Database Files (data and logs)
  • SAS RAID 1 (2 Drives) - TempDB (data and logs)

Assuming I can't add additional drives into this server, have I made the best use of the configuration I have available? Or should I consider another scheme here where logs are isolated from the data files, for example?

Streaming replication WAL files pg_xlog

Posted: 20 Mar 2013 05:40 PM PDT

We are using this awesome tool repmgr developed by 2nd Quadrant for streaming replication.

We set 1000 WAL segments to be kept. What I noticed, though, is that on the master we have about 600 WALs in the pg_xlog directory, whilst on the standby we have about 10 - yet replication is not lagging, and from the looks of the data it appears everything is there, i.e. replication works.

I might be missing something, but I'd expect the same number of WALs to be on both master and standby?

Is there some magical process which is removing them ?
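
A hedged explanation: wal_keep_segments (the "1000 WALs" setting) only governs how much the master retains for standbys that fall behind; the standby recycles streamed segments shortly after replaying them, so a near-empty pg_xlog there is normal. Replication health is better judged from the master's view:

SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;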

Need help with syntax of Instr in Access 2000. Trying to find values 0.0 to 9.9 in a text string. [migrated]

Posted: 20 Mar 2013 01:08 PM PDT

I am an Access novice trying to use InStr to select numeric data from a text string.

InStr([start], string_being_searched, string2, [compare])

[start] defaults to 1
string_being_searched is [Source Key]
string2 is a number, decimal and number between 0.0 and 9.9
[compare] is optional and I do not think it applies here

My problem is with string2, which returns a 0 for every occurrence when I use

Expr1: InStr([Source Key],'#.#')

or, with double quotes:

Expr1: InStr([Source Key],"#.#")

If I specify a value such as 1.4 or 3.2, or even a text value such as QJX (not even a number), it finds the value specified, but I need it to match two digits with a decimal point between them.

Please can you help as I am really stuck.

Thanks
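
A hedged pointer: InStr only searches for a literal substring; the # single-digit wildcard belongs to the Like operator, not to InStr. A minimal sketch (untested against your data):

Expr1: [Source Key] Like "*#.#*"

This yields True/False rather than a character position; wrapped in an IIf it can flag the rows that contain digit-dot-digit.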

When is it appropriate to use SQL Server Developer Edition?

Posted: 20 Mar 2013 04:27 PM PDT

Do you use SQL Server Developer Edition on server-class machines in DEV and STAGING environments?

I am working on a large project where (if it passes the proof of concept stage) we will have several large geographically distributed enterprise class database servers running SQL Server 2012 Enterprise Edition.

Our production environment will initially have 3 servers, our staging environment will have a minimum of 3 servers, and our development environment will have 1 server (hosting three instances).

I was under the impression that we would only need to acquire Enterprise licences for the actual production servers, and we could get by with developer edition in our developer and staging environments because they are not "production".

Other sources have told me that we need to have an enterprise licence on all of the above mentioned machines, and that the developer edition is only meant for a single developer on their workstation.

Since developer edition has all the sexy features of Enterprise, I can't really see the value of it on a workstation class machine, especially for developing and testing the type of high availability system we are building.

If we have to fork out Enterprise licences for a dev server, that will just about kill our proof of concept stage, thus killing the project. Forcing an enterprise licence on a staging environment will make management just want to skip staging altogether.

Modeling staff dimension in data warehouse

Posted: 20 Mar 2013 12:44 PM PDT

I need to draw a data warehouse diagram. I've got a problem with the staff area. I have to store information about workers' details, like name, age, etc., and about workers' job time and agreement details, like job_start, job_end, salary, etc.

At first I tried to draw a separate dimension for each set of data, but now I am wondering whether these dimensions should instead be connected to each other?

Avoiding repetition without creating a view

Posted: 20 Mar 2013 03:27 PM PDT

Suppose that I have a query Q1 and I need to run a query like the following:

Q1
union
select *
from (some query that uses Q1 outcome)

I would like to do that:

  • Without creating any view
  • Without running the Q1 query twice.

How can I do that on PostgreSQL?
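
A minimal sketch with a common table expression (PostgreSQL materializes CTE results, so Q1 runs only once; the table t is purely hypothetical):

WITH q1 AS (
    SELECT id, val FROM t WHERE val > 0   -- stands in for Q1
)
SELECT id, val FROM q1
UNION
SELECT id, val * 2                        -- some query that uses Q1's outcome
FROM q1;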

How to restore deleted rows from a binary log?

Posted: 20 Mar 2013 01:06 PM PDT

Background

I am trying to recover a few thousand rows that were accidentally deleted from a MySQL database. Luckily replication is enabled, so in the binary logs I have a record of each INSERT, UPDATE and DELETE that was performed on the database since it was created.

Unfortunately, in a previous attempt to recover the lost rows, someone copied all the INSERTs from the binlogs into the database. This created an even bigger mess by adding a bunch of duplicates and shifting the ID values, which are set to auto_increment.

Remediation

I created a new recovery database based on a backup made a few weeks prior to the incident. I then found the exact point in the binary logs where the backup ended. I extracted all the INSERTs/UPDATEs/DELETEs from the binlog file and fed them into the new recovery database, up until the point where the rows got deleted; this restored the database to where it was before the rows had been deleted.
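
For reference, a hedged sketch of that extraction step (log file name and positions are placeholders; the real values come from inspecting the log with mysqlbinlog first):

mysqlbinlog --start-position=107 --stop-position=933481 mysql-bin.000042 > replay.sql
mysql -u root -p recovery_db < replay.sql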

Problem

Because of the first failed recovery attempt, a lot of invalid data was added to the original database, alongside new legitimate data. Since one of the columns is set to auto_increment, this creates a mismatch in that column's values when trying to restore the database. I can't simply copy/paste all the missing legitimate binlog SQL statements, because the value of the auto_increment column has shifted due to the invalid data that was inserted, which I don't want.

Example

An example of the binlog file:

INSERT INTO reservations (id, name, date) VALUES (null, "john doe", "2013-03-20");
INSERT INTO reservations (id, name, date) VALUES (null, "jane doe", "2013-03-21");
INSERT INTO reservations (id, name, date) VALUES (null, "garbage", "2013-03-21");
INSERT INTO reservations (id, name, date) VALUES (null, "garbage", "2013-03-22");
INSERT INTO reservations (id, name, date) VALUES (null, "mary jane", "2013-03-23");
UPDATE reservations SET name = "mary janett" WHERE id=5;

OLD database

id, name, date
1, john doe, 2013-03-20
2, jane doe, 2013-03-21
3, garbage, 2013-03-21
4, garbage, 2013-03-22
5, mary janett, 2013-03-23

NEW database

id, name, date
1, john doe, 2013-03-20
2, jane doe, 2013-03-21
3, mary jane, 2013-03-23

Now if I were to copy the binlog SQL statements into the recovery database, ignoring the garbage INSERTs which I don't want, the UPDATE statement would fail because the value of `id` would be different. The same goes for DELETEs and INSERTs which rely on the value of `id`.

Question

Is there an "easy" way to recover all the new legitimate data that was introduced to the database after the incident? The only solution I can think of at the moment is manually going through each INSERT/UPDATE/DELETE and adjusting the value of `id`.

I am hoping there is a better solution, because with thousands of records, this approach would take a long time. Any help is greatly appreciated!

Database and query optimization

Posted: 20 Mar 2013 07:32 PM PDT

I have a database containing three tables: tbl_database (the main/primary table), tbl_cmdatabase and tbl_blacklist. The three tables share the same structure but differ in the number of records they hold. This is the SQL structure, which is basically the same for all three:

CREATE TABLE IF NOT EXISTS `tbl_ndatabase` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `id_telefonica` int(11) NOT NULL,
  `number` varchar(11) COLLATE utf8_spanish2_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish2_ci AUTO_INCREMENT=1;

Table tbl_database contains ~194,074 records and should continue to grow in large numbers; table tbl_cmdatabase contains ~45,742 records and can likewise grow, though perhaps not as dramatically as tbl_database; and finally table tbl_blacklist has ~92,038 records. I am developing an application where I need to verify that a number about to be inserted into tbl_database is not present in tbl_blacklist or tbl_cmdatabase, so for every insert I have to search those tables for the number (these queries are slow over this number of records). I need to know whether to optimize the tables or change the structure, because when I run these queries the server tends to fall over, as the shared hosting does not support heavy queries. Can anyone help me with this issue? Any suggestions?

Edit: Added a file for test data
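
A hedged sketch of the usual approach: index the number column in both lookup tables so the existence check becomes two B-tree probes instead of full scans (the literal values below are placeholders following the structure above):

ALTER TABLE tbl_blacklist  ADD INDEX idx_number (number);
ALTER TABLE tbl_cmdatabase ADD INDEX idx_number (number);

-- insert only if the number appears in neither table
INSERT INTO tbl_database (id_telefonica, number)
SELECT 123, '0412345678'
FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM tbl_blacklist  WHERE number = '0412345678')
  AND NOT EXISTS (SELECT 1 FROM tbl_cmdatabase WHERE number = '0412345678');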

Retrieving cached plans fails due to cached objects of database Mirror

Posted: 20 Mar 2013 07:58 PM PDT

I'm trying to retrieve cached execution plans for a specific database object, but this fails because of cached plans which belong to a mirrored database in MIRROR state.

The below query, without any additional WHERE clauses is failing with the following SQL error:

Msg 954, Level 14, State 1, Line 1
The database "DatabaseName" cannot be opened. It is acting as a mirror database.

SELECT *
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp

I suspect that the sys.dm_exec_query_plan function first tries to parse all the plan handles from the cache, but fails on cached objects belonging to a mirrored database.

Does anyone know if there are any ways to get around this, T-SQL wise?

Of course I could execute DBCC FREEPROCCACHE to clear the cache; however, I'm hoping for other solutions. I'm a bit amazed this function does not discard objects from mirrored databases while trying to parse the plans.
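
A hedged workaround sketch: pull the dbid plan attribute via sys.dm_exec_plan_attributes first, and only hand surviving handles to sys.dm_exec_query_plan (materializing them into a temp table keeps the optimizer from reordering the filter after the apply; swap in your mirrored database's name):

SELECT deqs.plan_handle
INTO   #handles
FROM   sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_plan_attributes(deqs.plan_handle) AS pa
WHERE  pa.attribute = 'dbid'
  AND  CONVERT(int, pa.value) <> DB_ID('DatabaseName');

SELECT h.plan_handle, deqp.query_plan
FROM   #handles AS h
CROSS APPLY sys.dm_exec_query_plan(h.plan_handle) AS deqp;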

Fastest way to delete matching rows in MySQL?

Posted: 20 Mar 2013 12:16 PM PDT

I have more than 10 million records in a table. I want to delete the records where an email column contains .ac..

What is the fastest method to perform the task?
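
A hedged sketch, assuming the table and column are called users and email (placeholders): on a 10-million-row table, deleting in bounded batches keeps each transaction, its locks and its undo small.

-- repeat until it affects 0 rows
DELETE FROM users
WHERE email LIKE '%.ac.%'
LIMIT 10000;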

help with simple sql queries

Posted: 20 Mar 2013 12:03 PM PDT

Could I have some help, please, with the following query?

Consider the following schema:

Suppliers(sid, sname, saddress)
Parts(pid, pname, color)
Catalog(sid -> Suppliers, pid -> Parts, cost)

sid and pid are foreign keys in the Catalog table.

The primary keys are emphasized, and ->Rel indicates a foreign key relationship with the primary key of Rel.

  1. find the pids of parts that are supplied by at least two different suppliers
  2. find the sids of suppliers who supply every red part
  3. find the sids of suppliers who supply every red part or every green part

can somebody help me with those queries please?
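
A hedged sketch for query 1 (standard SQL over the schema above); queries 2 and 3 are relational division and typically follow a double NOT EXISTS pattern:

SELECT pid
FROM Catalog
GROUP BY pid
HAVING COUNT(DISTINCT sid) >= 2;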

Create log file only for my executed queries

Posted: 20 Mar 2013 03:24 PM PDT

I have been using the following to set up the log file:

log_statement = 'all'
logging_collector = on
log_destination = 'stderr'
log_filename = 'filename'

After that, for testing, I just executed a query and then viewed the log file in Tools > Server Status. Unfortunately it has a lot of anonymous queries, including the query which I ran.

How can I filter out the queries which were executed along with my query (the anonymous queries)? Is there any way to log the affected table information, like:

  • which table got affected?
  • at what time was the query executed?
  • what fields got affected?

And how can I get these details using queries?
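
A hedged pair of settings that make the log filterable and answer the per-table questions above (postgresql.conf syntax; %t, %p, %u and %d are timestamp, process id, user and database):

log_statement = 'mod'                # log only statements that modify data, not every query
log_line_prefix = '%t [%p] %u@%d '   # stamp each line with when and who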

Retrieve Log Detail

Posted: 20 Mar 2013 03:10 PM PDT

I have been using the following to set up the log file:

log_statement = 'all'
logging_collector = on
log_destination = 'stderr'
log_filename = 'filename'

After that, for testing, I just executed a query and then viewed the log file in Tools > Server Status. Unfortunately it has a lot of anonymous queries, including the query which I ran.

How can I filter out the queries which were executed along with my query (the anonymous queries)? Is there any way to log the affected table information, like:

  • which table got affected?
  • at what time was the query executed?
  • what fields got affected?

And how can I get these details using queries?

PL/SQL trigger error ORA-0001 and ORA-06512

Posted: 20 Mar 2013 04:27 PM PDT

create or replace TRIGGER "SUP"
AFTER INSERT ON "EMP_REPORT"
REFERENCING OLD AS "OLD" NEW AS "NEW"
FOR EACH ROW
DECLARE
  miclaim_supervisor_count number;
  employee_company_code VARCHAR2(10);
  employee_businessunit number;
  projMgrs NUMBER;
  supId NUMBER;

  cursor projMgrsCursor is
    select b.BU_MEMBER_ID
    from BU_MEMBER b, EMP_SUB_DIV s
    where s.EMP_NO = :NEW.EMP_NO
      and s.SUB_DIVISION_CODE = '01' and s.DIV_CODE = '2'
      and b.BU_ID IN (select BU_ID from BU_MEMBER where BU_MEMBER_ID = :NEW.EMP_NO);

BEGIN
  delete from MICL_SUPERVISORS where EMP_NO = :NEW.EMP_NO and IS_OVVERRIDDEN = 0;

  select count(*) into miclaim_supervisor_count
  from MICL_SUPERVISORS
  where EMP_NO = :NEW.EMP_NO and IS_OVVERRIDDEN = 1;

  select COMPANY_CODE into employee_company_code
  from EMPLOYEE_MASTER
  where EMP_NO = :NEW.EMP_NO;

  projMgrs := 0;
  if (employee_company_code = 'F') then
    OPEN projMgrsCursor;
    LOOP
      select micl_sup_id_seq.nextval into supId from dual;
      FETCH projMgrsCursor INTO projMgrs;
      EXIT WHEN projMgrsCursor%NOTFOUND;
      insert into SUP VALUES ((supId), (SELECT SYSDATE FROM DUAL), :NEW.ENTRYADDEDBY_EMP_NO, 3000, 0, projMgrs, NULL, :NEW.EMP_NO);
    END LOOP;
    CLOSE projMgrsCursor;
  else
    if (miclaim_supervisor_count IS NULL or miclaim_supervisor_count < 1) then
      insert into SUP VALUES ((:NEW.ID), (SELECT SYSDATE FROM DUAL), :NEW.ENTRYADDEDBY_EMP_NO, 3000, 0, :NEW.SUP_EMP_NO, NULL, :NEW.EMP_NO);
    end if;
  end if;

END;

I created this trigger a week ago, with no compilation errors. But when I insert a record into "EMP_REPORT", I get the following errors:

 *"INSERT INTO"EMP_REPORT" (ID, ADDEDDATE, ENTRYADDEDBY_EMP_NO, SUP_EMP_NO, EMP_NO) VALUES ('71', TO_TIMESTAMP('19-MAR-13 09.55.57.983000000 AM', 'DD-MON-RR HH.MI.SS.FF AM'), '81', '841', '5295')  ORA-00001: unique constraint (SYS_C0023329) violated  ORA-06512: at line 1    One error saving changes to table  "EMP_REPORT":  Row 51: ORA-00001: unique constraint (SYS_C0023329) violated  ORA-06512: at line 1"*   

I can't figure out where I went wrong. How can I fix this?

Please note that I can't remove the constraint, and it is a primary key.

Segmentation fault (core dumped) with self-complied mysql-5.1.9-beta.tar.gz

Posted: 20 Mar 2013 06:35 PM PDT

I am compiling the mysql-5.1.9-beta.tar.gz source code under Ubuntu 12.04. My steps are as follows:

tar -xzf mysql-5.1.9-beta.tar.gz
cd /home/gaoyu/mysql-5.1.9-beta
./configure --prefix=/home/gaoyu/mysql-5.1.9 --with-debug
sudo make
sudo make install
sudo cp /home/gaoyu/mysql-5.1.9-beta/support-files/my-medium.cnf /etc/my.cnf
cd /home/gaoyu/mysql-5.1.9
bin/mysql_install_db --user=mysql
sudo cp /home/gaoyu/mysql-5.1.9-beta/support-files/mysql.server /etc/init.d/mysql
sudo bin/mysqld_safe --user=mysql &

The bug is :

gaoyu@FlashGroup:~/mysql-5.1.9$ sudo bin/mysqld_safe --user=mysql &
[1] 26196
gaoyu@FlashGroup:~/mysql-5.1.9$ nohup: ignoring input and redirecting stderr to stdout
Starting mysqld daemon with databases from /home/gaoyu/mysql-5.1.9/var
^C

(here I wait for a very long time, so I must press Ctrl + C to stop it and continue)

gaoyu@FlashGroup:~/mysql-5.1.9$ bin/mysql
Segmentation fault (core dumped)

After I run sudo bin/mysqld_safe --user=mysql &, I must wait a very long time. Why is that? And why do I get a segfault? Has anyone encountered a similar problem?

MySQL replication between VPS and shared host

Posted: 20 Mar 2013 12:18 PM PDT

I have a VPS where I have SSH access, and a shared host with cPanel. Both have MySQL.

I need to create a master/slave setup between the two servers. As per this tutorial, How To Set Up Database Replication In MySQL, I will need to modify the my.cnf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

    I have root access with ssh login where I can modify /etc/my.conf file.

  • Shared host

    Cpanel with phpmyadmin.

Slave SQL thread got hanged

Posted: 20 Mar 2013 12:30 PM PDT

We have a master-slave setup with ROW-based replication. We are seeing huge delays on the slave even though there is no activity running on either the master or the slave.

When we looked in, we observed that the SQL thread appears to be hung. It has been in the "Reading event from the relay log" state for the last 3 hours or more.

baleaf:(none)> show processlist;
+--------+-------------+-----------+------+---------+-------+----------------------------------+------+
| Id     | User        | Host      | db   | Command | Time  | State                            | Info |
+--------+-------------+-----------+------+---------+-------+----------------------------------+------+
| 217159 | system user |           | NULL | Connect | 1039  | Waiting for master to send event | NULL |
| 217160 | system user |           | NULL | Connect | 10045 | Reading event from the relay log | NULL |
+--------+-------------+-----------+------+---------+-------+----------------------------------+------+
4 rows in set (0.00 sec)

baleaf:blackarrow_dw> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Queueing master event to the relay log
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.001403
          Read_Master_Log_Pos: 95601911
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 12757514
        Relay_Master_Log_File: binlog.001403
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 32820355
              Relay_Log_Space: 75539220
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 7720
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
1 row in set (0.00 sec)

Could someone please have a look into this ASAP?

Partition of tablespace

Posted: 20 Mar 2013 07:45 PM PDT

My Postgres server runs on a Windows server, and three tablespaces have been created. The PostgreSQL server is installed on the C: drive and the tablespaces are on the D: and E: drives. How should I take a backup and restore it? Is it similar to a normal backup, and when restoring on another machine, do I need to have the same drive setup, or can I restore everything onto a single drive? Also, from the tablespace folder, is it possible to retrieve the data of only one table, which is huge?
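
A hedged note: pulling a single table's data straight out of a tablespace folder is not safe or supported, but a logical dump covers both concerns here - it is independent of drive layout on restore, and it can target one table (database and table names below are placeholders):

pg_dump -t big_table -F c -f big_table.dump mydb
pg_restore -d mydb_restored big_table.dump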

Restore a Database to a New database (T-SQL)

Posted: 20 Mar 2013 08:45 PM PDT

I have a database backup from a database, say SourceData, which needs to be restored to another database with the same schema but a different name (SourceData_1), using T-SQL. I don't know why I get the following error, despite specifying the REPLACE option. For example, I used the links below:

Restore Database script

http://msdn.microsoft.com/en-us/library/ms190447(v=sql.105).aspx

Based on the posts I found on the internet, first I got the logical names of the backup file using script below:

RESTORE FILELISTONLY
FROM DISK = 'C:\Backups\SourceData_20121227.BAK'

Then used the logical names in the following script:

RESTORE DATABASE SourceData_1
FROM DISK = 'C:\DEV\SourceData_20121227.BAK'
WITH
    MOVE 'SourceDate' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1.mdf',
    MOVE 'SourceDate_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1_log.ldf',
    REPLACE
GO

Below is the error I'm getting:

MODIFY FILE encountered operating system error 112 (failed to retrieve text for this error. Reason: 15105) while attempting to expand the physical file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1_log.ldf'.
RESTORE DATABASE is terminating abnormally.

I checked the disk space and can confirm it is fine. I assume the target database should exist before running the restore. Is that right?

Any suggestion how to fix this issue?

Thanks.

Character vs Integer primary keys

Posted: 20 Mar 2013 02:48 PM PDT

I'm designing a database with multiple lookup tables containing possible attributes of the main entities. I'm thinking of using a 4 or 5-character key to identify these lookup values rather than an auto-incrementing integer so that when I store these attribute IDs on the main tables I'll see meaningful values rather than just random numbers.

What are the performance implications of using a character field as a primary key rather than an integer?

I'm using MySQL if that matters.

[Edit]
These lookup tables have new records added infrequently. They are manually maintained, and the character-based keys are manually created as well. Here's an example:

CUISINES

ID     Description
-----  --------------
CHNSE  Chinese
ITALN  Italian
MXICN  Mexican

Tuesday, March 19, 2013

[how to] clusterdb stopped working for PostgreSQL database in Ubuntu 12.04 server



clusterdb stopped working for PostgreSQL database in Ubuntu 12.04 server

Posted: 19 Mar 2013 09:02 PM PDT

After a fresh PostgreSQL install (version 9.2) on my Ubuntu Server 12.04, the clusterdb utility stopped working. I used the same config files as I did with version 9.1.

Below is the actual command:

clusterdb -v -h localhost -p <port#> -U postgres -d <dbname>  

And this is the response that I'm getting:

clusterdb: could not connect to database <dbname>: could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port <port#>?

I'm able to connect to the server locally and from outside. I can also use other utilities, such as pg_dump, with no problems.

My pg_hba.conf file below:

# Database administrative login by Unix domain socket
local   all   postgres   peer

# TYPE  DATABASE  USER  ADDRESS  METHOD
# "local" is for Unix domain socket connections only
local   all   all   peer

# IPv4 local connections:
host    all   all   127.0.0.1/32   md5

# IPv6 local connections:
host    all   all   ::1/128   md5

# Allow replication connections from localhost, by a user with the replication privilege.
local   all   postgres   md5

host    all   all   x.x.x.x/x   md5   # my network
host    all   all   x.x.x.x/x   md5   # internal ip of the server

Were any security changes implemented in PostgreSQL 9.2 vs 9.1 that might prevent clusterdb from working properly?
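
A hedged first check, since the error is a plain TCP refusal rather than an authentication failure: confirm the 9.2 cluster is actually listening on localhost at that port (a leftover 9.1 cluster or a changed port/listen_addresses is a common culprit):

psql -h localhost -p <port#> -U postgres -d <dbname> -c 'SELECT version();'
netstat -ltnp | grep postgres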

Appropriate Action for Dead Locks / Blocking

Posted: 19 Mar 2013 08:00 PM PDT

I've set up my databases so that any time a procedure is blocked for more than 45 seconds, the database notifies the DBA email. Is it bad practice to set up a way to automatically kill the process that is doing the blocking? I'm assuming yes; however, waiting until an off-hours DBA can get to a computer and fix the problem seems problematic as well. Is there a better way to handle blocking processes?

SQL Server 2008 R2 Restore COPY_ONLY full backup with transaction logs

Posted: 19 Mar 2013 07:11 PM PDT

After doing some research I cannot seem to find an answer to this question.

Background: I am attempting to set up a backup plan that fits the following four requirements:

  1. Reliability of backups, having nightly full backups
  2. Transaction log backups that could be restored from
  3. Low amount of disk space used
  4. The backups must be accessible locally for an auditing tool

So to fit those needs, I am thinking full backups weekly, differentials daily, and transaction log backups hourly. Then each night a COPY_ONLY backup would run that can be shipped offsite. This backup is done so that the log chain is not broken and we have reliable nightly full backups offsite, without eating up so much local disk space.

Question: Is it possible to restore from the COPY_ONLY backup and restore the transaction logs after it?

Let me just give an example so you know what I am talking about.

Using the below list, I am wondering if it is possible to restore FullbackupCOPY_ONLYC.bak followed by TransactionbackupG.trn, TransactionbackupH.trn, and finally TransactionbackupI.trn.

---List of Backups---
FullbackupA.bak          01/01/2013 00:00:00
DifferntialbackupA.bak   02/01/2013 00:00:00
FullbackupCOPY_ONLYA.bak 02/01/2013 00:00:00
TransactionbackupA.trn   02/01/2013 01:00:00
TransactionbackupB.trn   02/01/2013 02:00:00
TransactionbackupC.trn   02/01/2013 03:00:00
DifferntialbackupB.bak   03/01/2013 00:00:00
FullbackupCOPY_ONLYB.bak 03/01/2013 00:00:00
TransactionbackupD.trn   03/01/2013 01:00:00
TransactionbackupE.trn   03/01/2013 02:00:00
TransactionbackupF.trn   03/01/2013 03:00:00
DifferntialbackupC.bak   04/01/2013 00:00:00
FullbackupCOPY_ONLYC.bak 04/01/2013 00:00:00
TransactionbackupG.trn   04/01/2013 01:00:00
TransactionbackupH.trn   04/01/2013 02:00:00
TransactionbackupI.trn   04/01/2013 03:00:00

Maybe this whole setup is not reasonable I am fairly new to SQL Server and am trying to learn as I go. Any advice/help would be appreciated.
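
For what it's worth: yes, this should work. COPY_ONLY full backups preserve the log chain, so log backups taken after one can be restored on top of it. A hedged sketch using the names from the list above (the database name is a placeholder):

RESTORE DATABASE MyDb FROM DISK = 'FullbackupCOPY_ONLYC.bak' WITH NORECOVERY, REPLACE;
RESTORE LOG MyDb FROM DISK = 'TransactionbackupG.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'TransactionbackupH.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'TransactionbackupI.trn' WITH NORECOVERY;
RESTORE DATABASE MyDb WITH RECOVERY;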

What settings should I use in my.cnf

Posted: 19 Mar 2013 09:01 PM PDT

We have about 5-6 different internal webapps (staff use only) that we will be transferring across to a new virtual server; however, I need to set up our my.cnf settings first, and I am lost (I am a web designer, so this is a little out of my league!).

Our internal systems have about 100 tables apiece, using primarily InnoDB but in some cases MyISAM. Also, one of our webapps has quite a lot of data (100,000+ records) and the SQL queries used to select the data are quite inefficient (few indexes used, inner SELECTs, etc.). There are roughly 50 users at any one time.

Our new server will be the HS-250 from HeartInternet.

Many thanks for the help,
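
As a starting point only, a hedged my.cnf sketch for a mixed InnoDB/MyISAM workload of roughly this size; every value here is an assumption to be tuned against the server's actual RAM and monitoring:

[mysqld]
innodb_buffer_pool_size = 1G    # main InnoDB cache; the single most important setting
innodb_log_file_size    = 128M
key_buffer_size         = 128M  # MyISAM index cache
max_connections         = 150   # ~50 concurrent users plus headroom
tmp_table_size          = 64M
max_heap_table_size     = 64M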

Cant start SQL services

Posted: 19 Mar 2013 04:03 PM PDT

So I am having issues starting my SQL services. Currently the start fails due to:

FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'E:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\modellog.ldf'. Diagnose and correct the operating system error, and retry the operation.

The issue herein lies in the fact that this .ldf file is not on the E: drive; there is no E: drive. The file is located on the F: drive.

I cannot find anywhere to tell SQL Server to change this. I cannot drop/move/detach/etc. the databases, because I can't even get the service to start in order to connect.

This is happening on 3 different instances of SQL, after volume moves. Any ideas?

SQL 2008
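
A hedged sketch of the documented escape hatch for a mislocated model file: start the instance with minimal configuration and trace flag 3608 so only master is recovered, repoint the file, then restart normally (instance name and path are placeholders):

REM from an elevated command prompt
net start MSSQLSERVER /f /T3608

-- then, connected with sqlcmd, repoint the log file:
ALTER DATABASE model
MODIFY FILE (NAME = modellog,
             FILENAME = 'F:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\modellog.ldf');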

SQL Server 2008 Restore

Posted: 19 Mar 2013 06:49 PM PDT

I'm trying to restore from a database backup to SQL Server 2008. The backup came from a different server. I'm executing the following query:

RESTORE DATABASE Database_Name
FROM DISK = 'C:\Path\To\Backup.file'
WITH REPLACE
GO

If the database does not exist, I'm getting the following error:

Msg 3118, Level 16, State 1, Line 1
The database "Database_Name" does not exist. RESTORE can only create a database when restoring either a full backup or a file backup of the primary file.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

The file is a full backup.

If the database does exist, I get the following error:

Msg 3154, Level 16, State 4, Line 1
The backup set holds a backup of a database other than the existing 'Database_Name' database.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

From what I understand, WITH REPLACE is supposed to be a workaround for this.

I'm stumped.

Is it ever harmful to set a database owner in SQL Server?

Posted: 19 Mar 2013 04:32 PM PDT

There are a bunch of databases on one of our SQL Servers that have no owner. Generally speaking, is it harmful to give them one? For example:

USE dbName;
GO

EXEC sp_changedbowner 'sa';
GO

I know sa may not be the best choice, but it is just an example. My primary concern is whether or not creating an owner when there is none may cause issues with software that can currently connect OK.

Custom sp_who/sp_whoUsers

Posted: 19 Mar 2013 02:50 PM PDT

I need to allow a client in a dev DW SQL 2008 R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row as if the user ran the sp themselves with their current permissions.

USE [master]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[usp_who] with execute as owner
AS
BEGIN
    SET NOCOUNT ON;
    exec master.dbo.sp_who;
END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the below instead of calling sp_who, and it only returns one row.

select * from sysprocesses  

It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions to this seemly unique problem?
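
A hedged sketch of the usual fix - module signing. EXECUTE AS inside a module only establishes database-scoped identity unless the database is trusted, which is why the server-level view collapses to the caller's own session. Signing the procedure with a certificate whose login carries VIEW SERVER STATE grants the permission to the code rather than the person (names and password are placeholders):

USE master;
GO
CREATE CERTIFICATE WhoCert
    ENCRYPTION BY PASSWORD = '<strong password>'
    WITH SUBJECT = 'Signature for usp_who';
GO
CREATE LOGIN WhoCertLogin FROM CERTIFICATE WhoCert;
GRANT VIEW SERVER STATE TO WhoCertLogin;
GO
ADD SIGNATURE TO dbo.usp_who
    BY CERTIFICATE WhoCert WITH PASSWORD = '<strong password>';
GO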

sysadmin cannot drop login when ddl audit trigger enabled

Posted: 19 Mar 2013 12:19 PM PDT

I have a DDL trigger defined for database- and server-level events in SQL Server 2008 R2 (Standard) which exists to log all DDL events into an audit table in an admin database. Its only function is to extract the relevant data from EVENTDATA() and insert it into this table. For the purpose of the insert (as only sysadmins can access the admin database), I have created a dedicated SQL login with only INSERT permission on this table, and granted IMPERSONATE on this login to public. This is intended to prevent permissions-related errors when the trigger fires and attempts to insert into the audit table while the caller does not have the necessary access to the database/table.

Here is the trigger definition:

CREATE TRIGGER [ddl_log]
ON ALL SERVER
FOR DDL_DATABASE_LEVEL_EVENTS, DDL_SERVER_LEVEL_EVENTS
AS
BEGIN
    DECLARE @data XML
    SET @data = EVENTDATA()

    EXECUTE AS LOGIN = '<dummy login>'
    INSERT admin.dbo.ddl_audit (PostTime, DB_User, [Event], [TSQL], Host, DatabaseName)
    VALUES (
        GETDATE(),
        CONVERT(NVARCHAR(100), ORIGINAL_LOGIN()),
        @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'),
        CONVERT(NVARCHAR(100), HOST_NAME()),
        @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(100)')
        );
    REVERT
END

I have had no issues with this trigger since implemented months ago. However, now it appears to be preventing even a sysadmin from executing an ALTER LOGIN, DROP LOGIN, etc. under certain circumstances as follows:

My environment also includes MS Dynamics GP 2010 (Great Plains). Great Plains allows an admin to manage users, and for each new Great Plains user, the software creates a SQL login for that user in SQL Server. Resetting a password in the Great Plains interface resets the SQL password. And so forth...

However, even if logged into Great Plains as 'sa', as long as the above trigger is enabled, any attempt to alter or drop a login fails with error 15151 (Cannot alter the login 'loginname', because it does not exist or you do not have permission). If I disable the trigger, everything works normally.

The same operations executed in SSMS, or through some other interface, are successful, even for non-sysadmins who have some level of DDL permissions. It only fails when performed in Great Plains. A profiler trace of the operation shows that GP is merely submitting a standard T-SQL 'ALTER LOGIN' or 'DROP LOGIN' statement, and that the statement correctly shows as called by the sa account. It does not appear that the session ever switched to a different context, other than for the insert into the audit table (which it never actually got to, as no record was logged for the statement). And just in case the session somehow was maintaining the wrong context after that impersonation, I tried making the dummy-insert login a sysadmin with no success.

My question is, are there certain combinations of SET options/connection settings, etc. that could result in this type of behavior or issues with DDL trigger execution that could prevent a sysadmin from performing certain operations? Or is there some other avenue to investigate that I am completely missing?

Create Temp Table From Stored Procedure Result

Posted: 19 Mar 2013 12:23 PM PDT

Is it possible to create a temp table from a stored procedure result set without having to specify the column data types? If not, is it at least possible to create a physical table from the SP? The reason I ask is that I have an SP that returns too many columns for me to declare in a CREATE TABLE command.

Is there something like

create table from

where columns do not need to be specified and SQL automatically determines the column attributes?

I am running SQL server 2008
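
A hedged sketch of one common workaround: SELECT ... INTO infers every column's type from the result set, and OPENROWSET can treat a stored procedure's output as a rowset. It requires the 'Ad Hoc Distributed Queries' option; the server, database and procedure names below are placeholders:

EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;
GO

SELECT *
INTO   #results   -- columns and types are inferred automatically
FROM   OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
                  'EXEC MyDb.dbo.MyWideProc');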

trap xp_logininfo stored procedure error

Posted: 19 Mar 2013 12:03 PM PDT

SQL Server 2000, Windows Server 2003.

I am trying to find all the invalid domain login creates on the SQL 2000 server in a script using xp_logininfo, and I get the message "Could not obtain information about Windows NT group/user". How can I trap this error in T-SQL scripts and continue? Or is there any other way to validate a domain ID against an AD group?

Thanks

Pan

Load sybase .db into new sybase developer server

Posted: 19 Mar 2013 02:06 PM PDT

We've got a few .db files lying around here which were created by Sybase (version 8.x, I guess), and I have the task of extracting some data from those old databases.

Now I've installed the latest Sybase developer version (15.7) and want to add these databases to the sybase server.

Unfortunately I do not have any experience with Sybase, and I couldn't find any documentation for this task in the official Sybase manual.

Is this even possible? Does anyone of you know about an article describing the integration of existing complete databases or the necessary commands for the task?

INNODB real-time locks?

Posted: 19 Mar 2013 11:46 AM PDT

How can I view real-time locks in innodb when information_schema.innodb_locks is not available?
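
A hedged fallback for servers without information_schema.innodb_locks (that view only exists with the InnoDB plugin / MySQL 5.5+): the engine status dump lists current lock waits per transaction in its TRANSACTIONS section.

SHOW ENGINE INNODB STATUS\G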

Investigating high SQL waits, especically LCK_M_U

Posted: 19 Mar 2013 01:26 PM PDT

Recently I have been struggling with SQL Server performance, and although we have fixed a multitude of basic errors in the config, it is still performing badly. To clarify, it is not overall performance but rather fairly frequent timeouts (from the client application). Previously I had looked to memory as a cause, but this has now been resolved and we are still getting the same behaviour.

Looking at the graphs from the Management Data Warehouse, I can see that LCK_M_U/X/IX waits account for the majority of our waits around the time a user experiences a timeout. Everything I read says I need to look at the queries and processes running, but I have yet to find anything aimed at a level I can understand. The locks, as you can see in the picture below, spike at times which coincide with the errors on the users' side. Is there a clever DMV or some such that I can query to work out which statement is creating the lock? Is it a case of trawling through a trace to find the details? Any guidance greatly appreciated, and apologies if the information is not clear.

[Image: Management Data Warehouse chart showing lock-wait spikes coinciding with the user timeouts]
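
A hedged sketch of a DMV query that catches both the blocker and the blocked statement while a spike is happening (SQL Server 2005+):

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS blocked_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;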

Update strings for all cmdexec Jobs

Posted: 19 Mar 2013 11:47 AM PDT

Does anyone know if it's possible, (and furthermore advisable) to update all cmdexec strings in jobs on a server?

We'd like to change our logging location, and in principle I'd think this could be done very simply by changing the /O "\\LOCATION\file.log" strings.

I can see how you'd do that in fact

USE MSDB
GO
update dbo.sysjobsteps
set [command] = replace([command], '\\OLDLOCATION\... ', '\\NEWLOCATION\... ')
WHERE...

BUT I've had it drummed into me that you should never update the sys tables manually, so the idea of doing this makes me beyond nervous. For anything else I'd use the dbo.sp_update_job proc (http://msdn.microsoft.com/en-gb/library/ms188745.aspx for those who've strayed here looking for something else job related), but that procedure doesn't seem to be able to handle this.

I have a hunch that this might possibly be an exception to the "Never Update the Sys Tables" rule, though? I can't imagine that the cmdexec string is implicated in any further relations in the way that, for example, enabled/disabled status evidently is. Wishful thinking?

So I don't know for sure, and I can't seem to find out. Anyone know or have any experience with this?

Thanks Dan
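
A hedged alternative that stays on the supported side: msdb.dbo.sp_update_jobstep accepts a @command parameter, so you can read from sysjobsteps but let the system procedure do the write. A sketch:

DECLARE @job_id UNIQUEIDENTIFIER, @step_id INT, @command NVARCHAR(MAX);

DECLARE step_cursor CURSOR FOR
    SELECT job_id, step_id, command
    FROM msdb.dbo.sysjobsteps
    WHERE subsystem = 'CmdExec'
      AND command LIKE '%\\OLDLOCATION\%';

OPEN step_cursor;
FETCH NEXT FROM step_cursor INTO @job_id, @step_id, @command;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @command = REPLACE(@command, '\\OLDLOCATION\', '\\NEWLOCATION\');
    -- the supported write path, instead of updating sysjobsteps directly
    EXEC msdb.dbo.sp_update_jobstep @job_id = @job_id, @step_id = @step_id, @command = @command;
    FETCH NEXT FROM step_cursor INTO @job_id, @step_id, @command;
END
CLOSE step_cursor;
DEALLOCATE step_cursor;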

PostgreSQL 9.2: vacuum returning disk space to operating system

Posted: 19 Mar 2013 03:11 PM PDT

Vacuum usually does not return disk space to operating system, except some special cases. From the docs:

The standard form of VACUUM removes dead row versions in tables and indexes and marks the space available for future reuse. However, it will not return the space to the operating system, except in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be easily obtained. In contrast, VACUUM FULL actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.

The question is: how can this database state, where one or more pages at the end of a table become entirely free, be achieved? It can be reached via VACUUM FULL, but I haven't got enough disk space to run that. So are there any other possibilities?
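
A hedged aside: whether plain VACUUM can ever hand pages back depends on where the free space sits, and that can be measured first. The pgstattuple contrib module (shipped with 9.2) reports how much of the table is dead or free ('mytable' is a placeholder):

CREATE EXTENSION pgstattuple;
SELECT table_len, dead_tuple_percent, free_percent
FROM pgstattuple('mytable');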

In what data type should I store an email address in database?

Posted: 19 Mar 2013 12:20 PM PDT

I understand that a 254-character email address is valid, but the implementations I have researched tend to use varchar(60) to varchar(80) or equivalent. For example, this SQL Server recommendation uses varchar(80).

Is there a reason to not use the full 254 character maximum? Doesn't a varchar by definition use only as much storage as needed to hold the data?

Are there significant performance implications/trade-offs which cause so many implementations to use less than the full 254 possible characters?

SQL Server in a state of confusion: Does the database exist or doesn't it?

Posted: 19 Mar 2013 01:54 PM PDT

Got a really weird, annoying problem.. Somehow the instance of SQL Server 2008 R2 running on our server has gotten somewhat corrupted.

First, we noticed that the database we created yesterday was missing. So, we looked around and found that it was still there, but detached. So, we tried to attach the mdf but got a message which was something like The file is currently in use.

I thought that was odd, so I restarted SQL Server. Same thing... okay, time for drastic measures: I stopped the service, zipped up the mdf, started the service, unzipped it, and then tried to restore it. The above message was gone, but then I got:

Cannot attach a database with the same name as an existing database  

Ouch. Of course it's not showing in the database explorer, so no idea what's going on... last resort:

DROP DATABASE [DatabaseName]  

Of course that didn't work; that tells me the database does not exist. So I'm stuck... at one point MS SQL thinks the db exists, and at another point it thinks the db does not exist. Obviously it's in a state of confusion.

Has anyone seen this before? Got any ideas on how to fix it?
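
A hedged first diagnostic: ask the catalog what it currently believes, since a database stuck mid-attach or mid-restore can be present in sys.databases while hidden from Object Explorer.

SELECT name, database_id, state_desc
FROM sys.databases
WHERE name = 'DatabaseName';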

Retrieving all PK and FK with pgadmin

Posted: 19 Mar 2013 04:11 PM PDT

I have a big database, and I need to extract all primary keys and foreign keys from each table.

I have pgAdmin III.

Is there a way to do this automatically and not go over each table manually?
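
A hedged sketch that can be run from pgAdmin's SQL window: pg_constraint holds every constraint, and pg_get_constraintdef() reconstructs each full definition.

SELECT conrelid::regclass AS table_name,
       conname AS constraint_name,
       CASE contype WHEN 'p' THEN 'PRIMARY KEY' ELSE 'FOREIGN KEY' END AS constraint_type,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contype IN ('p', 'f')
ORDER BY 1, 3;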

Overview of how MongoDB uses its various threads

Posted: 19 Mar 2013 08:07 PM PDT

On one instance I have MongoDB using ~85 threads. In lieu of having time to investigate directly, I am curious:

  1. What are the threads used for? Do they fall into a few high-level usage categories?
  2. How can I control/influence the number of threads used? E.g. max connection params, etc.
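
On point 2, a hedged note: in this era of MongoDB most threads are per-connection, so the connection cap is the main lever (2.x-era flag shown; newer releases expose it as net.maxIncomingConnections):

mongod --maxConns 200 --dbpath /data/db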

What are the differences between leaf and non-leaf pages?

Posted: 19 Mar 2013 11:36 AM PDT

I've been running some index usage reports, and I'm trying to get a definition of Leaf and Non-leaf. There seem to be both Leaf and Non-leaf Inserts, Updates, Deletes, Page Merges, and Page Allocations. I really don't know what it means, or if one is better than the other.

If someone could give a simple definition of each, and also explain why Leaf or Non-leaf matters, it would be appreciated!

Ms-Access 2007-2010 SQL - from single row record how to create multiple rows

Posted: 19 Mar 2013 12:51 PM PDT

I need help with the below table update, using SQL, in an MS Access 2007-2010 database.

I've a table called table1 which has below entries:

table1:
---------------------------------------
|  col1  |  col2  |       col3        |
---------------------------------------
|   A1   |  abcd  |   123;456;789     |
|   B2   |  efgh  |   246;369         |
|   C3   |  ijk   |                   |
---------------------------------------

Using SQL, I want to change the above table (specifically, splitting the multiple values in column col3 into individual rows) as shown below:

---------------------------------------
|  col1  |  col2  |       col3        |
---------------------------------------
|   A1   |  abcd  |   123             |
|   A1   |  abcd  |   456             |
|   A1   |  abcd  |   789             |
|   B2   |  efgh  |   246             |
|   B2   |  efgh  |   369             |
|   C3   |  ijk   |                   |
---------------------------------------

How do I do it? I have tried various forums but could not come up with an exact solution so far.

Appreciate your help.

Thanks, Jak.

Stress test to saturate memory for MySQL innodb

Posted: 19 Mar 2013 01:30 PM PDT

I'm investigating Windows 32-bit memory problems (the 2G limit) and am after a test which will max out MySQL's memory: the innodb_buffer_pool, per-connection memory, and/or any other uses of memory.

Perhaps a query I could use for mysqlslap?
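
A hedged starting point with mysqlslap (standard flags; concurrency drives the per-connection memory, while the generated workload exercises the buffer pool):

mysqlslap --auto-generate-sql --concurrency=100 --iterations=10 \
          --number-of-queries=100000 --user=root -p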

How to search whole MySQL database for a particular string

Posted: 19 Mar 2013 11:03 AM PDT

Is it possible to search a whole database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table this string is saved. Is it possible?

Using MySQL

I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
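
A hedged admin-friendly trick that avoids writing any per-table SQL: dump the data and let grep locate the string; with one row per INSERT line, the matching table name is visible right in the output.

mysqldump -u root -p --skip-extended-insert A | grep -i 'hello'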

Splitting TempDB into multiples files equal to number of CPUs

Posted: 19 Mar 2013 04:45 PM PDT

The article SQL Server tempdb Best Practices Increase Performance suggests that I should split tempdb into a number of files equal to the number of cores. So for 4 cores you get 4 files.

By having the larger number of files, you can increase the number of physical I/O operations that SQL Server can push to the disk at any one time. The more I/O that SQL Server can push down to the disk level, the faster the database will run. With standard databases, SQL Server can cache a large amount of the data that it needs into memory. Because of the high-write nature of the tempdb, the data needs to be written to the disk before it can be cached back up into memory.

Though it sounds good in theory, is it really that good as a general optimisation? Is it something that may only apply for specific systems where IO is very high?
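
For reference, a hedged sketch of adding tempdb data files (path and sizes are placeholders; keeping all files equally sized matters more than hitting the exact core count, and many DBAs cap the file count at 8 regardless of cores):

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
-- repeat for tempdev3, tempdev4, ... to match the existing file's size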

multivalued weak key in ER database modeling

Posted: 19 Mar 2013 11:44 AM PDT

I was wondering, since I didn't find any clarification for this: I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where each format has its own price and stock quantity, so I came up with this:

[Image: ER diagram of the first design]

Is this correct from a design perspective? Does this imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

[Image: ER diagram of the alternative design]

Thanks in advance.

EDIT: I am adding some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie that exists in the company I need to store format, price and stock quantity. I will also need to store customer information: a unique id, name, surname, address, the movies he/she has already bought, and his/her credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose that other items exist apart from movies) that the customer wants to buy.
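
A hedged sketch of how the first design usually lands in tables (names are illustrative): the format-specific attributes move to a dependent table keyed by (movie_id, format), which is exactly the weak/identifying relationship in the diagram and stores no redundant movie data.

CREATE TABLE movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

CREATE TABLE movie_format (
    movie_id INT          NOT NULL REFERENCES movie(movie_id),
    format   VARCHAR(20)  NOT NULL,   -- 'dvd', 'bluray', ...
    price    DECIMAL(8,2) NOT NULL,
    quantity INT          NOT NULL,
    PRIMARY KEY (movie_id, format)    -- one row per movie/format pair
);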

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 19 Mar 2013 02:44 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past, in Access, I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access db and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. The environment is Win XP, Access 2007, Advantage DB 8.1.
