Wednesday, March 20, 2013

[how to] sql server sum and count

sql server sum and count

Posted: 20 Mar 2013 08:52 PM PDT

Sample data :

LINE_NO  E_FIELD  F_FIELD  G_FIELD  HSA_STATUS  FAMILY  CACHE_FAMILY  Count  Percentage
23053B   00000    00000    00000    S           SUMMIT  WER           43     ??
23053B   00000    00000    00000    T           SUMMIT  WER           144    ??
23053B   00000    00000    00684    T           SUMMIT  WER           2      ??
23053B   00353    00418    00684    T           SUMMIT  WER           1      ??
23053B   00353    00418    00763    T           SUMMIT  WER           1      ??
23053B   00353    00418    01512    T           SUMMIT  WER           1      ??
23053B   00353    00418    06797    T           SUMMIT  WER           1      ??
23053B   00353    00418    30228    T           SUMMIT  WER           1      ??
23053B   00353    00418    31935    T           SUMMIT  WER           2      ??
23053B   05601    01402    00758    T           SUMMIT  WER           1      ??
23053B   05601    01402    09091    T           SUMMIT  WER           1      ??
23053B   05601    01402    65053    T           SUMMIT  WER           1      ??

This is my query:

SELECT LINE_NO,
       E_FIELD,
       F_FIELD,
       G_FIELD,
       HSA_STATUS,
       FAMILY,
       CACHE_FAMILY,
       Count = SUM(TOTAL)
FROM (
    SELECT LINE_NO,
           E_FIELD,
           F_FIELD,
           G_FIELD,
           HSA_STATUS,
           FAMILY,
           CACHE_FAMILY,
           COUNT(LINE_NO) AS Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN ('2013-03-07 10:10:00') AND ('2013-03-08 10:20:00')
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) AS a
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, Total
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, Total

This is my sample data. I have already computed the count, as you can see from the column header. My problem is that I need to sum the counts and, for each row, work out its percentage of that total. For example, the total for the records above is 199 and the first record's count is 43, so the calculation should be 43 / 199 * 100. How can I display the percentage? Please help; I need this urgently.

Is there any suggestion on how to get the count first, then sum those counts, and finally divide each count by the sum and multiply by 100 to get the percentage? Thanks.
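One common shape for this (a hedged sketch using hypothetical table and column names, executed against SQLite through Python purely for illustration; the same SELECT shape works in SQL Server): divide each row's count by a scalar subquery that computes the grand total, multiplying by 100.0 so the division happens in floating point rather than being truncated to an integer.

```python
import sqlite3

# Hypothetical stand-in for the aggregated result of the inner query
# in the question: one count per group.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE summ (line_no TEXT, hsa_status TEXT, cnt INTEGER)")
conn.executemany(
    "INSERT INTO summ VALUES (?, ?, ?)",
    [("23053B", "S", 43), ("23053B", "T", 144), ("23053B", "T", 12)],
)

# Each row's count divided by the grand total; 100.0 (not 100) forces
# floating-point division.
rows = conn.execute(
    """
    SELECT line_no, hsa_status, cnt,
           cnt * 100.0 / (SELECT SUM(cnt) FROM summ) AS percentage
    FROM summ
    """
).fetchall()

for r in rows:
    print(r)
```

SQL Server 2005 and later can also express the total with a window function, `SUM(cnt) OVER ()`, which avoids the scalar subquery.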

sql server - percentage calculation

Posted: 20 Mar 2013 06:39 PM PDT

Sample data :

LINE_NO  E_FIELD  F_FIELD  G_FIELD  HSA_STATUS  FAMILY  CACHE_FAMILY  Count
23053B   00000    00000    00000    S           SUMMIT  WER           43
23053B   00000    00000    00000    T           SUMMIT  WER           144
23053B   00000    00000    00684    T           SUMMIT  WER           2
23053B   00353    00418    00684    T           SUMMIT  WER           1
23053B   00353    00418    00763    T           SUMMIT  WER           1
23053B   00353    00418    01512    T           SUMMIT  WER           1
23053B   00353    00418    06797    T           SUMMIT  WER           1
23053B   00353    00418    30228    T           SUMMIT  WER           1
23053B   00353    00418    31935    T           SUMMIT  WER           2
23053B   05601    01402    00758    T           SUMMIT  WER           1
23053B   05601    01402    09091    T           SUMMIT  WER           1
23053B   05601    01402    65053    T           SUMMIT  WER           1

This is my query:

SELECT LINE_NO,
       E_FIELD,
       F_FIELD,
       G_FIELD,
       HSA_STATUS,
       FAMILY,
       CACHE_FAMILY,
       Count = SUM(TOTAL)
FROM (
    SELECT LINE_NO,
           E_FIELD,
           F_FIELD,
           G_FIELD,
           HSA_STATUS,
           FAMILY,
           CACHE_FAMILY,
           COUNT(LINE_NO) AS Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN ('2013-03-07 10:10:00') AND ('2013-03-08 10:20:00')
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) AS a
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, Total
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, Total

This is my sample data. I have already computed the count, as you can see from the column header. My problem is that I need to sum the counts and, for each row, work out its percentage of that total. For example, the total for the records above is 199 and the first record's count is 43, so the calculation should be 43 / 199 * 100. How can I display the percentage? Please help; I need this urgently.

Communication link failure for some queries to linked server

Posted: 20 Mar 2013 04:28 PM PDT

I am seeing the following error in SSMS (server has Windows Server 2008 and Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 installed) when running some queries against the linked server, in particular long running queries.

Simple selects from tables across the linked server work fine. This is a new issue that was noticed when SPs that have worked for years started failing.

I have run a Wireshark capture on the server, capturing for packets to port 1433 on the linked server host. At the tail of the capture, I see many (10) TCP Keep-Alives being issued (after a message regarding bad checksum) and then an RST packet. The RST packet is correlated with the error below being returned to the client.

There are other database servers on our network, where the linked server is configured identically, that don't exhibit this issue.

I have found some articles such as this and this. We are using the implicated Broadcom NICs. The Chimney Offload State setting is enabled on the server.

We will try disabling. Other thoughts on troubleshooting would be much appreciated.

OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Protocol error in TDS stream".
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 64, Level 16, State 1, Line 0
TCP Provider: The specified network name is no longer available.

When is data moved during an Oracle partition split?

Posted: 20 Mar 2013 06:31 PM PDT

I need to split an Oracle partition, and I'm confused about whether or not Oracle is going to physically relocate the data when I execute the split. Here's what I intend to do:

alter table SCHEMA.DATATABLE
    split partition MAXVAL_PART
    at (TO_DATE(' 2013-04-01 00:00:00',
                'SYYYY-MM-DD HH24:MI:SS',
                'NLS_CALENDAR=GREGORIAN'))
    into (partition PARTFOREXISTINGROWS
              tablespace EXISTINGMAXVALTABLESPACE,
          partition MAXVAL_PART
              tablespace NEWMAXVALTABLESPACE);

The problem is that my current MAXVAL_PART has 320 GB of data in it, and I don't want to physically move this data on the disk. If I cut off the partition at 2013-04-01, then there will be no data in the new partition, but I'm getting conflicting information about whether this will still necessitate a move of all the data. Ideally, Oracle sees that my new MAXVAL partition will be empty, defines it in the new table space, and I'm all done.

Will this command move the data that's already on the disk, or will it leave it in place and just create a new partition?

This article says Oracle will detect the empty partition and do a fast split: http://docs.oracle.com/cd/B19306_01/server.102/b14231/partiti.htm

But I see in a number of places that a split means all data gets moved to new storage. Which is it?

SQL Server 2008R2 DBCC CheckDB never finishing

Posted: 20 Mar 2013 02:20 PM PDT

I'm running SQL Server 2008 R2 on Windows Server 2008 R2, and until recently we were having no problems. After recent Windows updates (no SQL Server updates), we cannot run step 1 of our maintenance plan, DBCC CHECKDB. It does not error out; it just runs indefinitely, the longest time so far being 36 hours. The DB is only about 500 MB, so I don't think size is an issue.

I was just wondering if anyone has seen this sort of issue in the past.

Why does that query cause lock wait timeouts?

Posted: 20 Mar 2013 06:24 PM PDT

From time to time, I find a lot of these errors in my PHP error log:

MYSQL.1213: Deadlock found when trying to get lock; try restarting transaction

The problem persists for about 2 or 3 minutes each time. Thanks to Stack Overflow, the reason was quite easy to find:

------------------------
LATEST DETECTED DEADLOCK
------------------------
130320 15:53:37
*** (1) TRANSACTION:
TRANSACTION 0 83395751, ACTIVE 35 sec, process no 26405, OS thread id 140507872417536 starting index read
mysql tables in use 3, locked 3
LOCK WAIT 3 lock struct(s), heap size 1216, 2 row lock(s)
MySQL thread id 1163191, query id 199629038 localhost sosci Updating
UPDATE `database`.`table` SET `invalidate`='2013-03-21 03:53:02' WHERE ((token='C7G8X3H9ABCDEEFGT') AND (invalidate IS NULL)) AND (project=26118) LIMIT 1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395751 lock_mode X locks rec but not gap waiting
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

What I do not understand is: why? The locked table is very small, only 61 entries (about 30 added and 30 deleted per day; the auto-incremented primary key is near 800,000). No column is especially large.

I use the InnoDB engine for this table (one key refers to another table with approx. 20,000 entries) and the problem occurs from time to time. RAM should not be an issue. The webserver and MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) in large tables (1-2 million entries) did not cause any trouble.

Thanks for your hints!

PostgreSQL: Get single attribute of UDT in SELECT statement

Posted: 20 Mar 2013 03:16 PM PDT

I created a user-defined type in a PostgreSQL 9.2 database and am trying to select only a single attribute of the UDT in a SELECT statement. However, I can't seem to get my code to work.

Creation of type and table:

CREATE TYPE ttp AS (f1 int, f2 int);
CREATE TABLE tbl (ctyp ttp);

The SELECT statement:

-- does not work
SELECT ctyp.f1 FROM tbl;
SELECT ctyp(f1) FROM testtable;

Both times I get an error. Is it possible to access a single attribute defined in a UDT in a SELECT?
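For reference, PostgreSQL can do this, but the composite column reference has to be parenthesized so the parser does not read `ctyp` as a table name. A short sketch against the table defined above:

```sql
-- Parentheses mark ctyp as a column of composite type, not a table alias:
SELECT (ctyp).f1 FROM tbl;

-- Qualifying with the table name also works:
SELECT (tbl.ctyp).f1 FROM tbl;
```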

Optimal drive configuration for SQL Server 2008R2

Posted: 20 Mar 2013 04:22 PM PDT

I have a fairly busy database server running SQL Server 2008 R2 that has the following setup:

  • SATA RAID 1 (2 Drives) - OS / Programs
  • SAS RAID 10 (4 Drives) - SQL database files (data and logs)
  • SAS RAID 1 (2 Drives) - TempDB (data and logs)

Assuming I can't add additional drives into this server, have I made the best use of the configuration I have available? Or should I consider another scheme here where logs are isolated from the data files, for example?

Streaming replication WAL files pg_xlog

Posted: 20 Mar 2013 05:40 PM PDT

We are using this awesome tool repmgr developed by 2nd Quadrant for streaming replication.

We configured 1000 WALs to be archived. What I noticed, though, is that on the master we have about 600 WALs in the pg_xlog directory whilst on the standby we have about 10; yet replication is not lagging, and from the look of the data it appears everything is there, i.e. replication works.

I might be missing something, but I'd expect the same number of WALs on both master and standby?

Is there some magical process which is removing them ?

Need help with syntax of InStr in Access 2000. Trying to find values 0.0 to 9.9 in a text string. [migrated]

Posted: 20 Mar 2013 01:08 PM PDT

I am an Access novice trying to use InStr to find numeric data in a text string.

InStr([start], string_being_searched, string2, [compare])
    [start] defaults to 1
    string_being_searched is [Source Key]
    string2 is a number, a decimal point and a number, between 0.0 and 9.9
    [compare] is optional and I do not think it applies here

My problem is with string2, which returns a 0 for every occurrence when I use

Expr1: InStr([Source Key], '#.#')
or, with double quotes
Expr1: InStr([Source Key], "#.#")

If I specify a literal value such as 1.4 or 3.2, or even a text value such as QJX (not even a number), the search returns the position of the value specified, but I need it to find any two digits with a decimal point between them.

Please can you help as I am really stuck.

Thanks
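A note on why the wildcard attempt fails: InStr performs a literal substring search, so '#.#' is searched for as the three literal characters (# then . then #); in Access, # acts as a single-digit wildcard only inside a Like pattern (e.g. [Source Key] Like "*#.#*"). The digit-dot-digit position logic itself can be sketched with a regular expression; this is illustrative Python, not Access syntax:

```python
import re

def digit_dot_digit_pos(text):
    """Return the 1-based position of the first digit.digit pattern
    (0.0 through 9.9) in text, or 0 if none is found, mirroring the
    way InStr returns 0 for 'not found'."""
    m = re.search(r"\d\.\d", text)
    return m.start() + 1 if m else 0

print(digit_dot_digit_pos("Pump QJX 3.2 rev"))  # 3.2 starts at position 10
print(digit_dot_digit_pos("no match here"))     # 0
```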

When is it appropriate to use SQL Server Developer Edition?

Posted: 20 Mar 2013 04:27 PM PDT

Do you use SQL Server Developer Edition on server-class machines in DEV and STAGING environments?

I am working on a large project where (if it passes the proof of concept stage) we will have several large geographically distributed enterprise class database servers running SQL Server 2012 Enterprise Edition.

Our production environment will initially have 3 servers, our Staging environment will have a minimum of 3 servers, and our Development environment will have 1 server (hosting three instances).

I was under the impression that we would only need to acquire Enterprise licences for the actual production servers, and that we could get by with Developer Edition in our development and staging environments because they are not "production".

Other sources have told me that we need to have an enterprise licence on all of the above mentioned machines, and that the developer edition is only meant for a single developer on their workstation.

Since developer edition has all the sexy features of Enterprise, I can't really see the value of it on a workstation class machine, especially for developing and testing the type of high availability system we are building.

If we have to fork out Enterprise licences for a dev server, that will just about kill our proof of concept stage, thus killing the project. Forcing an enterprise licence on a staging environment will make management just want to skip staging altogether.

Modeling staff dimension in data warehouse

Posted: 20 Mar 2013 12:44 PM PDT

I need to draw a data warehouse diagram, and I've got a problem with the staff area. I have to store information about workers' details, like name, age, etc., and about their job time and agreement details, like job_start, job_end, salary, etc.

Firstly, I tried to draw a separate dimension for each kind of data, but now I wonder whether it should instead be modeled as a connection between these dimensions?

Avoiding repetition without creating a view

Posted: 20 Mar 2013 03:27 PM PDT

Suppose that I have a query Q1 and I need to run a query like the following:

Q1
union
select *
from (some query that uses Q1 outcome)

I would like to do that:

  • Without creating any view
  • Without running the Q1 query twice.

How can I do that on PostgreSQL?
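A common table expression (WITH) appears to satisfy both constraints: it is not a stored view, and in the PostgreSQL versions current at the time of writing a CTE is materialized, so Q1 is evaluated only once. A sketch with a made-up Q1, executed against SQLite (which also supports WITH) purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# The CTE q1 stands in for the expensive query Q1; it is written once
# and referenced twice, once directly and once in the second branch.
rows = conn.execute(
    """
    WITH q1 AS (SELECT x FROM t WHERE x > 1)
    SELECT x FROM q1
    UNION
    SELECT x + 10 FROM q1
    """
).fetchall()

print(sorted(r[0] for r in rows))
```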

How to restore deleted rows from a binary log?

Posted: 20 Mar 2013 01:06 PM PDT

Background

I am trying to recover a few thousand rows that were accidentally deleted from a MySQL database. Luckily replication is enabled, so in the binary logs I have a record of each INSERT, UPDATE and DELETE that was performed on the database since it was created.

Unfortunately, in a previous attempt to recover the lost rows, someone copied all of the INSERTs from the binlogs back into the database. This created an even bigger mess by adding a bunch of duplicates and shifting the ID values, which are set by auto_increment.

Remediation

I created a new recovery database based on a backup that was made a few weeks prior to the incident. I then found the exact point in the binary logs where the backup ended. I extracted all the INSERTs/UPDATEs/DELETEs from the binlog file and fed them into the new recovery database, up until the point where the rows got deleted; this restored the database back to where it was before all the rows had been deleted.

Problem

Because of the first failed recovery attempt, a lot of invalid data was added to the original database, along with new legitimate data. Since one of the columns is set by auto_increment, this creates a mismatch in that column's values when trying to restore the database. I can't simply copy/paste all the missing legitimate binlog SQL statements, because the value of the auto_increment column has shifted due to the invalid data that was inserted, which I don't want.

Example

An example of the binlog file:

INSERT INTO reservations (id, name, date) VALUES (null, "john doe", "2013-03-20");
INSERT INTO reservations (id, name, date) VALUES (null, "jane doe", "2013-03-21");
INSERT INTO reservations (id, name, date) VALUES (null, "garbage", "2013-03-21");
INSERT INTO reservations (id, name, date) VALUES (null, "garbage", "2013-03-22");
INSERT INTO reservations (id, name, date) VALUES (null, "mary jane", "2013-03-23");
UPDATE reservations SET name = "mary janett" WHERE id=5;

OLD database

id, name, date
1, john doe, 2013-03-20
2, jane doe, 2013-03-21
3, garbage, 2013-03-21
4, garbage, 2013-03-22
5, mary janett, 2013-03-23

NEW database

id, name, date
1, john doe, 2013-03-20
2, jane doe, 2013-03-21
3, mary jane, 2013-03-23

Now if I were to copy the binlog SQL statements into the recovery database, ignoring the garbage INSERTs which I don't want, the UPDATE statement would fail because the value of `id` would be different. The same goes for DELETEs and INSERTs that rely on the value of `id`.

Question

Is there an "easy" way to recover all the new legitimate data that was introduced to the database after the incident? The only solution I can think of at the moment is manually going through each INSERT/UPDATE/DELETE and adjusting the value of `id`.

I am hoping there is a better solution, because with thousands of records, this approach would take a long time. Any help is greatly appreciated!
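One semi-automated approach can be sketched (hypothetical, with simplified stand-ins for parsed binlog events; a real solution would need to parse the binlog text itself): replay the INSERTs in order, skip the garbage rows, and keep a map from old auto_increment ids to new ones, so that later UPDATE/DELETE statements can have their ids rewritten.

```python
# Sketch: remap auto_increment ids while skipping unwanted INSERTs.
# The tuples below are simplified stand-ins for parsed binlog events.

inserts = [  # (old_id, name, keep?)
    (1, "john doe", True),
    (2, "jane doe", True),
    (3, "garbage", False),
    (4, "garbage", False),
    (5, "mary jane", True),
]

id_map = {}   # old id in the broken database -> new id in the recovery db
next_id = 1
kept = []
for old_id, name, keep in inserts:
    if not keep:
        continue
    id_map[old_id] = next_id
    kept.append((next_id, name))
    next_id += 1

# A later statement such as: UPDATE reservations SET name=... WHERE id=5
# would then be rewritten to use id_map[5] instead of 5.
print(kept)
print(id_map[5])
```

Running this on the example above maps old id 5 to new id 3, matching the NEW database shown.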

Database and query optimization

Posted: 20 Mar 2013 07:32 PM PDT

I have a database containing three tables: tbl_database (the main, primary table), tbl_cmdatabase and tbl_blacklist. The three tables share the same structure but differ in the number of records they can hold. This is the SQL structure, which is basically the same for all three:

CREATE TABLE IF NOT EXISTS `tbl_ndatabase` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `id_telefonica` int(11) NOT NULL,
  `number` varchar(11) COLLATE utf8_spanish2_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish2_ci AUTO_INCREMENT=1;

Table tbl_database contains ~194,074 records and will continue to grow in large numbers; tbl_cmdatabase contains ~45,742 records and will also grow, though perhaps not as dramatically; and tbl_blacklist has ~92,038 records. I am developing an application where I need to verify that the number I am about to insert into tbl_database is not in tbl_blacklist or tbl_cmdatabase, so I need to check every row for the number (these queries are slow over this many records). Do I need to optimize some tables or change the structure? When I perform these queries the server tends to fall over, as the shared hosting does not support large queries. Can anyone help me with this issue? Any suggestions?

Edit: Added a file for test data
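One way to keep each check cheap (a hedged sketch; it assumes an index on the number column, which the CREATE TABLE above does not define): probe both lookup tables with indexed EXISTS subqueries before inserting. Illustrated here with SQLite via Python; the same EXISTS shape works in MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("tbl_database", "tbl_cmdatabase", "tbl_blacklist"):
    conn.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY AUTOINCREMENT, number TEXT)")
    # An index on number turns each membership probe into an index lookup
    # instead of a full-table scan.
    conn.execute(f"CREATE INDEX idx_{t}_number ON {t} (number)")

conn.execute("INSERT INTO tbl_blacklist (number) VALUES ('0998877')")

def insert_if_allowed(number):
    """Insert into tbl_database only if the number appears in neither
    tbl_blacklist nor tbl_cmdatabase; return True when inserted."""
    blocked = conn.execute(
        "SELECT EXISTS (SELECT 1 FROM tbl_blacklist WHERE number = ?)"
        " OR EXISTS (SELECT 1 FROM tbl_cmdatabase WHERE number = ?)",
        (number, number),
    ).fetchone()[0]
    if blocked:
        return False
    conn.execute("INSERT INTO tbl_database (number) VALUES (?)", (number,))
    return True

print(insert_if_allowed("0998877"))  # False: blacklisted
print(insert_if_allowed("5551234"))  # True: inserted
```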

Retrieving cached plans fails due to cached objects of database Mirror

Posted: 20 Mar 2013 07:58 PM PDT

I'm trying to retrieve cached execution plans for a specific database object, but this fails because of cached plans which belong to a mirrored database in MIRROR state.

The query below, without any additional WHERE clauses, fails with the following SQL error:

Msg 954, Level 14, State 1, Line 1 The database "DatabaseName" cannot be opened. It is acting as a mirror database.

SELECT *
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp

I suspect that the sys.dm_exec_query_plan function first tries to parse all the plan handles from the cache, but fails on cached objects belonging to a mirrored database.

Does anyone know if there are any ways to get around this, T-SQL wise?

Of course I could execute DBCC FREEPROCCACHE to clear the cache; however, I'm hoping for other solutions. I'm a bit amazed this function does not discard objects from mirrored databases while trying to parse the plans.

Fastest way to delete matching rows in MySQL?

Posted: 20 Mar 2013 12:16 PM PDT

I have more than 10 million records in a table. I want to delete the records where an email column contains .ac..

What is the fastest method to perform the task?
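For the set-based baseline, a single DELETE with LIKE does the job in one statement (a sketch via SQLite in Python; on a 10-million-row MySQL table it is common to run the same DELETE in batches with LIMIT so each transaction stays small and replication keeps up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [("alice@example.ac.uk",), ("bob@example.com",), ("carol@uni.ac.jp",)],
)

# LIKE '%.ac.%' matches '.ac.' anywhere in the address. The leading
# wildcard prevents index use, so this is a full scan regardless.
cur = conn.execute("DELETE FROM users WHERE email LIKE '%.ac.%'")
print(cur.rowcount)  # number of rows removed

remaining = [r[0] for r in conn.execute("SELECT email FROM users")]
print(remaining)
```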

help with simple sql queries

Posted: 20 Mar 2013 12:03 PM PDT

Can I get some help, please, with the following query?

Consider the following schema:

Suppliers(sid, sname, saddress)
Parts(pid, pname, color)
Catalog(sid -> Suppliers, pid -> Parts, cost)

sid and pid are foreign keys in Catalog. The primary keys are emphasized, and -> Rel indicates a foreign key relationship with the primary key of Rel.

  1. Find the pids of parts that are supplied by at least two different suppliers
  2. Find the sids of suppliers who supply every red part
  3. Find the sids of suppliers who supply every red part or every green part

Can somebody help me with these queries, please?
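Without writing the homework out in full, the standard shapes can be sketched on toy data (illustrated with SQLite via Python): query 1 is GROUP BY with HAVING COUNT(DISTINCT sid) >= 2, and query 2 is relational division, usually phrased as a double NOT EXISTS ("there is no red part this supplier fails to supply"); query 3 repeats that division for red or for green and combines with OR.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Parts (pid INTEGER PRIMARY KEY, pname TEXT, color TEXT);
CREATE TABLE Catalog (sid INTEGER, pid INTEGER, cost REAL);
INSERT INTO Parts VALUES (1, 'bolt', 'red'), (2, 'nut', 'red'), (3, 'cam', 'green');
INSERT INTO Catalog VALUES (10, 1, 0.5), (10, 2, 0.6), (20, 1, 0.4), (20, 3, 0.9);
""")

# 1. Parts supplied by at least two different suppliers.
q1 = [r[0] for r in conn.execute("""
    SELECT pid FROM Catalog
    GROUP BY pid
    HAVING COUNT(DISTINCT sid) >= 2
""")]

# 2. Suppliers who supply every red part: no red part exists that the
#    supplier does not supply (double NOT EXISTS = relational division).
q2 = [r[0] for r in conn.execute("""
    SELECT DISTINCT c.sid FROM Catalog c
    WHERE NOT EXISTS (
        SELECT 1 FROM Parts p
        WHERE p.color = 'red'
          AND NOT EXISTS (
              SELECT 1 FROM Catalog c2
              WHERE c2.sid = c.sid AND c2.pid = p.pid))
""")]

print(q1)  # part 1 is supplied by suppliers 10 and 20
print(q2)  # only supplier 10 supplies both red parts
```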

Create log file only for my executed queries

Posted: 20 Mar 2013 03:24 PM PDT

I have been using the following to set up the log file:

log_statement = 'all',
log_collector = 'on',
log_destination = 'on',
log_filename = 'filename'

After that, as a test, I executed a query and then viewed the log file under Tools > Server Status. Unfortunately it contains a lot of anonymous queries in addition to the one I ran.

How can I filter out the queries which were executed along with my query (the anonymous queries)? Is there any possibility to log information about the affected tables, like:

  • which table got affected?
  • at what time was the query executed?
  • what fields got affected?

And how can I get these details using queries?
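A side note on the settings quoted above, since a couple of the names look off: in the stock postgresql.conf the collector parameter is spelled logging_collector, and log_destination takes a destination name such as 'stderr' rather than 'on'. A hedged sketch of a working fragment; the log_line_prefix escapes add the timestamp, user and database that the question asks about:

```
# postgresql.conf (sketch)
log_statement = 'all'          # log every statement
logging_collector = on         # capture stderr output into log files
log_destination = 'stderr'     # 'stderr', 'csvlog', 'syslog', ...
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_line_prefix = '%t %u %d '  # timestamp, user, database on each line
```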

Retrieve Log Detail

Posted: 20 Mar 2013 03:10 PM PDT

I have been using the following to set up the log file:

log_statement = 'all',
log_collector = 'on',
log_destination = 'on',
log_filename = 'filename'

After that, as a test, I executed a query and then viewed the log file under Tools > Server Status. Unfortunately it contains a lot of anonymous queries in addition to the one I ran.

How can I filter out the queries which were executed along with my query (the anonymous queries)? Is there any possibility to log information about the affected tables, like:

  • which table got affected?
  • at what time was the query executed?
  • what fields got affected?

And how can I get these details using queries?

PL/SQL trigger error ORA-0001 and ORA-06512

Posted: 20 Mar 2013 04:27 PM PDT

create or replace TRIGGER "SUP"
AFTER INSERT ON "EMP_REPORT"
REFERENCING OLD AS "OLD" NEW AS "NEW"
FOR EACH ROW
DECLARE
    miclaim_supervisor_count number;
    employee_company_code    VARCHAR2(10);
    employee_businessunit    number;
    projMgrs                 NUMBER;
    supId                    NUMBER;

    cursor projMgrsCursor is
        select b.BU_MEMBER_ID
        from BU_MEMBER b, EMP_SUB_DIV s
        where s.EMP_NO = :NEW.EMP_NO
          and s.SUB_DIVISION_CODE = '01' and s.DIV_CODE = '2'
          and b.BU_ID IN (select BU_ID from BU_MEMBER where BU_MEMBER_ID = :NEW.EMP_NO);
BEGIN
    delete from MICL_SUPERVISORS where EMP_NO = :NEW.EMP_NO and IS_OVVERRIDDEN = 0;

    select count(*) into miclaim_supervisor_count
    from MICL_SUPERVISORS
    where EMP_NO = :NEW.EMP_NO and IS_OVVERRIDDEN = 1;

    select COMPANY_CODE into employee_company_code
    from EMPLOYEE_MASTER
    where EMP_NO = :NEW.EMP_NO;

    projMgrs := 0;

    if (employee_company_code = 'F') then
        OPEN projMgrsCursor;
        LOOP
            select micl_sup_id_seq.nextval into supId from dual;
            FETCH projMgrsCursor INTO projMgrs;
            EXIT WHEN projMgrsCursor%NOTFOUND;
            insert into SUP VALUES ((supId), (SELECT SYSDATE FROM DUAL),
                :NEW.ENTRYADDEDBY_EMP_NO, 3000, 0, projMgrs, NULL, :NEW.EMP_NO);
        END LOOP;
        CLOSE projMgrsCursor;
    else
        if (miclaim_supervisor_count IS NULL or miclaim_supervisor_count < 1) then
            insert into SUP VALUES ((:NEW.ID), (SELECT SYSDATE FROM DUAL),
                :NEW.ENTRYADDEDBY_EMP_NO, 3000, 0, :NEW.SUP_EMP_NO, NULL, :NEW.EMP_NO);
        end if;
    end if;
END;

I created this trigger a week ago, with no compilation errors. But when I insert a record into EMP_REPORT, I get the following errors:

INSERT INTO "EMP_REPORT" (ID, ADDEDDATE, ENTRYADDEDBY_EMP_NO, SUP_EMP_NO, EMP_NO)
VALUES ('71', TO_TIMESTAMP('19-MAR-13 09.55.57.983000000 AM', 'DD-MON-RR HH.MI.SS.FF AM'), '81', '841', '5295')
ORA-00001: unique constraint (SYS_C0023329) violated
ORA-06512: at line 1

One error saving changes to table "EMP_REPORT":
Row 51: ORA-00001: unique constraint (SYS_C0023329) violated
ORA-06512: at line 1

I can't figure out where I went wrong. How can I fix this?

Please note that I can't remove the constraint, and it is a primary key.

Segmentation fault (core dumped) with self-complied mysql-5.1.9-beta.tar.gz

Posted: 20 Mar 2013 06:35 PM PDT

I am compiling mysql-5.1.9-beta.tar.gz from source under Ubuntu 12.04. My steps are the following:

tar -xzf mysql-5.1.9-beta.tar.gz
cd /home/gaoyu/mysql-5.1.9-beta
./configure --prefix=/home/gaoyu/mysql-5.1.9 --with-debug
sudo make
sudo make install
sudo cp /home/gaoyu/mysql-5.1.9-beta/support-files/my-medium.cnf /etc/my.cnf
cd /home/gaoyu/mysql-5.1.9
bin/mysql_install_db --user=mysql
sudo cp /home/gaoyu/mysql-5.1.9-beta/support-files/mysql.server /etc/init.d/mysql
sudo bin/mysqld_safe --user=mysql &

The bug is :

gaoyu@FlashGroup:~/mysql-5.1.9$ sudo bin/mysqld_safe --user=mysql &
[1] 26196
gaoyu@FlashGroup:~/mysql-5.1.9$ nohup: ignoring input and redirecting stderr to stdout
Starting mysqld daemon with databases from /home/gaoyu/mysql-5.1.9/var
^C

(here I wait for a very long time, so I must press Ctrl + C to stop it and continue)

gaoyu@FlashGroup:~/mysql-5.1.9$ bin/mysql
Segmentation fault (core dumped)

After I run sudo bin/mysqld_safe --user=mysql &, I have to wait a very long time. Why is that? And why do I get a segfault? Has anyone encountered a similar problem?

MySQL replication between VPS and shared host

Posted: 20 Mar 2013 12:18 PM PDT

I have a VPS where I have ssh access and shared host with CPanel. Both have MySQL.

I need to create a master/slave setup between the two servers. As per this tutorial, How To Set Up Database Replication In MySQL, I will need to modify the my.cnf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

    I have root access with ssh login where I can modify the /etc/my.cnf file.

  • Shared host

    CPanel with phpMyAdmin.

Slave SQL thread hung

Posted: 20 Mar 2013 12:30 PM PDT

We have a master-slave setup with ROW-based replication. We are seeing huge delays on the slave even though there is no activity running on either the master or the slave.

When we looked in, we observed that the SQL thread appears to be hung. It has been in the "Reading event from the relay log" state for the last 3 hours or more.

baleaf:(none)> show processlist;
+--------+-------------+-----------+------+---------+-------+----------------------------------+------+
| Id     | User        | Host      | db   | Command | Time  | State                            | Info |
+--------+-------------+-----------+------+---------+-------+----------------------------------+------+
| 217159 | system user |           | NULL | Connect |  1039 | Waiting for master to send event | NULL |
| 217160 | system user |           | NULL | Connect | 10045 | Reading event from the relay log | NULL |
+--------+-------------+-----------+------+---------+-------+----------------------------------+------+
4 rows in set (0.00 sec)

baleaf:blackarrow_dw> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Queueing master event to the relay log
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.001403
          Read_Master_Log_Pos: 95601911
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 12757514
        Relay_Master_Log_File: binlog.001403
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 32820355
              Relay_Log_Space: 75539220
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 7720
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
1 row in set (0.00 sec)

Could someone please have a look into this?

Partition of tablespace

Posted: 20 Mar 2013 07:45 PM PDT

My Postgres server runs on a Windows server, and three tablespaces have been created. The PostgreSQL server is installed on the C: drive and the tablespaces are on the D: and E: drives. How do I take a backup and restore it? Is it similar to a normal backup, and when restoring on another machine do I need the same drive setup, or can I restore everything to a single drive? Also, from the tablespace folder, is it possible to retrieve the information for only one table, which is huge?

Restore a Database to a New database (T-SQL)

Posted: 20 Mar 2013 08:45 PM PDT

I have a database backup from a database, say SourceData, which needs to be restored as another database with the same schema but a different name (SourceData_1), using T-SQL. I don't know why I get the following error despite specifying the REPLACE option. For example, I used the links below:

Restore Database script

http://msdn.microsoft.com/en-us/library/ms190447(v=sql.105).aspx

Based on the posts I found on the internet, first I got the logical names of the backup file using script below:

RESTORE FILELISTONLY
FROM DISK = 'C:\Backups\SourceData_20121227.BAK'

Then I used the logical names in the following script:

RESTORE DATABASE SourceData_1
FROM DISK = 'C:\DEV\SourceData_20121227.BAK'
WITH
    MOVE 'SourceDate' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1.mdf',
    MOVE 'SourceDate_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1_log.ldf',
    REPLACE
GO

Below is the error I'm getting:

MODIFY FILE encountered operating system error 112 (failed to retrieve text for this error. Reason: 15105) while attempting to expand the physical file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1_log.ldf'.
RESTORE DATABASE is terminating abnormally.

I checked the disk space and can confirm it is fine. I assume the target database should exist before running the restore. Is that right?

Any suggestion how to fix this issue?

Thanks.

Character vs Integer primary keys

Posted: 20 Mar 2013 02:48 PM PDT

I'm designing a database with multiple lookup tables containing possible attributes of the main entities. I'm thinking of using a 4 or 5-character key to identify these lookup values rather than an auto-incrementing integer so that when I store these attribute IDs on the main tables I'll see meaningful values rather than just random numbers.

What are the performance implications of using a character field as a primary key rather than an integer?

I'm using MySQL if that matters.

[Edit]
These lookup tables have new records added infrequently. They are manually maintained, and the character-based keys are manually created as well. Here's an example:

CUISINES
ID     Description
-----  -----------
CHNSE  Chinese
ITALN  Italian
MXICN  Mexican
