Saturday, August 31, 2013

[SQL Server] error: Setuser failed because of one of the following reasons: the database principal 'schemaUser' does not exist,


error: Setuser failed because of one of the following reasons: the database principal 'schemaUser' does not exist,

Posted: 30 Aug 2013 10:38 PM PDT

With reference to http://technet.microsoft.com/en-us/library/ms191534(v=sql.100).aspx, I copied the commands word for word:

[quote]E. Granting VIEW DEFINITION permission on an XML schema collection

SETUSER
GO
USE master
GO
if exists( select * from sysdatabases where name='permissionsDB' )
    drop database permissionsDB
GO
if exists( select * from sys.sql_logins where name='schemaUser' )
    drop login schemaUser
GO
CREATE DATABASE permissionsDB
GO
CREATE LOGIN schemaUser WITH PASSWORD='Pass#123', DEFAULT_DATABASE=permissionsDB
GO
GRANT CONNECT SQL TO schemaUser
GO
USE permissionsDB
GO
CREATE USER schemaUser WITH DEFAULT_SCHEMA=dbo
GO
CREATE XML SCHEMA COLLECTION MySC AS '
<schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://ns" xmlns:ns="http://ns">
    <simpleType name="ListOfIntegers">
        <list itemType="integer"/>
    </simpleType>
    <element name="root" type="ns:ListOfIntegers"/>
    <element name="gRoot" type="gMonth"/>
</schema>'
GO
-- schemaUser cannot see the contents of the collection.
SETUSER 'schemaUser'
GO
SELECT XML_SCHEMA_NAMESPACE(N'dbo',N'MySC')
GO
-- Grant schemaUser VIEW DEFINITION and REFERENCES permissions
-- on the xml schema collection.
SETUSER
GO
GRANT VIEW DEFINITION ON XML SCHEMA COLLECTION::dbo.MySC TO schemaUser
GO
GRANT REFERENCES ON XML SCHEMA COLLECTION::dbo.MySC TO schemaUser
GO
-- Now schemaUser can see the content of the collection.
SETUSER 'schemaUser'
GO
SELECT XML_SCHEMA_NAMESPACE(N'dbo',N'MySC')
GO
-- Revoke schemaUser VIEW DEFINITION permissions
-- on the xml schema collection.
SETUSER
GO
REVOKE VIEW DEFINITION ON XML SCHEMA COLLECTION::dbo.MySC FROM schemaUser
GO
-- Now schemaUser cannot see the contents of
-- the collection anymore.
SETUSER 'schemaUser'
GO
SELECT XML_SCHEMA_NAMESPACE(N'dbo',N'MySC')
GO[/quote]

Everything works until:

[code]SETUSER
GO
REVOKE VIEW DEFINITION ON XML SCHEMA COLLECTION::dbo.MySC FROM schemaUser
GO
-- Now schemaUser cannot see the contents of
-- the collection.
setuser 'schemaUser'[/code]

where I encountered the following error: "Setuser failed because of one of the following reasons: the database principal 'schemaUser' does not exist, its corresponding server principal does not have server access, this type of database principal cannot be impersonated, or you do not have permission."

Why is this so? Is there any solution? Thanks.
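
For anyone diagnosing the same failure, a minimal sketch (using the names from the quoted script) to confirm the principal still exists and who you currently are before the failing SETUSER:

[code]USE permissionsDB
GO
-- confirm the database principal is still there after the REVOKE
SELECT name, type_desc FROM sys.database_principals WHERE name = 'schemaUser'
GO
-- confirm the current security context; SETUSER needs dbo/sysadmin
SELECT SUSER_NAME() AS login_name, USER_NAME() AS user_name
GO[/code]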

[how to] How often does the Impala StateStore refresh?


How often does the Impala StateStore refresh?

Posted: 31 Aug 2013 03:41 PM PDT

I am using Hive and Impala on the same cluster. I find that when I create new tables, the Impala StateStore does not refresh automatically even after a few hours.

I know that I can accomplish this by running "refresh" in impala-shell (in a cron job if need be), but I would like to know: how often does impalad refresh metadata and can I set this interval to be shorter? If so, how?
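
For reference, the cron workaround mentioned above boils down to issuing a refresh per table; a minimal sketch (my_new_table is a placeholder, and the exact refresh syntax may vary by Impala version):

-- e.g. from cron: impala-shell -q "REFRESH my_new_table"
REFRESH my_new_table;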

SQL Server - mirroring 3 SQL Servers

Posted: 31 Aug 2013 07:36 PM PDT

We are developing an application for a company that has 3 branches, and we designed a SQL Server database for it. We could put this server in one of the branches, but the connection between the branches is over the internet and can sometimes be lost, while SQL Server availability is vital. So we decided to mirror the server in each branch. I need a simple solution for configuring these servers to be mirrors of each other, with the same privilege to make changes in the DBs; all we want is to configure 3 SQL Servers with exactly the same data. We read about merge replication, but we want a simpler solution because our scenario is not really enterprise-scale.

How to determine when to reduce physical memory for SQL Server?

Posted: 31 Aug 2013 06:38 AM PDT

If I repeatedly notice SQL Server reporting a lot of free memory, can I likely remove about that amount of memory from the server without affecting performance? Or, can/should it use more for caching?

For this particular example, the server is running SQL Server 2012 Standard Edition on Windows Server 2012. It has 20 GB of physical memory and hosts over 100 GB of data. SQL Server is the only application running on the server.

SQLServer:Buffer Manager
    Buffer cache hit ratio       99.737 %
    Page life expectancy            874 s
SQLServer:Memory Manager
    Database Cache Memory         6,744 MB
    Free Memory                   5,937 MB
    Optimizer Memory                  5 MB
    SQL Cache Memory                 29 MB
    Target Server Memory         19,015 MB
    Total Server Memory          18,407 MB
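
One way to capture those counters directly for trending, a sketch against the standard DMV (counter names can vary slightly by edition):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Free Memory (KB)',
                       'Target Server Memory (KB)', 'Total Server Memory (KB)');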

What exactly is notarization in an Oracle database? [on hold]

Posted: 31 Aug 2013 10:47 AM PDT

Recently, I was reading this article that talks about how SSL (Secure Socket Layer) can be applied to an Oracle database.

Sometimes one of the requirements for obtaining a certificate from a certification authority involves notarization of the certificate request form. I wanted to know what exactly notarization is, and how it can be applied to a database in general.

MongoDB GUI written on Node.js

Posted: 31 Aug 2013 04:30 AM PDT

Maybe I'm missing it, but more than a third of MongoDB instances work with Node.js. I read the official site's page about Admin UIs and the answers on Stack Overflow, and I haven't found any GUI built on Node.js. Why should I install this hateful PHP for RockMongo, or play with Django and Fang of Mongo? Is there really no MongoDB admin UI on Node.js?

what are the ways to use SYSDATE and timestamp pattern? [on hold]

Posted: 31 Aug 2013 04:22 AM PDT

What are the ways to use SYSDATE and the timestamp pattern? I would like to know the importance of SYSDATE and timestamps in the database query language. Database experts, kindly help me. Also, as a beginner to SQL, which website can I refer to and what topics do I need to cover?
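
For a concrete starting point, a minimal Oracle example of both (SYSDATE is a DATE with second precision; SYSTIMESTAMP is a TIMESTAMP WITH TIME ZONE with fractional seconds):

SELECT SYSDATE, SYSTIMESTAMP FROM dual;
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS formatted_now FROM dual;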

I am not able to fetch a record based on a condition [on hold]

Posted: 31 Aug 2013 06:03 AM PDT

I have a table with these records:

0 sales
1 salesOrder
2 SalesInfo
3 SalesDescr
2 Purchase

I want to transform it into another table like this:

0 sales
1 sales salesorder
2 sales salesorder salesInfo
3 sales salesorder salesInfo salesDescr
2 sales salesorder Purchase

In the above, up to the 4th record I can handle it with a loop, but I am not able to produce the 5th record. The best code I have got is below; in it they clear the record:

call function 'BAPI_COSTELEMENTGRP_GETDETAIL'
  exporting
    chartofaccounts = p_kokrs
    groupname       = v_groupname
  tables
    hierarchynodes  = i_hierarchynodes
    hierarchyvalues = i_hierarchyvalues.

if not i_hierarchynodes[] is initial.
  clear : i_group[], k_group.
  loop at i_hierarchynodes.
    case i_hierarchynodes-hierlevel.
      when 0.
        k_group-gname0 = i_hierarchynodes-groupname.
        k_group-desc0  = i_hierarchynodes-descript.
        clear : v_gname0, v_desc0.
        v_gname0 = i_hierarchynodes-groupname.
        v_desc0  = i_hierarchynodes-descript.
      when 1.
        if v_gname1 is initial.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = i_hierarchynodes-groupname.
          k_group-desc1  = i_hierarchynodes-descript.
          v_gname1 = i_hierarchynodes-groupname.
          v_desc1  = i_hierarchynodes-descript.
        else.
          clear : v_gname1, v_desc1, v_gname2, v_desc2, v_gname3, v_desc3.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = i_hierarchynodes-groupname.
          k_group-desc1  = i_hierarchynodes-descript.
          v_gname1 = i_hierarchynodes-groupname.
          v_desc1  = i_hierarchynodes-descript.
        endif.
      when 2.
        if v_gname2 is initial.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = i_hierarchynodes-groupname.
          k_group-desc2  = i_hierarchynodes-descript.
          v_gname2 = i_hierarchynodes-groupname.
          v_desc2  = i_hierarchynodes-descript.
        else.
          clear : v_gname2, v_desc2, v_gname3, v_desc3.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = i_hierarchynodes-groupname.
          k_group-desc2  = i_hierarchynodes-descript.
          v_gname2 = i_hierarchynodes-groupname.
          v_desc2  = i_hierarchynodes-descript.
        endif.
      when 3.
        if v_gname3 is initial.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = v_gname2.
          k_group-desc2  = v_desc2.
          k_group-gname3 = i_hierarchynodes-groupname.
          k_group-desc3  = i_hierarchynodes-descript.
          v_gname3 = i_hierarchynodes-groupname.
          v_desc3  = i_hierarchynodes-descript.
        else.
          clear : v_gname3, v_desc3.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = v_gname2.
          k_group-desc2  = v_desc2.
          k_group-gname3 = i_hierarchynodes-groupname.
          k_group-desc3  = i_hierarchynodes-descript.
          v_gname3 = i_hierarchynodes-groupname.
          v_desc3  = i_hierarchynodes-descript.
        endif.
    endcase.
    append k_group to i_group.
    clear : k_group, i_hierarchynodes.
  endloop.
endif.

Does a re-index update statistics?

Posted: 31 Aug 2013 05:28 AM PDT

I've been doing the MS10775A course this past week and one question that came up that the trainer couldn't answer reliably is:

Does a re-index update the statistics?

We found discussions online arguing both that it does and that it doesn't.
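
One way to settle it empirically rather than argue, a sketch assuming a table dbo.MyTable: compare STATS_DATE before and after each operation.

-- when were the statistics on this table last updated?
SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');

ALTER INDEX ALL ON dbo.MyTable REBUILD;       -- rebuild updates index statistics
-- ALTER INDEX ALL ON dbo.MyTable REORGANIZE; -- reorganize does not

-- run the STATS_DATE query again and compare the dates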

How to make Postgres autovacuum not impact performance?

Posted: 31 Aug 2013 07:11 PM PDT

I am running into problems where Postgres autovacuum processes are interfering with the overall performance of my database. It is making my select, insert, and update queries slower than usual. What are some settings, and optimization strategies that will make autovacuum have less of an impact on my database performance?

Is there a way to make it not consume as much memory/CPU so that client queries can run more efficiently?
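
For what it's worth, per-table autovacuum storage parameters are one knob for throttling it. A sketch, assuming a hot table named big_table; the values are illustrative, not recommendations:

ALTER TABLE big_table SET (
    autovacuum_vacuum_cost_delay   = 20,   -- sleep longer between I/O batches
    autovacuum_vacuum_cost_limit   = 200,  -- do less work before each sleep
    autovacuum_vacuum_scale_factor = 0.2   -- trigger vacuum less often
);

The cluster-wide equivalents (autovacuum_vacuum_cost_delay, autovacuum_max_workers, maintenance_work_mem) live in postgresql.conf.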

ORA-21780: Maximum number of object durations exceeded

Posted: 31 Aug 2013 06:23 AM PDT

I am getting this error when records are streamed over to another database:

Oracle DB version: 11.2.0.3.0

Transaction flow:
DML on Schema1@DB1 is streamed to Schema2@DB2;
then a trigger on that table inserts into Schema3@DB2;
then a trigger on that table inserts into a table in Schema4@DB2.

The ORA-21780 error happens at the last stage.

Please advise.

Targeting MSX with job on Multi-Server Administration

Posted: 31 Aug 2013 02:17 PM PDT

I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:

Server 'XXXXX' is an MSX. Cannot enlist one MSX into another MSX. (Microsoft SQL Server, Error: 14299)

I would think that running these same jobs on the master would not require managing a local job as well as the multi-server jobs.

Any help is appreciated.

FileWatcher Troubleshooting Options

Posted: 31 Aug 2013 11:17 AM PDT

I have set up a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working. I am trying to figure out what I can check to troubleshoot why. After scaling back the code to a bare minimum, here are some things I have already done:

  • Verify that files are being created in the folder monitored - They are.
  • Verify that the procedure can be run by itself - It can.
  • Verify that the File Watcher shows up in DBA_Scheduler_File_Watchers and is enabled. - It does and is.
  • Verify that DBA_Scheduler_Job_Run_Details shows a successful execution - It does NOT show any entries for this job.
  • Check the alert log - Nothing interesting.
  • Check for other trace files - I am getting trace files matching *_j000_*.trc, but they just contain "FILE_TRANSFER error is:" with nothing after it.

Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:

--Create Credential.
BEGIN
    sys.dbms_scheduler.create_credential(
        username => 'oracle',
        password => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        database_role => NULL,
        windows_domain => NULL,
        comments => NULL,
        credential_name => 'TestCredential'
    );
END;
/

CREATE TABLE FilesFound(FoundDate Date);

CREATE OR REPLACE PROCEDURE TestProcedure(iFileInfo In sys.scheduler_filewatcher_result) AS
BEGIN
   INSERT INTO FilesFound VALUES (sysdate);
   COMMIT;
END;
/

BEGIN
   DBMS_SCHEDULER.create_program(
      program_name => 'TestProgram',
      program_type => 'stored_procedure',
      program_action => 'TestProcedure',
      number_of_arguments => 1,
      enabled => False);
   DBMS_SCHEDULER.define_metadata_argument(
      program_name => 'TestProgram',
      metadata_attribute => 'event_message',
      argument_position => 1);
END;
/

BEGIN
   dbms_scheduler.create_file_watcher(
      file_watcher_name => 'TestFileWatcher',
      directory_path => '/u01/test',
      file_name => '*.*',
      credential_name => 'TestCredential',
      destination => NULL,
      enabled => False);
END;
/

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TestJob',
      program_name => 'TestProgram',
      event_condition => NULL,
      queue_spec => 'TestFileWatcher',
      auto_drop => False,
      enabled => False);

   --Enable Everything.
   dbms_scheduler.enable('TestProgram, TestFileWatcher, TestJob');
end;
/

--Set a one minute check interval.
BEGIN
  DBMS_SCHEDULER.set_attribute(
    'file_watcher_schedule',
    'repeat_interval',
    'freq=minutely; interval=1');
END;
/

--Create a file.
DECLARE
  vFile utl_file.file_type;
BEGIN
  EXECUTE IMMEDIATE 'create or replace directory TESTDIRECTORY as ''/u01/test''';
  vFile := utl_file.fopen('TESTDIRECTORY', 'TestFileWatcher.txt', 'w', NULL);
  utl_file.put_line(vFile, 'File has arrived '||SYSTIMESTAMP, TRUE);
  utl_file.fclose(vFile);
END;
/

--Delay to give time for file to appear.
BEGIN
   DBMS_LOCK.SLEEP(120);
END;
/

--Check the table.
BEGIN
   FOR vx IN (select count(*) ct from sys.FilesFound) LOOP
      If (vx.ct = 0) Then
         DBMS_Output.Put_Line('Failed - No File Watcher Activity.');
      Else
         DBMS_Output.Put_Line('Success - File Watcher Activity Found.');
      End If;
   END Loop;
END;
/

--Cleanup.
EXECUTE dbms_scheduler.drop_job(job_name => 'TestJob');
EXECUTE dbms_scheduler.drop_program(program_name => 'TestProgram');
EXECUTE dbms_scheduler.drop_file_watcher(file_watcher_name => 'TestFileWatcher');
EXECUTE DBMS_SCHEDULER.drop_credential(credential_name => 'TestCredential');
drop table FilesFound;
drop procedure TestProcedure;
drop directory TestDirectory;

restrict user host settings to socket connection only

Posted: 31 Aug 2013 01:17 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out, skip-networking can be used to turn off TCP/IP connectivity altogether. But can it be done on a per-account basis?
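
One possibility, assuming your version ships the auth_socket plugin (MySQL 5.5+ on Linux): make root authenticate only via the Unix socket's peer credentials, which a TCP/IP connection cannot satisfy. A sketch:

INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';
-- switch root@localhost to socket-peer authentication; root can then
-- connect only through the local socket, as the OS user 'root'
UPDATE mysql.user SET plugin = 'auth_socket' WHERE User = 'root' AND Host = 'localhost';
FLUSH PRIVILEGES;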

Unable to create a new listener in oracle 11g

Posted: 31 Aug 2013 03:17 PM PDT

In Net Manager I get the error message "A valid logfile is required" when I click on a listener and choose the '+' button, and then it keeps giving me that error when I click on any other tab or button in Net Manager.

Grant access to a table to all users

Posted: 31 Aug 2013 10:17 AM PDT

Is it possible to assign a grant on a table to all users, or a default permission, so that when new users are created they will have the specific SELECT, UPDATE, INSERT and DELETE grants on that table?
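
If the platform supports it, the PUBLIC pseudo-role covers the first half; a sketch (mytable is a placeholder; syntax shown works in Oracle and PostgreSQL):

GRANT SELECT, INSERT, UPDATE, DELETE ON mytable TO PUBLIC;

-- PostgreSQL 9.0+ can also pre-grant on tables created later:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO PUBLIC;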

Performance difference between MySQL and PostgreSQL for the same schema/queries

Posted: 31 Aug 2013 07:14 PM PDT

I'm a newbie DBA, and I have experience in Microsoft SQL Server but I want to jump to FLOSS.

I'm starting a company, and we are developing an app (PHP) with a Postgres backend; we did some tests comparing it with MySQL too. We observe that MySQL is twice as fast as PostgreSQL.

I did a tangible performance test:

  • Same columns in table with equivalent column datatypes.
  • Same number of rows.
  • Same indexes in both (primary key included).
  • The CPU load is idle on both, and the Postgres machine is significantly better hardware.
  • And the same query (obviously).

What am I doing wrong?

P.S: I read many "howtos" on performance tuning for database engines.
P.S(2): We're using InnoDB (one file per table) on the MySQL database.


Hi Mat!

I did the three common select (and hardest) queries.

About the disks: they are certainly not the same; the Postgres machine has an SSD (almost three times faster).

MySQL cache data:

+------------------------------+----------------------+
| Variable_name                | Value                |
+------------------------------+----------------------+
| binlog_cache_size            | 32768                |
| have_query_cache             | YES                  |
| key_cache_age_threshold      | 300                  |
| key_cache_block_size         | 1024                 |
| key_cache_division_limit     | 100                  |
| max_binlog_cache_size        | 18446744073709547520 |
| query_cache_limit            | 1048576              |
| query_cache_min_res_unit     | 4096                 |
| query_cache_size             | 16777216             |
| query_cache_type             | ON                   |
| query_cache_wlock_invalidate | OFF                  |
| table_definition_cache       | 256                  |
| table_open_cache             | 64                   |
| thread_cache_size            | 8                    |
+------------------------------+----------------------+

I don't know how to view this in PostgreSQL.
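
For the PostgreSQL side, the rough equivalents can be read from pg_settings (a sketch):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'maintenance_work_mem');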

Thanks in advance.

Should I keep pg_xlog on the same disk as data if using disk snapshotting?

Posted: 31 Aug 2013 07:22 PM PDT

We're running on EBS volumes on EC2. We're interested in leveraging EBS snapshotting for backups. However, does this mean we'd need to ensure our pg_xlog is on the same EBS volume as our data?

(I believe) the usual reasoning for separating pg_xlog onto a separate volume is for performance. However, if they are on different volumes, the snapshots may be out of sync.
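
One common pattern, sketched below, is to bracket the snapshot with the online-backup calls so the cluster knows to recover from the WAL on restore; whether this is sufficient for EBS-level snapshots of a multi-volume layout is exactly the open question here:

SELECT pg_start_backup('ebs-snapshot', true);
-- trigger the EBS snapshot(s) here
SELECT pg_stop_backup();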

mysql second slave not syncing while first slave works fine

Posted: 30 Aug 2013 10:17 PM PDT

I have a master (m) - slave (s1) setup using mysql 5.1.45

When I try to add a second slave (s2) the slave lags behind and never catches up on the sync.

Even after syncing s2 with the whole system offline (Seconds_Behind_Master = 0), after a few hours s2 falls out of sync again.

The strange thing is that s1 is always in sync.

Any ideas?

SHOW SLAVE STATUS \G  (on slave2)
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: xxx.xxx.xxx.xxx
                  Master_User: xxxx_xxxx5
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.013165
          Read_Master_Log_Pos: 208002803
               Relay_Log_File: xxxxxxxxxx-relay-bin.000100
                Relay_Log_Pos: 1052731555
        Relay_Master_Log_File: mysql-bin.013124
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: xxxxxxxxx
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1052731410
              Relay_Log_Space: 44233859505
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 69594
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:

iperf results between servers:

M -> s2
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    502 MBytes    420 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.05 GBytes    902 Mbits/sec

M -> s1
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec    637 MBytes    534 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    925 MBytes    775 Mbits/sec

vmstat for s2

vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0    268 126568 199100 22692944    0    0   100   836    8   81  1  0 96  3

vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0    268 1150144 197128 21670808    0    0   100   835    9   81  1  0 96  3  0
 0  0    268 1144464 197160 21674940    0    0   644  3096 1328 1602  0  0 97  2  0
 0  2    268 1140680 197176 21679624    0    0   846  5362 1002 1567  0  0 98  2  0
 0  1    268 1135332 197192 21685040    0    0   960  3348  850 1193  0  0 98  1  0
 0  0    268 1130776 197204 21688752    0    0   576  2894  978 1232  0  0 98  2  0
 0  0    268 1127060 197264 21693556    0    0   586  5202 1075 1505  0  0 97  3  0
 0  0    268 1122184 197272 21698412    0    0   896  1160  614  727  0  0 98  1  0
 0  0    268 1118532 197300 21702780    0    0   586  5070 1279 1708  0  0 93  6  0
 0  0    268 1114000 197324 21705820    0    0   402  1522  947  942  0  0 95  4  0
 0  0    268 1109708 197336 21710188    0    0   704  9150 1224 2109  0  0 97  2  0

top output on s2

top - 14:44:25 up 16:36,  1 user,  load average: 1.62, 1.47, 1.42
Tasks: 140 total,   1 running, 139 sleeping,   0 stopped,   0 zombie
Cpu0  :  2.9%us,  1.1%sy,  0.0%ni, 73.8%id, 21.8%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu1  :  0.8%us,  0.3%sy,  0.0%ni, 95.5%id,  3.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.6%us,  0.3%sy,  0.0%ni, 97.7%id,  1.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.5%us,  0.2%sy,  0.0%ni, 98.9%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  24744184k total, 24005508k used,   738676k free,   199136k buffers
Swap:  1050616k total,      268k used,  1050348k free, 22078920k cached

Any ideas?

Is there any chance that the MySQL version is the culprit of all this, in conjunction with the nearly 5-fold increase in traffic to the master?

If that is the case, then why does s1 sync and not s2?

Any ideas whether 5.6.x solves similar problems?

How does MySQL determine the 'index first key' and 'index last key' with indexes?

Posted: 31 Aug 2013 12:17 AM PDT

In other words, how does MySQL find the result range using indexes in the first step when it's executing a select query?

Viewing MySQL Account Resource Limits

Posted: 31 Aug 2013 03:17 AM PDT

Is there any way of viewing an account's remaining resources that are allocated to it? I set up an account that's allowed 7,200 queries an hour. At any point, could I then run a query to find out how many queries it is still allowed?

MySQL must be storing this information somewhere, as FLUSH USER_RESOURCES; will reset the counters. However, I tried a few variants such as SHOW USER_RESOURCES and they don't seem to display anything. I've also hunted around information_schema and the mysql tables.

Is it just not possible to retrieve that information?

heroku pg:info shows 0 tables

Posted: 31 Aug 2013 08:38 PM PDT

My Heroku app is on the dev plan.

The app is usable and works, and if I use pg:backups to download a backup and view it locally, I can see that it has data.

However, when I do heroku pg:info, it gives me 0 tables and 0 rows:

Connections: 0
Created:     2012-10-25 09:18 UTC
Data Size:   6.1 MB
Fork/Follow: Unavailable
PG Version:  9.1.6
Plan:        Dev
Rows:        0/10000 (In compliance)
Status:      available
Tables:      0

What's going on?

When using a master plus multiple slaves, how do I change over from one server to a brand new one with minimal downtime?

Posted: 31 Aug 2013 12:17 AM PDT

In my particular case, I'm using Heroku Postgres (repackaged Amazon EC2 instances running PostgreSQL), but platform-agnostic answers are also good!

Heroku's instructions describe the process for transferring over a single database (i.e. without slaves):

  • You have an existing server, A.
  • Create server B as a follower of A.

    $ heroku addons:add heroku-postgresql:ronin --follow HEROKU_POSTGRESQL_AUBURN  ... Attached as HEROKU_POSTGRESQL_BLUE  
  • Wait for B to be up-to-date with A.

  • Disable the application so no new writes are made to A.
  • Wait for B to fully catch up.
  • Sever the connection between B and A, so B is an independent db.

    $ heroku pg:unfollow HEROKU_POSTGRESQL_BLUE  
  • Reconfigure the app to use B as your database.

    $ heroku pg:promote HEROKU_POSTGRESQL_BLUE  
  • Re-enable the application.

  • Destroy A.

    $ heroku addons:remove HEROKU_POSTGRESQL_AUBURN  

This all works fine if you have a single database, but what happens when you have slaves? Heroku's docs only say that you'll have to re-create the slaves, without going into further depth. (n.b. Once you've done the process above, you can't immediately create the new slaves. The server needs a few hours before it can be forked/followed.)

Now, if your app can handle ditching the slaves entirely, then you can do the following:

  • disable the slaves
  • change over the master server
  • create new slaves
  • enable the new slaves

But if reducing your server farm to just a single server is going to cause the app to crash and burn, you need a way to keep your slaves up and running throughout this whole process!

My first thought was to create a server C, following B, even while B is still following A. When switching over, B becomes the new master and C the new slave. But, alas, Heroku (and presumably also PostgreSQL) doesn't support followers-following-followers.

QUESTION: How do you change over from one whole master/slave set to another set? How do you change over your master without having to run without slaves for an extended period of time?

And just in case it's not clear: I'm not asking about failing over to an existing slave. I want to upgrade all of the servers to a different-sized EC2 instance. And I want to maintain my master/slave setup throughout the process.

Does pt-table-checksum point out rows that are different between Master and Slave?

Posted: 30 Aug 2013 11:17 PM PDT

Trying to verify replication integrity of my replica. And I found that pt-table-checksum is one of the best ways of doing it.

I have a simple Master->Slave set up. Have created a sample table with 10 rows on the Master, which has successfully replicated on the Slave. Intentionally on the slave I have changed a column in one of the rows of that table. Executed -

pt-table-checksum "h=host,u=user,p=password,P=3306" --function=MD5 --databases=test --tables=tentotal  

And indeed, in the checksums table that pt-table-checksum created, I could see different checksums for master and slave for that table. Excellent so far!

Output was -

TS             ERRORS DIFFS ROWS CHUNKS SKIPPED TIME  TABLE
09-05T12:17:30 0      0     3    1      0       0.182 test.tentotal

Strangely, the DIFFS column says 0, which is incorrect. What am I missing here?

Percona tool kit version: 2.1.3
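
For reference, DIFFS is normally populated by checking a replica against checksums written via --replicate; a sketch of the documented per-replica check, assuming the results went to a percona.checksums table:

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE master_cnt <> this_cnt
   OR master_crc <> this_crc
   OR ISNULL(master_crc) <> ISNULL(this_crc)
GROUP BY db, tbl;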

[MS SQL Server] The request failed or the service did not respond in a timely fashion


The request failed or the service did not respond in a timely fashion

Posted: 30 Aug 2013 06:20 AM PDT

Hi, I was trying to install SP2 after installing SQL 2008 R2 (RTM), and I received the following error when I restarted the Reporting Service: "The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details."

I tried the following, but it didn't help:

1) Disable VIA
2) Change the Reporting Service account
3) Different settings for the following:

  • Go to Start > Run > and type regedit
  • Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
  • With the Control folder selected, right-click in the pane on the right and select new DWORD Value
  • Name the new DWORD: ServicesPipeTimeout
  • Right-click ServicesPipeTimeout, and then click Modify
  • Click Decimal, type '180000', and then click OK
  • Restart the computer

Thanks.

[SQL 2012] testing


testing

Posted: 30 Aug 2013 07:15 AM PDT

testing:hehe:

Application Error

Posted: 30 Aug 2013 04:15 AM PDT

One of the applications is getting the error below after being migrated and upgraded to SQL 2008 R2 from SQL 2000. Error: "User Lock not found in dB, System Terminated". The application vendor is helpless. The database was tested with both compatibility modes.

[T-SQL] Is there a way to improve this SELECT statement for performance?


Is there a way to improve this SELECT statement for performance?

Posted: 30 Aug 2013 01:23 AM PDT

Hello all, I have to run this query during inserts. It works fine for thousands of records, but once it hits tens or hundreds of thousands of records the performance is in minutes and not seconds. Is there a way to improve it?

SELECT DISTINCT @SeasonStamp, r.RunnerID, r.RunID AS RunID,
    (SELECT COUNT(DISTINCT RunnerID) FROM RACES WITH (NOLOCK)
      WHERE universalid = r.universalid AND RTRIM(RunID) = RTRIM(r.RunID)
        AND RaceDate >= @CutoffDate) AS NUMOFSponsorS,
    (SELECT COUNT(DISTINCT RaceDate) FROM RACES WITH (NOLOCK)
      WHERE universalid = r.universalid AND RTRIM(RunID) = RTRIM(r.RunID)
        AND RaceDate >= @CutoffDate AND RunnerID LIKE r.RunnerID + '%') AS NUMBEROFinhouseMeets,
    (SELECT COUNT(DISTINCT RaceDate) FROM RACES WITH (NOLOCK)
      WHERE universalid = r.universalid AND RTRIM(RunID) = RTRIM(r.RunID)
        AND RaceDate >= @CutoffDate AND RunnerID NOT LIKE r.RunnerID + '%') AS NUMBEROFoutsideMeets,
    r.universalid
FROM RACES r WITH (NOLOCK)
WHERE r.RaceDate >= @CutoffDate
  AND RunnerID LIKE @Sponsor + '%'
  AND r.ZorX = @ZorX
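
One direction worth sketching: compute all three counts in a single pass per (universalid, RunID) group and join back, instead of running three correlated subqueries per row. This is only an approximation of the original logic - the inner LIKE uses the per-row r.RunnerID, which the sketch replaces with the outer @Sponsor pattern - so treat it as a starting point, not a drop-in:

[code="sql"]
;WITH agg AS (
    SELECT universalid, RunID,
           COUNT(DISTINCT RunnerID) AS NumOfSponsors,
           COUNT(DISTINCT CASE WHEN RunnerID LIKE @Sponsor + '%' THEN RaceDate END) AS InhouseMeets,
           COUNT(DISTINCT CASE WHEN RunnerID NOT LIKE @Sponsor + '%' THEN RaceDate END) AS OutsideMeets
    FROM RACES
    WHERE RaceDate >= @CutoffDate
    GROUP BY universalid, RunID
)
SELECT DISTINCT @SeasonStamp, r.RunnerID, r.RunID,
       a.NumOfSponsors, a.InhouseMeets, a.OutsideMeets, r.universalid
FROM RACES r
JOIN agg a ON a.universalid = r.universalid AND a.RunID = r.RunID
WHERE r.RaceDate >= @CutoffDate
  AND r.RunnerID LIKE @Sponsor + '%'
  AND r.ZorX = @ZorX;
[/code]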

[SQL Server 2008 issues] How to put an If-Else condition to check whether one column value is present in another set of column values in SQL 2008


How to put an If-Else condition to check whether one column value is present in another set of column values in SQL 2008

Posted: 30 Aug 2013 04:36 PM PDT

Hi All,

I have a main data set which consists of lakhs of records, and below is my query:

DECLARE @USER_DT DATETIME
SET @USER_DT = '2013-08-01'
DECLARE @RANGE_DATE DATETIME

SELECT DISTINCT @RANGE_DATE = RANGE_DATE
FROM dbo.TABLE
WHERE RANGE_DATE >= (CONVERT(VARCHAR(8), DATEADD(M, -2, GETDATE()), 21) + '26')
  AND RANGE_DATE <= (CONVERT(VARCHAR(8), DATEADD(M, -1, GETDATE()), 21) + '25')

IF (@USER_DT = @RANGE_DATE)
BEGIN
    SELECT * FROM MAIN_TABLE
    WHERE USER_DATE = @USER_DT
      AND RANGE_DATE >= (CONVERT(VARCHAR(8), DATEADD(M, -2, GETDATE()), 21) + '26')
      AND RANGE_DATE <= (CONVERT(VARCHAR(8), DATEADD(M, -1, GETDATE()), 21) + '25')
END
BEGIN
    SELECT * FROM MAIN_TABLE
    WHERE USER_DATE = @USER_DT
      AND RANGE_DATE >= (CONVERT(VARCHAR(8), DATEADD(M, -1, GETDATE()), 21) + '26')
      AND RANGE_DATE <= (CONVERT(VARCHAR(8), DATEADD(M, 0, GETDATE()), 21) + '25')
END

@USER_DT is the user's selection in the SSRS report window; say the user has entered '2013-08-01'. The second parameter, @RANGE_DATE, was created to store distinct date values. My problem is that I want to check whether the user-entered date is present in @RANGE_DATE; if it is, as shown in the query above, I get the desired output. But we can only check a single value in If-Else, and I want to check whether a single date value is present and display the output accordingly. Is there any way to do this? (I am later going to put this in an SSRS report.)
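
If the goal is just "is the user's date present in that range?", IF EXISTS may be all that's needed; a sketch against the same tables:

[code="sql"]
IF EXISTS (SELECT 1 FROM dbo.[TABLE] WHERE RANGE_DATE = @USER_DT)
BEGIN
    SELECT * FROM MAIN_TABLE
    WHERE USER_DATE = @USER_DT
      -- ... previous month window ...
END
ELSE
BEGIN
    SELECT * FROM MAIN_TABLE
    WHERE USER_DATE = @USER_DT
      -- ... current month window ...
END
[/code]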

Is partitioning ever simple?

Posted: 30 Aug 2013 03:12 PM PDT

Hi,

Is partitioning ever simple? I have a web report that I want to retrieve data as quickly as possible. The report is driven by a view of current-year transactions - in December it will be about 20 million rows. Instead of taking 5 seconds to get 10k rows, I want it to take 1 second. Instead of 1 second for 200 rows, I want it to be instant. Sorry for the book I've written below; I tried to provide all the information that might help understand the situation.

The report view has three tables. The main table has an account (~700 of these), cost center (~7,000 of these), several descriptive text fields and a few numeric columns (dollars, quantities). The second and third tables are flat-file hierarchies for the cost centers and accounts respectively.

On the first table I have a clustered index on cost center, account. On the second and third tables I have primary keys on cost center and account respectively.

The account and cost center tables change every month or two, and when they do, they are completely overwritten. The main table with texts, quantities and amounts is written from scratch once per day and swapped out instantly (the view points at a synonym). The web report ALWAYS filters on the account sub-category (about 70 of these) and returns all of the descriptive text fields and numbers. It almost always also filters on the cost center or one of the 8 levels of cost center hierarchy.

Having heard the word "partition" but knowing nothing about it, I was thinking that my report might be faster if I either:

a) write the sub-account into my main table and create a clustered index on sub-account, cost center (instead of clustered cost center, account and joining on the account table to get sub-account); or
b) write the sub-account into my main table and partition the table on sub-account; or
c) change the web report query so that it looks at not only account sub-category (~70 of these) but also account category (8 of these), and write out 8 separate tables, one for each account category.

Before looking into anything at all, I figured that b) would be the most straightforward, as I probably just needed to "add a partition" after each time I drop/recreate my main table and before populating it, the same way I currently add an index. I figured it would be as simple as 'create partition over [account_sub_level]' on my table and SQL Server would figure it out... But after an hour or two of reading online, it sounds like it is WAY more complicated than I thought, and it may require permissions that I don't necessarily have. I might end up trying to figure it out for a couple of days and realizing I can't do it. Maybe, given the extent of what I am trying to accomplish, I would be better off trying a) and, if that doesn't work, c)...

What do you think? Thanks for any advice!

Tai
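
For a feel of what option b) actually involves, the minimal moving parts are a partition function, a partition scheme, and an index built on the scheme. A sketch with made-up boundary values and names, not a recommendation for this workload:

[code="sql"]
CREATE PARTITION FUNCTION pf_subaccount (int)
    AS RANGE RIGHT FOR VALUES (10, 20, 30);

CREATE PARTITION SCHEME ps_subaccount
    AS PARTITION pf_subaccount ALL TO ([PRIMARY]);

CREATE CLUSTERED INDEX cix_main
    ON dbo.MainTable (sub_account, cost_center)
    ON ps_subaccount (sub_account);
[/code]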

SQLSERVER 2008 : Rows into Comma seperated values

Posted: 30 Aug 2013 11:59 AM PDT

[code="sql"]declare @year int = 2013, @week int = 27
declare @dte date
select @dte = dateadd(week, @week - 1, dateadd(year, @year - 1900, 0))

SELECT *
FROM
(
    SELECT [DATE] = DATEADD(DAY, n, dateadd(day, (datediff(day, '17530107', @dte) / 7) * 7, '17530107'))
    FROM ( VALUES (0), (1), (2), (3), (4), (5), (6) ) num (n)
) d
WHERE [DATE] >= dateadd(year, @year - 1900, 0)
AND [DATE] <= dateadd(year, @year - 1900 + 1, -1)[/code]

My output needs to be as follows:

[2013-06-30],[2013-07-01],[2013-07-02],[2013-07-03],[2013-07-04],[2013-07-05],[2013-07-06]

Note: I need the [ ] symbols as well. Can anyone help me with this?
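
One common way to collapse the rows into that bracketed CSV is FOR XML PATH; a sketch reusing the @dte declaration and derived table from above:

[code="sql"]
SELECT STUFF((
    SELECT ',[' + CONVERT(char(10), d.[DATE], 120) + ']'
    FROM (
        SELECT [DATE] = DATEADD(DAY, n, dateadd(day, (datediff(day, '17530107', @dte) / 7) * 7, '17530107'))
        FROM ( VALUES (0), (1), (2), (3), (4), (5), (6) ) num (n)
    ) d
    ORDER BY d.[DATE]
    FOR XML PATH('')
), 1, 1, '') AS week_csv;
[/code]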

Test and document recovery scenarios

Posted: 30 Aug 2013 11:01 AM PDT

I have to test and document possible recovery scenarios in SQL Server 2008 R2. I can think of the following. Can you please suggest more scenarios?

1. restore a database that was accidentally deleted
2. recover the database to a point in time
3. recover the database if the mdf file is deleted
4. recover the database if the ldf file is deleted
5. restore a table that was accidentally deleted from a database

Thanks

List objects to move

Posted: 30 Aug 2013 06:46 AM PDT

In one database (Order), all my objects for an app have to move to another server (some objects in this db are not from my app). The SQL Server administrator wants me to list all objects (tables, views, functions and stored procedures). Is there an easy way to select the objects and generate a script for the administrator?
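
A starting point for the inventory half (a sketch; the type filter may need tweaking), after which SSMS's Generate Scripts wizard can produce the actual script:

[code="sql"]
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type IN ('U', 'V', 'P', 'FN', 'IF', 'TF')  -- tables, views, procs, functions
  AND o.is_ms_shipped = 0
ORDER BY o.type_desc, s.name, o.name;
[/code]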

Reporting Services scheduling- hourly between 8 and 5 impossible?

Posted: 04 Jun 2012 05:14 AM PDT

Trying to get a report to run daily between certain hours. I can set a start time and an "end date" in Report Manager, but as far as I can tell, I can't say "run hourly every day from X to Y". Am I missing something? If this isn't doable from RS, can I just find the associated job in the Agent and change that job's schedule?

how to delete a log file?

Posted: 29 Aug 2013 09:48 PM PDT

Hi All, I have a production database with two log files. The second ldf file points to a wrong drive, and we don't want a second ldf file; hence I need to delete the second log file. Can anyone please help me achieve this?
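
The usual shape of the fix, as a sketch (MyDb and MyDb_log2 are placeholder names, and the second file must hold no active log - log backups and a shrink may be needed first):

[code="sql"]
USE MyDb;
DBCC SHRINKFILE (N'MyDb_log2');              -- empty the file if possible
ALTER DATABASE MyDb REMOVE FILE [MyDb_log2];
[/code]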

SQL only sampled 1 row for 47,000,000 table for a user created statistic

Posted: 29 Aug 2013 10:42 PM PDT

I have an unusual problem which caused major performance issues with our application. A user-created statistic on the main table in our database hit the 20% modification threshold, so SQL refreshed the statistics. It sampled only 1 row.

Before:

StatisticName: _dta_stat_244664415_1_2_15_7_3_8 (User Created, NoRecompute: 0)
LastUpdated: 2013-03-17 16:31:47
Rows: 39176591, RowsSampled: 34, UnfilteredRows: 39176591
RowModifications: 7834425, ModificationPercentage: 20, Steps: 28

After:

StatisticName: _dta_stat_244664415_1_2_15_7_3_8 (User Created, NoRecompute: 0)
LastUpdated: 2013-08-29 17:02:25
Rows: 46999958, RowsSampled: 1, UnfilteredRows: 46999958
RowModifications: 0, ModificationPercentage: 0, Steps: 1

Once the issue was identified, an UPDATE STATISTICS ... WITH FULLSCAN fixed the problem. Does anyone know a valid reason why SQL would have sampled only 1 row? Personally I think this is a bug in the SQL engine.

sql 2008 r2 ole db driver

Posted: 29 Aug 2013 11:54 PM PDT

I installed the native client for SQL 2008 R2, and I have universes which depend on the OLE DB driver, but I don't know how to install it or where to get it. Can anyone recommend a link?

Regards, Shaun

Friday, August 30, 2013

[SQL Server] I have written a Stored Procedure having 3 nested while loops in it. This is working fine and I am getting the desired result as I have tested this on a small set of data, but while running on a large set of records it is taking a huge amount of time, might be due to multiple nested while looping. Co


I have written a Stored Procedure having 3 nested while loops in it. This is working fine and I am getting the desired result as I have tested this on a small set of data, but while running on a large set of records it is taking a huge amount of time, might be due to multiple nested while looping. Co

Posted: 29 Aug 2013 11:18 PM PDT

Hi Guys,

I have written a stored procedure with 3 nested WHILE loops in it. It works fine and I get the desired result, as I have tested it on a small set of data; but when running on a large set of records it takes a huge amount of time, probably due to the multiple nested WHILE loops. Could someone please help me optimize this SP or eliminate some of the looping without affecting its business logic, or suggest any other solution?

Here is my business logic: the loop calculates the number of times a customer has visited a particular city in a given time span (DateProvided to till date). Below are the pseudo code, sample data and sample results for your reference.

Pseudo code:

1. Select each customer in a table. E.g. there are 5 customers in a table, with Ids 1, 2, 3, 4, 5.
2. Select each city for a particular customer. E.g. the customer whose Id is 1 travels to 3 different cities, let's say Mumbai, Delhi and Bangalore.
3. Now I have to calculate the visiting status of those customers based on the DateProvided column to till date, for each city:
   a. If the customer visited that city within one year from date provided to till date, then M1 for that city.
   b. If the customer visited that city within two years from date provided to till date, then M2 for that city.
   c. If the customer visited that city more than 3 years from date provided to till date, then M3 for that city.

Sample data:

Customer  City       DateProvided
1         Mumbai     12/02/2011
          Delhi      07/30/2008
          Delhi      05/18/2009
          Bangalore  04/13/2012

Expected result:

Customer  City       Status
1         Mumbai     M2
          Delhi      M3
          Delhi      M3
          Bangalore  M1
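
Since the status only depends on how far DateProvided lies from today, a set-based sketch can replace the loops entirely (the table name dbo.CustomerVisits and columns are assumed from the sample):

[code="sql"]
SELECT Customer, City,
       CASE
           WHEN DateProvided >= DATEADD(YEAR, -1, GETDATE()) THEN 'M1'
           WHEN DateProvided >= DATEADD(YEAR, -2, GETDATE()) THEN 'M2'
           ELSE 'M3'
       END AS Status
FROM dbo.CustomerVisits;
[/code]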

Help

Posted: 30 Aug 2013 08:27 AM PDT

How do I unlink a linked server? I have two database servers and I just want to unlink them from each other.
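
A minimal sketch of the usual approach (the linked server name is a placeholder):

[code="sql"]
EXEC sp_dropserver @server = N'MyLinkedServer', @droplogins = 'droplogins';
[/code]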

Clause Group By

Posted: 30 Aug 2013 12:50 AM PDT

Hi all, I hope you can help. I tried this SQL query:

[code]SELECT [NAMES], [NUMBER]
FROM [CV].[dbo].[T40]
WHERE [NUMBER] = '44644'
GROUP BY [NAMES], [NUMBER];[/code]

The output is:

[CODE]NAMES          NUMBER
BENCORE S.R.L. 44644
BENCORES.R.L.  44644[/CODE]

I need instead this other output; can you help me?

[code]NAMES          NUMBER
BENCORE S.R.L. 44644[/code]
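
If the two rows really are the same name modulo spacing, one sketch is to group on a whitespace-normalised key (this assumes the only difference between the rows is spaces):

[code="sql"]
SELECT MIN([NAMES]) AS [NAMES], [NUMBER]
FROM [CV].[dbo].[T40]
WHERE [NUMBER] = '44644'
GROUP BY REPLACE([NAMES], ' ', ''), [NUMBER];
[/code]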

5nf

Posted: 29 Aug 2013 04:56 PM PDT

Can anybody explain the 5th normal form (5NF) to me? I have searched the internet but I am not convinced! Please help.

Thanks and regards
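
The textbook illustration is the supplier/part/project case: a three-column fact table is in 5NF only when it cannot be split into smaller projections and rejoined without producing spurious rows, i.e. every join dependency is implied by the candidate keys. A sketch of the decomposed form:

[code="sql"]
CREATE TABLE SupplierPart    (supplier_id int, part_id int,     PRIMARY KEY (supplier_id, part_id));
CREATE TABLE PartProject     (part_id int,     project_id int,  PRIMARY KEY (part_id, project_id));
CREATE TABLE ProjectSupplier (project_id int,  supplier_id int, PRIMARY KEY (project_id, supplier_id));
-- 5NF holds for the original supplier-part-project table exactly when
-- joining these three projections reproduces it with no extra rows.
[/code]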

Help

Posted: 29 Aug 2013 09:55 PM PDT

I am a DBA with two years of experience on SQL Server 2008 R2. I have a question: should I stick to this field, or switch to BI as a SQL developer learning SSIS? So friends, please give suggestions :-)

Sql query Help

Posted: 30 Aug 2013 06:21 AM PDT

Hello all, I have a view which displays duplicate ID numbers. Is there some way I can alter my view so it doesn't pull up duplicate IDs? Any help would be greatly appreciated. Thanks :-)

[code="sql"]SELECT DISTINCT
    CAST(NAME_MASTER.ID_NUM AS VARCHAR) AS 'ID_NUM',
    NAME_MASTER.PREFERRED_NAME,
    NAME_MASTER.FIRST_NAME,
    NAME_MASTER.LAST_NAME,
    NAME_MASTER.MIDDLE_NAME,
    IND_POS_HIST.POS_TITLE 'POSITION_TITLE',
    IND_POS_HIST.POS_START_DTE,
    EMPL_MAST.TERMINATION_DTE,
    CAST(IND_POS_HIST.SUPER_ID_NUM AS VARCHAR) AS 'SUPERVISOR_ID',
    EMPL_MAST.UDEF_5A_1 AS 'DEPT_CODE',
    BIOGRAPH_MASTER.BIRTH_DTE,
    ADDRESS_MASTER.ADDR_LINE_1 AS 'EMAIL',
    mse_cardsystem_swipevalue.swipe_value
FROM NAME_MASTER
    INNER JOIN IND_POS_HIST ON NAME_MASTER.ID_NUM = IND_POS_HIST.ID_NUM
    INNER JOIN EMPL_MAST ON NAME_MASTER.ID_NUM = EMPL_MAST.ID_NUM AND IND_POS_HIST.ID_NUM = EMPL_MAST.ID_NUM
    INNER JOIN BIOGRAPH_MASTER ON NAME_MASTER.ID_NUM = BIOGRAPH_MASTER.ID_NUM
    INNER JOIN ADDRESS_MASTER ON NAME_MASTER.ID_NUM = ADDRESS_MASTER.ID_NUM
    INNER JOIN mse_cardsystem_swipevalue ON NAME_MASTER.ID_NUM = mse_cardsystem_swipevalue.id_num
WHERE ADDRESS_MASTER.ADDR_CDE = '*EML'
    AND IND_POS_HIST.POS_STS = 'P'
    AND EMPL_MAST.ACT_INACT_STS = 'A'
    AND IND_POS_HIST.POS_TITLE != 'Title'[/code]
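
One sketch: rank the rows per ID and keep only the first (MyView stands in for the view above; the ORDER BY choice decides which duplicate survives):

[code="sql"]
SELECT *
FROM (
    SELECT v.*,
           ROW_NUMBER() OVER (PARTITION BY v.ID_NUM
                              ORDER BY v.POS_START_DTE DESC) AS rn
    FROM MyView AS v
) AS x
WHERE x.rn = 1;
[/code]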

[how to] MyISAM vs InnoDB for Forums


MyISAM vs InnoDB for Forums

Posted: 30 Aug 2013 04:38 PM PDT

I want to design a forum script with MySQL. Which storage engine would have better performance: MyISAM or InnoDB?

How do you get the graph plugin for Squirrel SQL?

Posted: 30 Aug 2013 05:02 PM PDT

I installed Squirrel SQL and the website claims the Graph plugin is part of the install. It is not available nor is it in the jar. How do I get the graph plugin?

Lookup data while in mysql cli edit mode (external editor) (\e)

Posted: 30 Aug 2013 09:07 PM PDT

I'm using the mysql cli in edit mode (\e).

> Select * from \e  

Goes to vim and I finish typing my command.

Select * from test_table;  

Only to realize I've forgotten to use the database I need. Is there a way I can preserve this script in edit mode, switch back to the regular mysql mode, run my command, and then come back to edit?

Another example would be: I forget how a column name is spelled and I need to look it up after I've started typing a command in edit mode.

Thanks!

How to -create- performance issues? - oracle [on hold]

Posted: 30 Aug 2013 01:16 PM PDT

I want to dedicate some time to learning more about performance and tuning issues. I have assigned a clean DB for that. I want to know how I can load into it some data and performance-problem queries/DML/DDL. Do you know some scripts that can cause/generate such problems (purposely or not)?

The idea is to learn the basics (and maybe more) of EM/AWR/ADDM etc., tuning- and performance-wise.

Books are great, but with the knowledge I have, I need some actual work on this.

Combine - Select Into, with Insert Into Select

Posted: 30 Aug 2013 01:12 PM PDT

I've read many explanations like this one here that say "Select Into ..." is to create new tables and "Insert Into ..." is to append to existing tables.

I'm automating a coworker's sql scripts. Currently these sql scripts create new tables (assuming they don't exist) using Select Into, and this is causing a dilemma. When the automation fires off the second time, I get an error because the table already exists and, consequently, the second round of data isn't inserted.

I'd prefer not to tell my coworker to rewrite his hundreds of lines of code by specifying all the column names twice in all his scripts. Is there some minimalist way I can combine the idea of the "Select Into" and the "Insert Into" into a single query, without explicitly duplicating all the column names? maybe like "Select Into ... On Error " or something like that?
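
One minimalist sketch (dbo.Target and dbo.Source are placeholders): let SELECT INTO create the empty shell once, copying the column definitions, then always INSERT; no column list ever needs to be written out.

IF OBJECT_ID(N'dbo.Target', N'U') IS NULL
    SELECT * INTO dbo.Target FROM dbo.Source WHERE 1 = 0;  -- shape only, no rows

INSERT INTO dbo.Target
SELECT * FROM dbo.Source;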

ORA-7445 Error on 12c

Posted: 30 Aug 2013 02:53 PM PDT

I am installing Oracle 12.1.0.1 Enterprise Edition on RHEL 6.4. I set it up with ASMM and HugePages (64 GB of RAM).

There may be a correlation between the errors and signing in to em express (not 100% certain yet). I mention this because of the following blog post with a similar error: http://surachartopun.com/2013/06/learn-something-wtih-12c-and-enjoy-bug.html

I have opened an SR already but wanted to see if anyone had thoughts on this.

Fri Aug 30 14:53:22 2013
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x5] [PC:0xB71F161, qervwRowProcedure()+113] [flags: 0x0, count: 1]
Errors in file /home/oracle/diag/rdbms/prdtdr1m/prdtdr1m/trace/prdtdr1m_s000_5987.trc  (incident=33811):
ORA-07445: exception encountered: core dump [qervwRowProcedure()+113] [SIGSEGV] [ADDR:0x5] [PC:0xB71F161] [Address not mapped to object] []
Incident details in: /home/oracle/diag/rdbms/prdtdr1m/prdtdr1m/incident/incdir_33811/prdtdr1m_s000_5987_i33811.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Fri Aug 30 14:53:24 2013
Dumping diagnostic data in directory=[cdmp_20130830145324], requested by (instance=1, osid=5987 (S000)), summary=[incident=33811].
Fri Aug 30 14:53:33 2013
Sweep [inc][33811]: completed
Sweep [inc2][33811]: completed

Execute a script with SQLPlus containing html entity mappings

Posted: 30 Aug 2013 11:40 AM PDT

When executing a script with SQL*Plus, I was prompted to enter values for lt, gt, and apos.

Here are some code examples in this script.

v_qry := replace(v_qry, '&lt;', '<');
v_qry := replace(v_qry, '&gt;', '>');
v_qry := replace(v_qry, '&apos;', '''');

How to execute this script without being prompted to enter values?

Thank you Frank
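
For reference, the prompting comes from the SQL*Plus substitution character &; a sketch of the standard workaround:

SET DEFINE OFF
@your_script.sql
SET DEFINE ON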

Tools to generate MySQL database migration script on Ubuntu [on hold]

Posted: 30 Aug 2013 10:37 AM PDT

I am basically looking for tools which can generate a database migration script. My team maintains an application written in Java which uses Hibernate to map a MySQL database to Java classes. We often change the application and also modify the class definitions. Once we build the application, it provides the DB schema. We generally look for schema differences and manually create a database migration script. This process is error-prone and takes some time to get right. We are looking for tools on Ubuntu which can take the schema files and generate the migration script. Our main focus is schema migration; if the tool also helps migrate data, that would be great.

Any help will be greatly appreciated. Thanks.

EDIT

Currently I am looking at mysqldiff(libmysql-diff-perl). But I am not able to install this package.

sudo apt-get update
sudo apt-get install libmysql-diff-perl

It throws the following error.

Reading state information... Done
E: Couldn't find package libmysql-diff-perl

Connecting to Oracle Database with ODBC/Python

Posted: 30 Aug 2013 12:25 PM PDT

We've recently installed a new piece of software that uses an Oracle Database to store the data. My goal is to connect to that database and pull the data from the back end to do analysis on. I'm having the world's most difficult time connecting to it.

I need to access it two ways:

  1. ODBC - I would like to setup an ODBC connection in Windows 7, so that I can pull the information using Excel. I know how to setup the connection, but I cannot for the life of me manage to setup the Oracle Drivers correctly. All the links from documentation for this piece of software go to dead/changed pages. Apparently the Oracle website has changed since this documentation was written.

  2. Python - From what I can tell, cx_Oracle seems to be the best module, but again I can't seem to set it up correctly. And again, most documentation refers to sections of the Oracle website that has since changed. I'm not sold on using cx_Oracle if someone has a better recommendation.

Thanks for looking over this, and I appreciate your help.

How to prevent "ibdata files do not match the log sequence number"?

Posted: 30 Aug 2013 10:04 AM PDT

I am dealing with a very large set of databases that are all innodb.

I've encountered this on mysql restart too many times for my comfort:

ibdata files do not match the log sequence number

But I've clearly watched mysql shutdown properly just before the restart when that message happens.

Then it "repairs" right up to the original sequence number with nothing lost.

What is the best approach to deal with and fix this permanently?

Using Percona with innodb_file_per_table=1

Example log:

InnoDB: Initializing buffer pool, size = 80.0G
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: The log sequence numbers 475575972691 and 475575972691 in ibdata files do not match the log sequence number 925369860131 in the ib_logfiles!
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: Percona XtraDB started; log sequence number 925369860131

Note how the final log sequence number now matches what it thought was wrong in the first place, so there was 100% recovery?

So why is the log sequence not being properly written to ibd?

Is it possible shutdown is incomplete somehow?

Thank you for any advice.

ps. I always wonder if I should be asking this on serverfault or here? Is it okay I asked here?

Two Different Sources Taking Transaction Log Backups

Posted: 30 Aug 2013 11:10 AM PDT

I have inherited a database server with SQL Server 2005. The databases have full recovery mode with hourly transaction log backups. There are two sources taking full backups overnight and hourly transaction log backups: Tivoli Data Protection (TDP) which is part of IBM's Tivoli Storage Manager (TSM) as well as a regular SQL Server Agent job.

If a disaster occurred and I needed to recover the database from backup, would I be missing data?

I was thinking that if the SQL transaction log backup fires off, it will mark the committed transactions in the log file as ready to be overwritten. After that happens, users will commit more transactions. When TSM/TDP comes in, it will mark the committed transactions in the log as ready to be overwritten. If I were to do a restore from either backup (unless it's using only the full backup), then it would not include all of the data.
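
One way to see exactly what both tools have been writing, and whether the log backup chain interleaves, is msdb's backup history (a sketch; MyDb is a placeholder, and type 'L' means log backups):

SELECT database_name, type, backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyDb' AND type = 'L'
ORDER BY backup_start_date;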

migration sql 2003 to 2008, Sql side and php side [on hold]

Posted: 30 Aug 2013 08:27 AM PDT

I have a system in PHP using MSSQL 2003. I need to change the database to MSSQL 2008.

I want to know if there are issues I should look out for on the:

  • SQL Side (Query/SP/....)
  • PHP Side (Query, connection type, ...)

An answer will be useful even if it is "don't worry, a standard connection will work" or "there aren't many deprecated functions between 2003 and 2008". Links to other posts answering some of these questions, or pointers on how to check it myself, are also welcome.

Amazon rds lock wait timeout after restart

Posted: 30 Aug 2013 07:07 AM PDT

I have a bunch of Rails application servers which run a particular update query on every page load against a centralised Amazon RDS MySQL (v5.1.63) instance. Recently, due to heavy load, the RDS instance got restarted, and when it came back we started getting "lock wait timeout" errors for the update queries. Can anyone please explain what exactly happened and what I can do to mitigate the problem? The update query is like: "UPDATE TABLE x where x.y = z;"

Check if SQL Agent job is being executed manually

Posted: 30 Aug 2013 08:29 AM PDT

I have a stored procedure that performs database backups. Because I'm looking at Availability Groups within SQL Server 2012, I want to utilise the sys.fn_hadr_backup_is_preferred_replica function within that script to test whether the script should produce the backup.

If the script is run manually instead of as part of a scheduled SQL Agent job, or if the SQL Agent job is executed manually rather than via a schedule, then I want the job to fail with an error message to allow the user to see that the backup has not succeeded due to not being on the preferred replica. If the job is run on a schedule then I obviously don't want it to fail as it would be producing errors and sending out alerts all day long on the server that is not the preferred replica.

The only bit that I don't know how to do is to check whether the job is being executed by a schedule or manually. Is this possible?

Failing that, is there any other way that I can alert a user that the script hasn't produced a backup, whilst not causing the scheduled task to fail and produce an alert?
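
One avenue worth testing (an assumption on my part, not something verified on every version): msdb.dbo.sysjobactivity records how each run was requested, so a job step could look up its own invocation. A sketch:

SELECT TOP (1) ja.run_requested_source  -- 1 = schedule, 4 = user, per the sysjobactivity documentation
FROM msdb.dbo.sysjobactivity AS ja
JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
WHERE j.name = N'MyBackupJob'           -- placeholder job name
ORDER BY ja.run_requested_date DESC;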

SQL Server select with regex

Posted: 30 Aug 2013 12:05 PM PDT

Can we use a regular expression to select items from the database?

The item table is like below:

Table column | name
10.01.02     | a
100.2.03     | b
1021.10.04   | c

Now my problem is that I need to select the code and get the substring, like below:

Table column | name
10.01        | a
100.2        | b
1021.10      | c

Any suggestions using a regular expression or substring?
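
Without regex support in plain T-SQL, nested CHARINDEX gets the prefix up to the second dot; a sketch (the table and column names dbo.Items, code, name are assumed from the example):

SELECT LEFT(code, CHARINDEX('.', code, CHARINDEX('.', code) + 1) - 1) AS short_code,
       name
FROM dbo.Items;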

When to stop splitting tables?

Posted: 30 Aug 2013 07:30 AM PDT

I am creating a league management system, and I came up with the diagram below. I am no database expert, but I feel it would be better to merge the following three tables: season_league, league_division and division_schedule (see the second image). This is for MySQL using Doctrine with Symfony2, if it matters.

What are your opinions? Any suggestions?

[Image: current schema diagram]

[Image: proposed merged schema diagram]

How do I deal with items that I need to repeat monthly?

Posted: 30 Aug 2013 07:20 AM PDT

I have a table where users will enter transactions on a weekly basis. These transactions will either be single items or recurring, i.e. rent would be recurring and should be the same for every month. How should I store these recurring items?

We thought of just adding in 12 new entries so that it at least covers the year (the users will want to look a few months into the future), but then the problem is that if the cost changes, we have to update all of the following entries.

Our other option was to create another table which would store the key of the original entry along with the dates, or the day of the month, that it needs to repeat on.

Is there a better solution than these?
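
A minimal sketch of the second option, with assumed names (dialect details will vary): store the recurrence rule once and expand it at query time, so a cost change becomes a single-row UPDATE instead of touching every future entry.

CREATE TABLE recurring_transaction (
  recurring_id INT AUTO_INCREMENT PRIMARY KEY,
  description  VARCHAR(100)  NOT NULL,
  amount       DECIMAL(10,2) NOT NULL,  -- one row to update when the cost changes
  day_of_month TINYINT       NOT NULL,  -- which day of the month it repeats on
  start_date   DATE          NOT NULL,
  end_date     DATE          NULL       -- NULL means open-ended
);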

pg_dump format => custom vs sql

Posted: 30 Aug 2013 04:17 PM PDT

There are two main formats for pg_dump: custom and plain SQL. The custom format is compressed by default, and not readable if you try to open it. But is it faster to dump into this format as opposed to the SQL format, or is the speed the same?
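
For concreteness, the two invocations being compared (the database name is a placeholder). The custom format's compression can also be turned off, and compression is usually what dominates any speed difference:

pg_dump -Fc mydb > mydb.dump        # custom format, compressed by default
pg_dump -Fc -Z 0 mydb > mydb.dump   # custom format with compression disabled
pg_dump mydb > mydb.sql             # plain SQL format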

Pick one single row each time from either table?

Posted: 30 Aug 2013 05:16 PM PDT

I will run a query and get a list of aID values, and based on that I will try to match against the tables below. For each aID I just want one record, either from tblGAlert or tblEAlert, based on whichever of gDateTime and eDateTime comes first.

CREATE TABLE IF NOT EXISTS `tblGAlert` (
  `gAlertID` int(11) NOT NULL AUTO_INCREMENT,
  `eID` int(5) NOT NULL,
  `aID` int(5) NOT NULL,
  `gEntryStatus` enum('Do','Ad','Ej') NOT NULL,
  `gDateTime` datetime NOT NULL,
  `gInsertDateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `gMessage` varchar(255) NOT NULL,
  PRIMARY KEY (`gAlertID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

CREATE TABLE IF NOT EXISTS `tblEAlert` (
  `eAlertID` int(11) NOT NULL AUTO_INCREMENT,
  `eID` int(5) NOT NULL,
  `aID` int(5) NOT NULL,
  `eDateTime` datetime NOT NULL,
  `eInsertDateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `eMessage` varchar(255) NOT NULL,
  PRIMARY KEY (`eAlertID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

Sample data for tblGAlert:

1,12,1122,'Do',2013-07-13 14:30:19,2013-07-13 15:30:19,''
2,13,1122,'Ad',2013-07-13 14:35:19,2013-07-13 15:35:19,''
3,13,1122,'Ad',2013-07-13 14:38:19,2013-07-13 15:39:19,''
4,14,1122,'Ej',2013-07-13 14:45:19,2013-07-13 15:55:19,''
5,14,1122,'Ej',2013-07-13 14:50:19,2013-07-13 15:56:19,''

Sample data for tblEAlert:

1,1,1122,2013-07-13 14:33:19,2013-07-13 15:35:19,''
2,5,1122,2013-07-13 14:36:19,2013-07-13 15:36:19,''
3,6,1122,2013-07-13 14:37:19,2013-07-13 15:39:19,''
4,7,1122,2013-07-13 14:48:19,2013-07-13 15:55:19,''
5,8,1122,2013-07-13 14:52:19,2013-07-13 15:56:19,''

I want to merge the output of both tables, arranged by gDateTime and eDateTime. Any idea how to merge them?

I have tried the script below, but the result is not right and it hogs my DB server.

SELECT X.*
FROM (
  SELECT gAlertID, 'gType' AS PTYPE, gDateTime AS DATE
  FROM tblGAlert
  WHERE tblGAlert.aID = 2494
  UNION
  SELECT eAlertID, 'eType' AS PTYPE, eDateTime AS DATE
  FROM tblEAlert
  WHERE tblEAlert.aID = 2494
) X
ORDER BY X.`date`
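
A minimal sketch of one way to do this on MySQL 5.x (no window functions): UNION ALL both tables into a common shape, compute the earliest datetime per aID, and join back to keep only that row. Indexing aID in both tables should also help with the load complaint.

SELECT u.*
FROM (
  SELECT aID, gAlertID AS alertID, 'gType' AS ptype, gDateTime AS dt FROM tblGAlert
  UNION ALL
  SELECT aID, eAlertID, 'eType', eDateTime FROM tblEAlert
) u
JOIN (
  SELECT aID, MIN(dt) AS min_dt
  FROM (
    SELECT aID, gDateTime AS dt FROM tblGAlert
    UNION ALL
    SELECT aID, eDateTime FROM tblEAlert
  ) d
  GROUP BY aID
) m ON m.aID = u.aID AND m.min_dt = u.dt;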

Comparing binary 0x and 0x00 turns out to be equal on SQL Server

Posted: 30 Aug 2013 01:27 PM PDT

It seems that SQL Server considers 0x and 0x00 equal values:

SELECT CASE WHEN 0x = 0x00 THEN 1 ELSE 0 END  

This outputs 1.

How can I get true binary bit-for-bit comparison behavior? Also, what are the exact rules under which two (var)binary values are considered equal?

Also note the following behavior:

--prints just one of the values
SELECT DISTINCT [Data]
FROM (VALUES (0x), (0x00), (0x0000)) x([Data])

--prints the length values; DATALENGTH gives 0, 1 and 2
SELECT DATALENGTH([Data]) AS [DATALENGTH], LEN([Data]) AS [LEN]
FROM (VALUES (0x), (0x00), (0x0000)) x([Data])

The background to the question is that I'm trying to deduplicate binary data: I need to GROUP BY binary data, not just compare two values. I'm glad I even noticed this problem.

Note that HASHBYTES does not support LOBs. I'd also like to find a simpler solution.
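
A minimal sketch of a workaround, on the assumption that the padding rule is the culprit: equality on (var)binary pads the shorter value with 0x00 before comparing, which is why 0x, 0x00 and 0x0000 collapse into one group. Grouping by DATALENGTH as well keeps them apart:

SELECT [Data], DATALENGTH([Data]) AS [Bytes], COUNT(*) AS [Cnt]
FROM (VALUES (0x), (0x00), (0x0000)) x([Data])
GROUP BY [Data], DATALENGTH([Data]);   -- three groups instead of one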

Using max for each sub group does not seem to work.

Posted: 30 Aug 2013 07:58 AM PDT

I have a table that looks like this in Oracle 11g:

+----------------------------------------------------------+
| ACCT_NBR | MAIL_TY | ORGA      | PERS       | RUN_DATE   |
+----------------------------------------------------------+
| 123      | ALT     | 2         |            | 21-JAN-13  |
| 123      | ALT     | 2         |            | 22-FEB-13  |
| 123      | ALT     |           | 3          | 23-FEB-13  |
| 124      | PRI     | 4         |            | 24-JAN-13  |
| 124      | PRI     | 4         |            | 25-FEB-13  |
+----------------------------------------------------------+

I need to get the latest RUN_DATE for each group based on the ORGA and PERS columns, so that the result looks like this:

+----------------------------------------------------------+
| ACCT_NBR | MAIL_TY | ORGA      | PERS       | RUN_DATE   |
+----------------------------------------------------------+
| 123      | ALT     | 2         |            | 22-FEB-13  |
| 123      | ALT     |           | 3          | 23-FEB-13  |
| 124      | PRI     | 4         |            | 25-FEB-13  |
+----------------------------------------------------------+

I tried using this query but it doesn't seem to work:

SELECT *
FROM wh_acct a
WHERE a.rundate = (SELECT MAX(a2.rundate)
                   FROM wh_acct a2
                   WHERE a2.ORGA = a.ORGA)
UNION
SELECT *
FROM wh_acct a
WHERE a.rundate = (SELECT MAX(a2.rundate)
                   FROM wh_acct a2
                   WHERE a2.PERS = a.PERS)

Can anyone point me to the right direction?
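
A minimal sketch of the usual Oracle idiom for greatest-per-group, using an analytic function: number the rows within each ORGA/PERS group by RUN_DATE descending and keep the first of each group.

SELECT acct_nbr, mail_ty, orga, pers, run_date
FROM (
  SELECT w.*,
         ROW_NUMBER() OVER (PARTITION BY orga, pers
                            ORDER BY run_date DESC) AS rn
  FROM wh_acct w
)
WHERE rn = 1;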

MySQL table architecture

Posted: 30 Aug 2013 10:17 AM PDT

Background information:

I have a table containing upload information. Every day I get more and more uploads, but most of the queries I run center on information gathered within the last 12 months; frequently an even tighter time scale is used and queries are restricted to information gathered in the last 30 days.

This system has been in existence since 2004, when there were 400 uploads; today there are 2+ million.

Table structure:

CREATE TABLE `data_mediagallery` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `status` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `contenttype` char(40) NOT NULL DEFAULT '',
  `filename` varchar(100) NOT NULL DEFAULT '',
  `injector` char(40) NOT NULL DEFAULT '',
  `hits` int(11) DEFAULT '0',
  `message` longtext,
  `date` datetime DEFAULT NULL,
  `title` varchar(80) NOT NULL DEFAULT '',
  `tags` varchar(255) NOT NULL DEFAULT '',
  `metadata` blob,
  `location` char(8) NOT NULL DEFAULT '',
  `uid` int(11) unsigned NOT NULL DEFAULT '0',
  `filesize` bigint(20) NOT NULL DEFAULT '0',
  `upload` datetime DEFAULT NULL,
  `privacy` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `width` int(10) unsigned NOT NULL DEFAULT '0',
  `height` int(10) unsigned NOT NULL DEFAULT '0',
  `offensive` int(10) unsigned NOT NULL DEFAULT '0',
  `sourcelocation` char(8) NOT NULL DEFAULT '',
  `autoblog` tinyint(1) NOT NULL DEFAULT '0',
  `extension` char(10) NOT NULL DEFAULT '',
  `filetype` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `conversiontime` float NOT NULL DEFAULT '0',
  `converttime` datetime DEFAULT NULL,
  `sender` varchar(100) NOT NULL DEFAULT '',
  `vhost` int(10) unsigned NOT NULL DEFAULT '0',
  `channel` int(10) unsigned NOT NULL DEFAULT '0',
  `rotation` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `ofilesize` bigint(20) NOT NULL DEFAULT '0',
  `moderationstatus` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `rating` decimal(8,6) DEFAULT NULL,
  `votecount` int(10) unsigned NOT NULL DEFAULT '0',
  `url` varchar(150) NOT NULL DEFAULT '',
  `geo_latitude` double DEFAULT NULL,
  `geo_longitude` double DEFAULT NULL,
  `length` decimal(8,2) DEFAULT '0.00',
  `parentid` int(11) NOT NULL DEFAULT '0',
  `language` char(2) NOT NULL DEFAULT '',
  `author` varchar(100) NOT NULL DEFAULT '',
  `context` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `externalid` varchar(255) DEFAULT NULL,
  `originalsaved` bit(1) NOT NULL DEFAULT b'1',
  `hidden` tinyint(4) NOT NULL DEFAULT '0',
  `commentcount` int(11) NOT NULL DEFAULT '0',
  `approvedcomments` int(11) NOT NULL DEFAULT '0',
  `notdeniedcomments` int(11) NOT NULL DEFAULT '0',
  `lastupdatetime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `channelleft` int(10) unsigned NOT NULL DEFAULT '0',
  `originalLocation` char(8) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `upload` (`upload`),
  KEY `vhostupload` (`vhost`,`upload`),
  KEY `vhostmodstatus` (`vhost`,`status`,`moderationstatus`,`uid`),
  KEY `complexfiletype` (`vhost`,`status`,`moderationstatus`,`filetype`,`channel`),
  KEY `vhostcontext` (`vhost`,`moderationstatus`,`context`,`parentid`,`status`,`filetype`),
  KEY `externalid` (`externalid`),
  KEY `externalcomments` (`externalid`),  -- assumed columns; the list was garbled in the question
  KEY `vhostchannel` (`vhost`,`status`,`moderationstatus`,`context`,`channelleft`)
) ENGINE=InnoDB;

Questions

What way of partitioning the table would make the most sense? Does partitioning even make sense here? How do I deal with new data if I do partition?
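
A minimal sketch, untested against this exact table: RANGE partitioning on the year of upload matches the "last 12 months" access pattern. MySQL requires the partitioning column to appear in every unique key, so the primary key has to be widened first (which assumes upload can be made NOT NULL), and a MAXVALUE partition catches new data until you split it off.

ALTER TABLE data_mediagallery
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (id, upload);

ALTER TABLE data_mediagallery
  PARTITION BY RANGE (YEAR(upload)) (
    PARTITION p2011 VALUES LESS THAN (2012),
    PARTITION p2012 VALUES LESS THAN (2013),
    PARTITION p2013 VALUES LESS THAN (2014),
    PARTITION pmax  VALUES LESS THAN MAXVALUE  -- catch-all for new rows
  );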

mongodb user for ubuntu EC2 instance

Posted: 30 Aug 2013 08:17 AM PDT

I am trying to install mongodb on an Ubuntu EC2 instance. However, I am confused about which user the DB should run as:

If I follow: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

Then it says : "mongodb"

If I follow : http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-amazon-ec2/

It says : "mongod"

I think this has left me in an inconsistent state: there was a mongodb process in the ps output, but sudo service mongodb status or stop says: Unknown Instance.

Which user should mongodb run as?
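
A quick way to settle it on the instance itself, whichever guide was followed (the daemon and service names differ between the two documents):

ps -C mongod -o user,comm    # account the daemon runs under, if it is named mongod
ps -C mongodb -o user,comm   # same check for the other name
ls /etc/init/mongo* /etc/init.d/mongo*   # which init script actually exists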

Can't change root password: "The system cannot find the file specified."

Posted: 30 Aug 2013 09:17 AM PDT

I'm trying to change the root password in MySQL on my development machine (I've just installed MySQL, so it currently doesn't have a password), but it keeps failing with the following error message:

The system cannot find the file specified.

I'm using MySQL 5.1.70 (x86, 32-bit) on Windows 7 SP1 (64 bits). I've added MySQL's "bin" directory to my "Path" environment variable.

In the comments of the MySQL documentation, I read that I should have installed the service using the absolute path, so I stopped MySQL, and uninstalled it:

C:\Windows\system32>mysqld --remove  Service successfully removed.  

Then I installed it again, using the absolute path this time:

C:\Windows\system32>C:\web\mysql-5.1.70\bin\mysqld.exe --install  Service successfully installed.  

I started MySQL, and tried to change the password again:

C:\Windows\system32>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!  The system cannot find the file specified.  

I also tried with quotes:

C:\Windows\system32>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'  The system cannot find the file specified.  

I also tried to change the current directory to MySQL's "bin" directory:

C:\Windows\system32>cd C:\web\mysql-5.1.70\bin    C:\web\mysql-5.1.70\bin>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!  The system cannot find the file specified.    C:\web\mysql-5.1.70\bin>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'  The system cannot find the file specified.  

What's wrong?
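
One likely culprit, offered as a guess rather than a confirmed diagnosis: on Windows, cmd.exe treats < as input redirection and does not treat single quotes as quoting characters, so the shell tries to read a file named ZFE5! as input, which produces exactly "The system cannot find the file specified." Double quotes should get the password through intact:

C:\web\mysql-5.1.70\bin>mysqladmin -u root password "Pe%8XiduwOqdZ<ZFE5!"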

Fulltext stoplist replication

Posted: 30 Aug 2013 08:17 PM PDT

In MS SQL Server 2008 R2 I am replicating a table with a fulltext index on it, but the stoplist associated with the replicated fulltext index doesn't replicate.

Is there any way to replicate the stoplist as well?

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 30 Aug 2013 06:41 AM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database of the master server and same is being updated in the slave database server.

But sometimes, for unknown reasons, replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semi-synchronous replication, when the sync breaks the status should change to OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.
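
Worth checking (a guess at the mechanism, not a confirmed diagnosis): after rpl_semi_sync_master_timeout milliseconds without a slave acknowledgement, the master silently falls back to asynchronous replication and keeps committing, so the breakage can predate the moment the status is inspected. These commands show the timeout and whether the fallback has happened:

SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';
SHOW STATUS LIKE 'Rpl_semi_sync_master_no_tx';    -- commits not acknowledged by the slave
SHOW STATUS LIKE 'Rpl_semi_sync_master_clients';  -- semisync slaves currently connected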

sql server database sharding - what to do with common data / non sharded data

Posted: 30 Aug 2013 07:17 PM PDT

We have a very large scale, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow heavier as the company grows. SQL proc optimization has been performed and hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads.

We have decided what particular data should be sharded. It is a subset of our database which is highly utilized.

However, my question is regarding the non-sharded data which is common/universal. An example of data like this may be an Inventory table, or possibly an Employee table, a user table, etc.

I see two options to handle this common/universal data:

1) Design 1: place the common/universal data in an external database. All writes will occur there. This data will then be replicated down to each shard, allowing each shard to read this data and inner join to it in T-SQL procs.

2) Design 2: give each shard its own copy of all common/universal data. Let each shard write locally to these tables and utilize SQL merge replication to update/sync this data on all other shards.

Concerns about design #1

1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now exists on separate SQL instances and in separate databases. You may need to involve MS DTC to see if you can wrap these writes into a distributed transaction since they span databases. Performance is a concern here, and possible rewrites may be involved for procs that write to both sharded and common data.

2) A loss of referential integrity: it is not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips: like #1 above, when you run into a situation in which you must update sharded data and common data, you are going to make multiple round trips to accomplish this since the data is now in separate databases. There is some network latency here, but I am not worried about this issue as much as the above three.

Concerns about design #2

In design #2 each shard gets its own copy of all common/universal data. This means that all code that joins to or updates common data continues to work/run just as it does today. Very little recoding/rewriting is needed from the development team. However, this design completely depends on merge replication to keep the data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from the failure is not great and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a 3rd or 4th design option that I do not see.

Thank you in advance.

Is database normalization dead?

Posted: 30 Aug 2013 06:26 AM PDT

I was brought up old school, where we learned to design the database schema BEFORE the application's business layer (or used OOAD for everything else). I've been pretty good at designing schemas (IMHO :) and normalized only to remove unnecessary redundancy, but not where it impacted speed, i.e. if joins were a performance hit, the redundancy was left in place. But mostly it wasn't.

With the advent of some ORM frameworks like Ruby's ActiveRecord or ActiveJDBC (and a few others I can't remember, but I'm sure there are plenty), it seems they prefer having a surrogate key for every table, even if some have natural primary keys like 'email', breaking 2NF outright. Okay, I understand not too much, but it gets on my nerves (almost) when some of these ORMs (or programmers) don't acknowledge 1-1 or 1-0|1 (i.e. 1 to 0 or 1) relationships. They stipulate that it's just better to have everything as one big table, no matter if it has a ton of NULLs; "today's systems can handle it" is the comment I've heard most often.

I agree that memory constraints did bear a direct correlation to normalization (there are other benefits too :), but in today's time, with cheap memory and quad-core machines, is the concept of DB normalization just left to the texts? As DBAs, do you still practice normalization to 3NF (if not BCNF :)? Does it matter? Is "dirty schema" design good for production systems? Just how should one make the case "for" normalization if it's still relevant?

(Note: I'm not talking about data warehouse star/snowflake schemas, which have redundancy as a part/need of the design, but about commercial systems with a backend database, like StackExchange for example.)

Sql Server Management Studio slow opening new windows

Posted: 30 Aug 2013 10:51 AM PDT

After upgrading both SQL Server and Management Studio from 2008 to 2012, Management Studio 2012 is very slow when new query windows and dialog boxes are opened. Even a right-click on tables is slow. There is typically at least a 5 second delay before I can start doing anything within a window. This happens every time, even if I open the same window twice in a row. What is happening, and how can I fix it?

One Google hit suggested modifying my hosts file, but that didn't help.
