Sunday, March 31, 2013

[SQL Server] Cursor in Stored Procedure


Cursor in Stored Procedure

Posted: 31 Mar 2013 12:53 AM PDT

Hi, I am a SQL newbie and need to update code in a stored procedure that has a cursor. Is it possible to have multiple queries in a cursor? Currently, we check for duplicates in a database table when importing an Excel upload file. Now we also want to check for potential duplicates within the Excel upload file itself, and whether a file record falls into an already existing date range (passing start and end dates). I do not need help with the queries; I just wanted to give a little background information. I need help understanding how to get three queries to work in a cursor and make a decision if duplicates are found.

Can someone please provide sample code on how to place three queries in a cursor, and/or a good reference? Basically, at a high level, this is what I want to do:

Run Qry 1 - check for duplicates in the database table
Run Qry 2 - check for duplicates in the file
Run Qry 3 - check whether the record already exists in a specific date range
If no duplicates, report a "0" count of dups
If a dup is found, report the count and an error type description

Thanks in advance, any help is much appreciated!
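A cursor body is just a batch of T-SQL, so any number of queries can run for each fetched row. Below is a minimal sketch of the pattern only; the table and column names (StagingRows, TargetTable, KeyColumn, StartDate, EndDate) are hypothetical placeholders, since the real schema isn't given:

DECLARE @RecordID int, @DupCount int, @ErrorType varchar(50);

DECLARE import_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT RecordID FROM dbo.StagingRows;   -- rows loaded from the Excel file

OPEN import_cursor;
FETCH NEXT FROM import_cursor INTO @RecordID;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @DupCount = 0;
    SET @ErrorType = NULL;

    -- Qry 1: duplicate already present in the database table
    SELECT @DupCount = COUNT(*)
    FROM dbo.TargetTable t
    JOIN dbo.StagingRows s ON s.KeyColumn = t.KeyColumn
    WHERE s.RecordID = @RecordID;
    IF @DupCount > 0 SET @ErrorType = 'Duplicate in database table';

    -- Qry 2: duplicate within the upload file itself
    IF @ErrorType IS NULL
    BEGIN
        SELECT @DupCount = COUNT(*) - 1
        FROM dbo.StagingRows s
        WHERE s.KeyColumn = (SELECT KeyColumn FROM dbo.StagingRows
                             WHERE RecordID = @RecordID);
        IF @DupCount > 0 SET @ErrorType = 'Duplicate in file';
    END

    -- Qry 3: record overlaps an existing date range
    IF @ErrorType IS NULL
    BEGIN
        SELECT @DupCount = COUNT(*)
        FROM dbo.TargetTable t
        JOIN dbo.StagingRows s ON s.RecordID = @RecordID
        WHERE s.StartDate <= t.EndDate AND s.EndDate >= t.StartDate;
        IF @DupCount > 0 SET @ErrorType = 'Falls in existing date range';
    END

    -- decision point: @DupCount stays 0 when the row is clean
    PRINT CAST(@RecordID AS varchar(10)) + ': '
        + CAST(@DupCount AS varchar(10)) + ' dup(s) '
        + COALESCE(@ErrorType, '');

    FETCH NEXT FROM import_cursor INTO @RecordID;
END

CLOSE import_cursor;
DEALLOCATE import_cursor;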

[how to] How do you find the right Path? [closed]


How do you find the right Path? [closed]

Posted: 31 Mar 2013 03:34 PM PDT

Now, I understand that this might be the wrong place to be posting, and I apologize if it is, but I am out of ideas and out of my depth in trying to understand the path to becoming a DBA.

I have just started to teach myself SQL and I really enjoy it, so naturally the first question that comes to mind is, "How do I turn this into my career?"

I've been searching all the forums I can find to try and understand what I need to do to get on the right path, with no luck. I have found a great many useful websites, but nothing that can help me right now.

So now I turn to you, ladies and gents. I am aware that it won't happen overnight, and I'm not worried about starting at the bottom and working my way up, which brings me to my question(s)...

Where do I start?

How do I start?

What position am I supposed to be applying for?

Thanks in Advance,

Fabian.

Why is there a hash match operator in this very basic query?

Posted: 31 Mar 2013 04:31 PM PDT

I'm beginning to learn a bit about reading execution plans and making queries more efficient.

Consider these two basic queries

select distinct pat_id, drug_class, drug_name from rx

select pat_id, drug_class, drug_name from rx

and their execution plans

[screenshot of the two execution plans]

index being used:

CREATE CLUSTERED INDEX [ix_overlap] ON [dbo].[rx]
(
    [pat_id] ASC,
    [fill_date] ASC,
    [script_end_date] ASC,
    [drug_name] ASC
)

Even though the first query supposedly has the higher cost by a 4:1 margin, it runs faster than the second one. Why does simply adding DISTINCT to the query introduce the (what I assume to always be bad; corrections are welcome) hash match operator? And why does it have the higher query cost relative to the second query if it runs faster?
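One thing worth testing (an assumption on my part, not a confirmed fix): DISTINCT forces an aggregation, and since the clustered index is ordered on (pat_id, fill_date, script_end_date, drug_name), the three selected columns don't arrive pre-sorted, so the optimizer falls back to a Hash Match (Aggregate). An index that presents exactly those columns in order can let it use a cheaper Stream Aggregate instead:

-- Hypothetical supporting index: delivers (pat_id, drug_class, drug_name)
-- already sorted, so DISTINCT can often be satisfied by a Stream Aggregate
-- instead of a memory-hungry Hash Match.
CREATE NONCLUSTERED INDEX ix_rx_distinct
    ON dbo.rx (pat_id, drug_class, drug_name);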

Restore standby database with undo rollback file

Posted: 31 Mar 2013 05:57 PM PDT

I made a full backup of a specific database and restored it with standby recovery, which leaves the database read-only. I tried to insert or edit values in a table in this database, but nothing was inserted.

Those values are stored in the undo rollback file. After I restore the database to read-write, how can I recover the inserted values into the table and check that they exist in the table?
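For reference, a standby database is brought fully online with RESTORE ... WITH RECOVERY (the database name below is a placeholder). Note that recovery uses the undo file to finish rolling back uncommitted transactions; it does not replay changes attempted while the database was in read-only standby:

-- Completes recovery and makes the database read-write.
RESTORE DATABASE YourStandbyDb WITH RECOVERY;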

Limiting number of results in a Partition using OVER(PARTITION BY)

Posted: 31 Mar 2013 05:56 PM PDT

In the following query, why do we have to limit the results returned from each partition by using the clause WHERE foo.row_num < 3 outside of the subquery foo, rather than with WHERE row_num < 3 inside the subquery?

Query

SELECT pid, land_type, row_num, road_name,
       round(CAST(dist_km AS numeric), 2) AS dist_km
FROM (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY loc.pid
               ORDER BY ST_Distance(r.the_geom, loc.the_geom)
           ) AS row_num,
           loc.pid, loc.land_type, r.road_name,
           ST_Distance(r.the_geom, loc.the_geom)/1000 AS dist_km
    FROM ch05.land AS loc
    LEFT JOIN ch05.road AS r
        ON ST_DWithin(r.the_geom, loc.the_geom, 1000)
    WHERE loc.land_type = 'police station'
) AS foo
WHERE foo.row_num < 3
ORDER BY pid, row_num;

Query that does not work

SELECT pid, land_type, row_num, road_name,
       round(CAST(dist_km AS numeric), 2) AS dist_km
FROM (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY loc.pid
               ORDER BY ST_Distance(r.the_geom, loc.the_geom)
           ) AS row_num,
           loc.pid, loc.land_type, r.road_name,
           ST_Distance(r.the_geom, loc.the_geom)/1000 AS dist_km
    FROM ch05.land AS loc
    LEFT JOIN ch05.road AS r
        ON ST_DWithin(r.the_geom, loc.the_geom, 1000)
    WHERE loc.land_type = 'police station'
    AND row_num < 3
) AS foo
ORDER BY pid, row_num;

Error: ERROR: column "row_num" does not exist
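The working form is required because window functions such as ROW_NUMBER() are evaluated after the WHERE clause of their own query level, so the alias row_num only becomes visible one level up. A CTE is an equivalent, sometimes more readable, way to write the same thing (a sketch reusing the question's own names):

WITH foo AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY loc.pid
               ORDER BY ST_Distance(r.the_geom, loc.the_geom)
           ) AS row_num,
           loc.pid, loc.land_type, r.road_name,
           ST_Distance(r.the_geom, loc.the_geom)/1000 AS dist_km
    FROM ch05.land AS loc
    LEFT JOIN ch05.road AS r
        ON ST_DWithin(r.the_geom, loc.the_geom, 1000)
    WHERE loc.land_type = 'police station'   -- ordinary predicate: fine here
)
SELECT pid, land_type, row_num, road_name,
       round(CAST(dist_km AS numeric), 2) AS dist_km
FROM foo
WHERE row_num < 3   -- the window alias is only usable outside its own level
ORDER BY pid, row_num;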

Very fast replication

Posted: 31 Mar 2013 05:05 PM PDT

I used this guide to set up replication between my two DB servers: http://wiki.postgresql.org/wiki/Streaming_Replication

But when I update a row on the master server, it is available on the slave instantly. It should be asynchronous, shouldn't it? What am I missing?
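Asynchronous means the master doesn't wait for the standby to confirm, not that changes arrive slowly; on an otherwise idle network the lag is typically milliseconds. Two standard 9.1 queries show the actual lag (run on the master and the standby, respectively):

-- On the master: one row per connected standby, with sent vs. replayed WAL positions.
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;

-- On the standby: timestamp of the last replayed transaction.
SELECT pg_last_xact_replay_timestamp();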

Thank You

mysql optimize table crash

Posted: 31 Mar 2013 11:27 AM PDT

When I try OPTIMIZE TABLE `table` on a MyISAM table of about 300 MB, the table is marked as crashed and must be repaired. What could cause this problem? The same problem occurs on other tables over 300 MB.
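A first diagnostic pass, assuming `table` stands in for the real name: CHECK reports the exact corruption and REPAIR rebuilds the MyISAM files; and since OPTIMIZE rebuilds the table through temporary files, a full tmpdir is a common culprit worth ruling out:

CHECK TABLE `table` EXTENDED;   -- reports the exact corruption, if any
REPAIR TABLE `table`;           -- rebuilds the MyISAM data/index files
SHOW VARIABLES LIKE 'tmpdir';
SHOW VARIABLES LIKE 'myisam_sort_buffer_size';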

which postgres do i have installed? what directory needs to be in my path to run initdb?

Posted: 31 Mar 2013 10:41 AM PDT

I need to use postgresql with ruby on rails.

I have these directories

/Library/PostgreSQL  

and

/usr/local/Cellar/postgresql/usr:local:Cellar:postgresql:9.2.2  

This 2nd directory name is confusing. In Finder, when I'm in the directory /usr/local/Cellar/postgresql, the entry

usr:local:Cellar:postgresql:9.2.2

is actually one directory with forward slashes in the name (usr/local/Cellar/postgresql...). It's in Terminal where I see the colons.

"brew info postgres" returns the following:

postgresql: stable 9.2.2
http://www.postgresql.org/
Depends on: readline, ossp-uuid
/usr/local/Cellar/postgresql/usr:local:Cellar:postgresql:9.2.2 (2819 files, 39M)
https://github.com/mxcl/homebrew/commits/master/Library/Formula/postgresql.rb

So do I have postgres installed properly? Do I have 2 versions? One in /Library/PostgreSQL and one in /usr/local/Cellar (via Homebrew)?

Regarding trying to use initdb to create my first postgres database:

I have oh-my-zsh installed and I am trying to run the initdb command, but I'm getting command not found. Which directory should I put in my PATH so that the command can be found?

/usr/local/var/postgres/server.log did not exist, so I created it.

pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start says the server is starting, and when I do ps auxwww | grep postgres

I get:

postgres   260   0.0  0.1  2522180   8084   ??  SNs  Tue07PM   0:58.79  /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framework/Versions/A/Support/mdworker MDSImporterWorker com.apple.Spotlight.ImporterWorker.504  

so I think that means it is running, right?

Well, I found the initdb script file in both the Library and brew locations. I navigated to the brew directory containing initdb and tried to run the initdb command, and got zsh: command not found.

I then navigated to the Library directory containing initdb, tried to run it, and got the same result.

Isn't the current directory part of the path because I'm IN IT?

Summary of my questions:

  • Which postgres do I have installed? Is it installed twice, and should I uninstall a version? If so, how?
  • What directory needs to be in my PATH to run initdb, and do I really need to run it? (See the sketch after this list.)
  • Should the postgres server start when I start my computer? I heard it should. How do I set that up?
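On the PATH question: the shell does not search the current directory by default, which is why running initdb from inside its own directory still fails; you have to invoke it explicitly or put its directory on the PATH. A sketch, assuming the Homebrew layout reported by brew info above (adjust the Cellar path to whatever it really is on disk):

# Run it explicitly from its directory...
./initdb /usr/local/var/postgres

# ...or add the Homebrew binaries to the PATH:
export PATH="/usr/local/Cellar/postgresql/9.2.2/bin:$PATH"
initdb /usr/local/var/postgres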

mysql master master replication issue

Posted: 31 Mar 2013 05:18 PM PDT

I have set up 2 database servers with master-master replication. On both servers, the following parameters show up properly when I check the slave status:

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

But there is a database size difference between these 2 servers.

Could someone give me a solution for this?
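File sizes can differ for benign reasons (fragmentation, purged logs), so one way to see whether the data itself diverged is to compare per-table checksums on both servers; a sketch with placeholder names:

-- Run on both masters and compare the output.
CHECKSUM TABLE db_name.some_table;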

Copying my Oracle 10g Express database to another PC

Posted: 31 Mar 2013 07:36 AM PDT

I have Oracle 10g Express. How can I make a copy of my database and application?

I don't want to just make a backup; I want to move my DB and application to another PC.
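For 10g Express, one common route (a sketch, not the only option) is the exp/imp utilities that ship with XE: export the application schema on the old PC, install XE on the new PC, and import the dump there. The user name and password below are placeholders:

rem On the old PC: export the application schema.
exp myapp/mypassword OWNER=myapp FILE=myapp.dmp

rem On the new PC, after installing XE and recreating the user:
imp myapp/mypassword FROMUSER=myapp TOUSER=myapp FILE=myapp.dmp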

MySql Cluster for educational use

Posted: 31 Mar 2013 09:50 AM PDT

I am student and I am writing Master of Science paper about in-memory databases. I want to use MySql Cluster for benchmarks in my work.
Can I use MySql Cluster free of charge for educational purposes or do I need to pay for it?

Oracle 11g R2 on CentOS 6.3, and Net Configuration Assistant

Posted: 31 Mar 2013 10:41 AM PDT

This is my first time installing Oracle 11g R2 on Linux.

I followed all the instructions in the Oracle documentation.

But during the installation process, the Net Configuration Assistant failed.

I continued the process, and after it finished, I couldn't connect with sqlplus.

This is my tnsnames.ora

# tnsnames.ora Network Configuration File: /Oracle/u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.localhost)
    )

and listener.ora

# listener.ora Network Configuration File: /Oracle/u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    )
  )

ADR_BASE_LISTENER = /Oracle/u01/app/oracle

Output of lsnrctl status

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 29-MAR-2013 02:35:48

Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost.localdomain)(PORT=1521)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused

I opened ./netca to reconfigure the listener again.

But the program told me that the port is in use.

I deleted the listener and added a new one, but got the same warning that the port is in use.

Solution:

Finally I found the solution: I hadn't changed the permissions on the directory containing the Oracle software to be owned by the oracle user and the oinstall group. I think there are some scripts or programs related to the listener that the oracle user could not run.
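For reference, the ownership fix described above would look something like this (the path is the Oracle base shown in the config files; adjust to your layout):

# Give the oracle user and the oinstall group ownership of the Oracle base directory.
chown -R oracle:oinstall /Oracle/u01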

mysql second slave not syncying while first slave works fine

Posted: 31 Mar 2013 10:14 AM PDT

I have a master (m) - slave (s1) setup using MySQL 5.1.45.

When I try to add a second slave (s2), the slave lags behind and never catches up on the sync.

Even after having synced s2 with the whole system offline, so that Seconds_Behind_Master = 0, after a few hours s2 gets out of sync again.

The strange thing is that s1 is always in sync.

Any ideas?

SHOW SLAVE STATUS \G  (on slave2)
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: xxx.xxx.xxx.xxx
                Master_User: xxxx_xxxx5
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.013165
        Read_Master_Log_Pos: 208002803
             Relay_Log_File: xxxxxxxxxx-relay-bin.000100
              Relay_Log_Pos: 1052731555
      Relay_Master_Log_File: mysql-bin.013124
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB: xxxxxxxxx
        Replicate_Ignore_DB:
         Replicate_Do_Table:
     Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
                 Last_Errno: 0
                 Last_Error:
               Skip_Counter: 0
        Exec_Master_Log_Pos: 1052731410
            Relay_Log_Space: 44233859505
            Until_Condition: None
             Until_Log_File:
              Until_Log_Pos: 0
         Master_SSL_Allowed: No
         Master_SSL_CA_File:
         Master_SSL_CA_Path:
            Master_SSL_Cert:
          Master_SSL_Cipher:
             Master_SSL_Key:
      Seconds_Behind_Master: 69594
Master_SSL_Verify_Server_Cert: No
              Last_IO_Errno: 0
              Last_IO_Error:
             Last_SQL_Errno: 0
             Last_SQL_Error:

iperf results between servers:

M -> s2
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    502 MBytes    420 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.05 GBytes    902 Mbits/sec

M -> s1
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec    637 MBytes    534 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    925 MBytes    775 Mbits/sec

vmstat for s2

vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0    268 126568 199100 22692944    0    0   100   836    8   81  1  0 96  3

vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free    buff  cache    si   so    bi    bo   in   cs us sy id wa st
 0  0    268 1150144 197128 21670808    0    0   100   835    9   81  1  0 96  3  0
 0  0    268 1144464 197160 21674940    0    0   644  3096 1328 1602  0  0 97  2  0
 0  2    268 1140680 197176 21679624    0    0   846  5362 1002 1567  0  0 98  2  0
 0  1    268 1135332 197192 21685040    0    0   960  3348  850 1193  0  0 98  1  0
 0  0    268 1130776 197204 21688752    0    0   576  2894  978 1232  0  0 98  2  0
 0  0    268 1127060 197264 21693556    0    0   586  5202 1075 1505  0  0 97  3  0
 0  0    268 1122184 197272 21698412    0    0   896  1160  614  727  0  0 98  1  0
 0  0    268 1118532 197300 21702780    0    0   586  5070 1279 1708  0  0 93  6  0
 0  0    268 1114000 197324 21705820    0    0   402  1522  947  942  0  0 95  4  0
 0  0    268 1109708 197336 21710188    0    0   704  9150 1224 2109  0  0 97  2  0

top output on s2

top - 14:44:25 up 16:36,  1 user,  load average: 1.62, 1.47, 1.42
Tasks: 140 total,   1 running, 139 sleeping,   0 stopped,   0 zombie
Cpu0  :  2.9%us,  1.1%sy,  0.0%ni, 73.8%id, 21.8%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu1  :  0.8%us,  0.3%sy,  0.0%ni, 95.5%id,  3.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.6%us,  0.3%sy,  0.0%ni, 97.7%id,  1.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.5%us,  0.2%sy,  0.0%ni, 98.9%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  24744184k total, 24005508k used,   738676k free,   199136k buffers
Swap:  1050616k total,      268k used,  1050348k free, 22078920k cached

Mysqldump from the Amazon RDS

Posted: 31 Mar 2013 10:02 AM PDT

I have a MySQL database deployed on Amazon Web Services RDS. I am running a crontab job which runs the mysqldump command. The problem is that it freezes the RDS instance, and my website is down, completely inaccessible, for almost half an hour. Is there any way to stop the freeze, so the website stays accessible while the dump runs?
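If the tables are InnoDB, the usual mitigation (worth verifying against your actual engine mix) is to dump inside a single consistent-read transaction instead of taking table locks:

# --single-transaction avoids LOCK TABLES for InnoDB tables; --quick streams
# rows instead of buffering whole tables. Host and credentials are placeholders.
mysqldump --single-transaction --quick \
    -h myinstance.xxxx.rds.amazonaws.com -u admin -p mydb > mydb.sql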

MySQL - ERROR 1045 (28000): Access denied for user

Posted: 31 Mar 2013 10:25 AM PDT

I just installed a fresh copy of Ubuntu 10.04.2 LTS on a new machine. I logged into MySQL as root:

david@server1:~$ mysql -u root -p123  

I created a new user called repl. I left the host blank, so the new user may have access from any location.

mysql> CREATE USER 'repl' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

I checked the user table to verify the new user repl was properly created.

mysql> select host, user, password from user;
+-----------+------------------+-------------------------------------------+
| host      | user             | password                                  |
+-----------+------------------+-------------------------------------------+
| localhost | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| server1   | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| 127.0.0.1 | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| ::1       | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| localhost |                  |                                           |
| server1   |                  |                                           |
| localhost | debian-sys-maint | *27F00A6BAAE5070BCEF92DF91805028725C30188 |
| %         | repl             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
+-----------+------------------+-------------------------------------------+
8 rows in set (0.00 sec)

I then exit and try to log in as user repl, but access is denied.

david@server1:~$ mysql -u repl -p123
ERROR 1045 (28000): Access denied for user 'repl'@'localhost' (using password: YES)
david@server1:~$ mysql -urepl -p123
ERROR 1045 (28000): Access denied for user 'repl'@'localhost' (using password: YES)
david@server1:~$

Why is access denied?
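Note the two anonymous-user rows (''@localhost and ''@server1) in the user table above. The likely mechanics, worth verifying: when connecting from the local machine, MySQL matches the more specific anonymous ''@localhost entry before 'repl'@'%', so the login is evaluated as the anonymous user, whose empty password doesn't match. Two common remedies:

-- Option 1: drop the anonymous accounts that shadow 'repl'@'%'.
DROP USER ''@'localhost';
DROP USER ''@'server1';

-- Option 2: create an explicit localhost entry for repl.
CREATE USER 'repl'@'localhost' IDENTIFIED BY '123';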

For a InnoDB only DB, which of these elements can be removed?

Posted: 31 Mar 2013 10:41 AM PDT

So, I'm trying to set up a Drupal 7 my.cnf file that combines best practices from various performance blogs. I'm realizing, though, that some of them are older than others, and many don't assume InnoDB. So of this list of settings, which are irrelevant if you're building for InnoDB?

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
open-files-limit = 4096

[mysqld]
port = 3306
user = mysql

default_storage_engine
default-storage-engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid_file = /var/run/mysqld/mysqld.pid
basedir = /usr
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
local-infile = 0
automatic_sp_privileges = 0
safe-user-create = 1
secure-auth = 1
secure-file-priv = /tmp
symbolic-links = 0
key_buffer_size = 32M
myisam-recover = BACKUP,FORCE
concurrent_insert = 2
max_allowed_packet = 16M
max_connect_errors = 1000000
datadir = /var/lib/mysql
tmp_table_size = 64M
max_heap_table_size = 64M
query_cache_type = 1
query_cache_size = 0
query_cache_limit = 8M
query_cache_min_res_unit = 1K
default-storage-engine = InnoDB
thread_stack = 256K
thread_cache_size = 128
max_connections = 128
open_files_limit = 65535
skip-locking
skip-bdb
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_cache_size = 256K
sync_binlog = 256
expire_logs_days = 14
max_binlog_size = 1G
binlog_do_db = include_database_name
binlog_ignore_db = include_database_name
max_user_connections = 150
key_buffer = 16M
key_cache_block_size = 4K
bulk_insert_buffer_size = 8M
myisam_sort_buffer_size = 64M
join_buffer_size = 8M
read_buffer_size = 2M
sort_buffer_size = 3M
read_rnd_buffer_size = 64M
table_cache = 4096
table_definition_cache = 4096
table_open_cache = 16384
optimizer_search_depth = 4
collation-server = utf8_general_ci
interactive_timeout = 400
wait_timeout = 300
connect_timeout = 10
thread_concurrency = 8
back_log = 2048
open-files = 10000
query_prealloc_size = 65536
query_alloc_block_size = 131072

MySQL inserts/deletes slowing down after a while after Start/Reboot

Posted: 31 Mar 2013 10:49 AM PDT

The setup here is: OS: Ubuntu 12.04 (1 core and 1.75 GB of RAM); MySQL version: 5.5.

There are about 8 to 9 tables in the database. On initially starting MySQL, the queries are fast, at around 0.05-0.10 seconds; however, after some time the INSERTs and DELETEs slow down to around 1-2 seconds, while the SELECTs are still around 0.05-0.10 seconds.

After rebooting the system, the situation reverts to how it was at the beginning of the previous paragraph: everything is fast, but then INSERTs and DELETEs slow down tremendously after a while.

Note: I'm just doing simple inserts and deletes on very few records, and the database starts with empty tables.

Any insight?

Edit: Create Table of 1 of the Tables

CREATE TABLE `FeedUps` (
  `post_id` int(10) unsigned DEFAULT NULL,
  `liker` int(10) unsigned DEFAULT NULL,
  UNIQUE KEY `post_id` (`post_id`,`liker`),
  KEY `liker` (`liker`),
  CONSTRAINT `FeedUps_ibfk_1` FOREIGN KEY (`post_id`)
      REFERENCES `Feed` (`post_id`) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `FeedUps_ibfk_2` FOREIGN KEY (`liker`)
      REFERENCES `Users` (`user_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 31 Mar 2013 12:58 PM PDT

As my title mentions, I have a Master and a Slave database.

The Master is for operational data and my Slave is mainly for reporting.

The issue is that I need to create extra tables on the reporting side that can't be on the master, but the way my replication is currently set up (the simplest one mentioned in the official doc), this breaks replication.

How could I add tables on the Slave without the Master caring about it? Is it even possible?
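Replication itself doesn't forbid extra tables on a slave; what usually breaks is a conflicting statement from the master, or slave-side writes to replicated tables. One approach (a sketch; the option is a real MySQL replication filter, the schema name is a placeholder) is to keep the reporting-only tables in their own schema and filter it on the slave:

# my.cnf on the slave: never apply replicated events for the reporting schema.
[mysqld]
replicate-wild-ignore-table = reporting.%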

MySQL 5.1.67 - Replication failure Mysql, Master Log Truncated or corrupted

Posted: 31 Mar 2013 01:11 PM PDT

I have run into some issues with replication.

After an in-place upgrade of MySQL from 5.0.77 to 5.1.67 rel 14.4 (Percona), replication between a master and multiple slaves is causing issues. The slaves are at the same version as the master.

The replication mode is mixed (SBR and RBR).

The master seems to be truncating the log incorrectly, seemingly at random, and slaves subsequently stop replicating due to a corrupt log being sent through.

mysqlbinlog also segfaults when trying to read the log.

This is the error being thrown by the slaves (table names/server names obfuscated):

Last_Error: Could not execute Update_rows event on table ozdb1.tableA; Incorrect string >value: '\x8EIQ\x13)\x00...' for column 'Description1' at row 2, Error_code: 1366; >Incorrect string value: '\x8E\x08\x0F\x0F\x0F\x0F...' for column 'Description2' at row >1, Error_code: 1366; Incorrect string value: '\xBD\x06DINI...' for column 'Description3' >at row 1, Error_code: 1366; Corrupted replication event was detected, Error_code: 1610; >handler error HA_ERR_CORRUPT_EVENT; the event's master log apdb041-bin.005603, >end_log_pos 1070537659

Currently the only way to recover is via a full refresh of the slaves.

Would there be an issue on the master server? Nothing in disk / network / logging indicates a reason why the MySQL master would incorrectly truncate the log.

What could be causing the master to incorrectly truncate or not close off the log properly?

I'd be happy to hear about any extra detail which might be required to help answer this question!

Replication issue - CREATE SELECT alternative?

Posted: 31 Mar 2013 11:58 AM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

As CREATE ... SELECT is DDL, if replication attempts to update some rows in the same tables as the SELECT statement, replication is blocked until the CREATE ... SELECT finishes.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of a SELECT ... INTO OUTFILE then LOAD DATA INFILE, but that would fill up our disks, as BI guys like to do... :)
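One pattern worth considering (a sketch with placeholder names): split the DDL from the DML. CREATE TABLE on its own is instantaneous, and loading in bounded chunks lets replication interleave between batches instead of waiting out one multi-million-row CREATE ... SELECT:

CREATE TABLE bi_copy LIKE source_table;

-- Load in id ranges; repeat for successive ranges so each statement stays short.
INSERT INTO bi_copy
SELECT * FROM source_table
WHERE id BETWEEN 1 AND 1000000;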

Max.

SQL Server replication subscriptions marked as inactive

Posted: 31 Mar 2013 02:58 PM PDT

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.
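As far as I know the mechanism can't be disabled outright: the distribution cleanup job deactivates subscriptions that fall further behind than the distribution database's maximum retention. Raising that window makes deactivation much less likely (a sketch; 96 hours is an arbitrary example value):

EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'max_distretention',
    @value = N'96';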

Thank you.

How to drop a DB2 instance when the instance owner was removed

Posted: 31 Mar 2013 09:58 AM PDT

This is a real sticky situation. I was handed a machine (running AIX 7.1), and my first task was to re-install the DB2 server on it. But someone before me had conveniently removed the instance owner account, and probably recreated it. Now the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop this instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.
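Two standard DB2 instance utilities that may help untangle the mismatch (the install path and instance name below are placeholders for your system's):

# List the instances this DB2 copy knows about.
/opt/IBM/db2/V9.7/instance/db2ilist

# Drop the instance as root once it is stopped.
/opt/IBM/db2/V9.7/instance/db2idrop db2inst1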

Thanks

How to Convert Horizontal to Vertical Array?

Posted: 31 Mar 2013 10:58 AM PDT

I need to create a query (suitable for Standard Edition) that displays data from multiple columns (Columns 1-6, with corresponding Date Started and Date Completed data) vertically, but also has the column name in a preceding column to identify it, along with other data (Record Number, Status).

Sample data:

+--------------+--------+------------+-------------+---------------+  | RecordNumber | Status | ColumnName | DateStarted | DateCompleted |  +--------------+--------+------------+-------------+---------------+  |            1 | Open   | Column 1   | 2012-01-01  | 2012-02-01    |  |            2 | Hold   | Column 2   | 2012-01-03  | 2012-03-01    |  |            1 | Open   | Column 3   | 2012-02-05  | 2012-04-06    |  |            3 | Closed | Column 4   | 2012-05-10  | 2012-07-25    |  |            2 | Hold   | Column 5   | 2012-03-09  | 2012-04-01    |  |            1 | Open   | Column 6   | 2012-10-10  | 2012-12-12    |  +--------------+--------+------------+-------------+---------------+  
DECLARE @Data AS TABLE
(
    RecordNumber    integer NOT NULL,
    [Status]        varchar(10) NOT NULL,
    ColumnName      varchar(10) NOT NULL,
    DateStarted     date NOT NULL,
    DateCompleted   date NOT NULL
);

INSERT @Data
(
    RecordNumber,
    [Status],
    ColumnName,
    DateStarted,
    DateCompleted
)
VALUES
    (1, 'Open', 'Column 1', '20120101', '20120201'),
    (2, 'Hold', 'Column 2', '20120103', '20120301'),
    (1, 'Open', 'Column 3', '20120205', '20120406'),
    (3, 'Closed', 'Column 4', '20120510', '20120725'),
    (2, 'Hold', 'Column 5', '20120309', '20120401'),
    (1, 'Open', 'Column 6', '20121010', '20121212');
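Assuming the source is a wide table with six (started, completed) column pairs (the WideTable and ColNStarted/ColNCompleted names below are hypothetical, since the source layout isn't shown), CROSS APPLY (VALUES ...) unpivots the pairs and works in Standard Edition:

SELECT w.RecordNumber,
       w.[Status],
       v.ColumnName,
       v.DateStarted,
       v.DateCompleted
FROM dbo.WideTable AS w
CROSS APPLY (VALUES
    ('Column 1', w.Col1Started, w.Col1Completed),
    ('Column 2', w.Col2Started, w.Col2Completed),
    ('Column 3', w.Col3Started, w.Col3Completed),
    ('Column 4', w.Col4Started, w.Col4Completed),
    ('Column 5', w.Col5Started, w.Col5Completed),
    ('Column 6', w.Col6Started, w.Col6Completed)
) AS v (ColumnName, DateStarted, DateCompleted)
WHERE v.DateStarted IS NOT NULL;   -- skip pairs that were never filled in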

Ensure correct username when using pg_restore

Posted: 31 Mar 2013 07:58 AM PDT

I have just installed Postgres 9.1.6 on a local Ubuntu server. Now I'm trying to restore a database dump from a database on Heroku. The local database is set up like this:

sudo -u postgres psql -c "create user app_user with password 'pass';"  sudo -u postgres psql -c "create database app_production owner app_user;"  

Now, when I try to restore the dump, I use the following command:

pg_restore --verbose --schema=public --no-acl --no-owner --jobs=8 --exit-on-error --username=app_user --dbname=app_production /tmp/app_production.dump  

Now in psql, using \l to see ownerships, I get the following:

                                     List of databases
      Name        |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------------+-----------+----------+-------------+-------------+-----------------------
 app_production   | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres         | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                  |           |          |             |             | postgres=CTc/postgres
 template1        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                  |           |          |             |             | postgres=CTc/postgres

As you can see, the ownership of the app_production database has now turned from app_user to postgres. I would have expected the owner of the app_production database to still be app_user, so what am I doing wrong?

BTW, the --schema=public was added because I was getting a weird error:

"Could not execute query: ERROR: must be owner of extension plpgsql"

Another thing: the owner of the dump is the user the database had on Heroku, which would be something like 'jebf473b73bv73v749b7'.
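For what it's worth, --no-owner only suppresses ownership of objects inside the dump; it never changes who owns the database itself, which was fixed at CREATE DATABASE time. If the listing disagrees with what you created, ownership can be reasserted directly (using the names above):

-- Run as a superuser.
ALTER DATABASE app_production OWNER TO app_user;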

How to script out push subscription creation at the subscriber?

Posted: 31 Mar 2013 12:58 PM PDT

I'm trying to set up a push subscription to a SQL Server publication from the subscriber.

I could set up the subscription at the publisher using the Replication Wizard in Management Studio. However, I would prefer to script the process relative to the subscriber so I can automate the deployment of a new SQL Server subscriber instance.

Initially, I'm happy to prompt for the name of the publisher before deployment. If I can get this working, I will look for a way to inject the correct value for my environment automatically.

What is a simple way to do this for a SQL Server instance that has to create multiple subscriptions at different publishers?

I'm open to using any supported SQL Server scripting solution: SMO, RMO, Sqlcmd, WMI, PSDrive, even pure T-SQL.

I've attempted to solve this problem in two ways. The first is a complete solution using T-SQL, but it involves some manual steps.

Using T-SQL

I have a manual solution in T-SQL. The solution is based on the output of the Management Studio Replication Script Generator output.

Using Management Studio, I run the following script to generate a T-SQL script that I can run at the publisher:

PRINT N'
EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @destination_db = ''SubscriberDatabase'',
  @subscription_type = N''Push'',
  @sync_type = N''automatic'',
  @article = N''all'',
  @update_mode = N''read only'',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @subscriber_db = ''SubscriberDatabase'',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N''Distributor'';';

On the MYSUBSCRIBER instance, the output would look like this:

EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @destination_db = 'SubscriberDatabase',
  @subscription_type = N'Push',
  @sync_type = N'automatic',
  @article = N'all',
  @update_mode = N'read only',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @subscriber_db = 'SubscriberDatabase',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N'Distributor';

I copy the output and execute the script at the publisher instance to set up the subscription.

I think I can't automate this in pure T-SQL without editing the script before running it, because T-SQL by design does not handle user input.

Using PowerShell and RMO

PowerShell has simple ways to process user input, so this seems like a good way to prototype the automation process.

MSDN has an eight-step guide to set up a push subscription using the .NET Replication Management Objects (RMO).

Here are the first two steps:

  1. Create a connection to the Publisher by using the ServerConnection class.
  2. Create an instance of the TransPublication class by using the Publisher connection from step 1. Specify Name, DatabaseName, and ConnectionContext.

I'm trying to translate these steps into a PowerShell script, but I can't get past step 2.

In the following code examples, I use fictional object names. I believe this does not affect the answerability of the question because the error message is identical when I use the real object names.

First attempt: setting the properties

My first attempt is to create the TransReplication object then set its properties. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication
$Publication.Name = 'MyPublication'
$Publication.DatabaseName = 'MyDatabase'
$Publication.ConnectionContext = $Publisher

When I execute this script, I see the following error:

Exception setting "ConnectionContext": "Cannot convert the "server='(local)';Trusted_Connection=true;multipleactiveresultsets=false" value
of type "Microsoft.SqlServer.Management.Common.ServerConnection" to type "Microsoft.SqlServer.Management.Common.ServerConnection"."
At line:8 char:14
+ $Publication. <<<< ConnectionContext = $Publisher
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : PropertyAssignmentException

It looks like it's failing because it can't convert the type ServerConnection to the type ServerConnection. I don't understand how this could fail for the stated reason, because the value is already of the required type.

Second attempt: overloading the constructor

My second attempt is to specify the property values of the TransReplication object in the constructor. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher

When I execute this script, I see the following error:

New-Object : Cannot find an overload for "TransPublication" and the argument count: "3".
At line:5 char:26
+ $Publication = New-Object <<<<  -TypeName Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher
    + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodException
    + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand

It looks like the New-Object cmdlet can't find the three-argument constructor documented by MSDN:

public TransPublication(
    string name,
    string databaseName,
    ServerConnection connectionContext
)


As far as I can tell, I'm overloading the constructor correctly.

Am I doing something wrong? Is there something unusual about my environment? Am I better off using another solution?

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 31 Mar 2013 01:58 PM PDT

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following, which will import the file; however, progress is not shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity a numeric source for the progress, but I don't really know how to obtain the number.

Can anyone help me?
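The sqlite3 CLI doesn't report progress itself, but pv can stand in for it: pv knows the input file's size, and with -n it prints the percentage consumed as bare numbers, which is exactly the stream zenity --progress expects (assumes the pv utility is installed):

# pv -n writes integer percentages (0-100) to stderr as the file is consumed.
(pv -n import_file.sql | sqlite3 DATABASE) 2>&1 | zenity --progress --percentage=0 --auto-close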

Why is Database Administration so hard?

Posted: 31 Mar 2013 05:26 AM PDT

I know a lot of database administrators, and they are all over 28-29 years old.

Is all database administration like that? I mean, is it about gaining at least 7-8 years of experience?

Or is being a database administrator that hard?

Are there any good tools for monitoring postgresql databases

Posted: 31 Mar 2013 11:33 AM PDT

I am planning to have a few highly loaded PostgreSQL databases very soon. I have some experience managing MySQL databases under high load, but now we have to use PostgreSQL.

I want to know the best tools for day-to-day database management and status reporting. (Of course the console is the best one, but I want to know about other options too.)

All experience is welcome!

How do I list all databases and tables using psql?

Posted: 31 Mar 2013 08:45 PM PDT

I am trying to learn PostgreSQL administration and have started learning how to use the psql command line tool.

When I log in with psql --username=postgres, how do I list all databases and tables?

I have tried \d, d and dS+ but nothing is listed. I have created two databases and a few tables with pgAdmin III, so I know they should be listed.
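For reference, the standard psql meta-commands for this: \l lists all databases, \c connects to one, and \dt lists the tables in the database you are currently connected to; \d shows nothing here because the default postgres database you land in has no user tables:

\l              -- list all databases
\c mydatabase   -- connect to one of them
\dt             -- list tables in the current database
\d mytable      -- describe one table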

[MS SQL Server] Script to calculate when a job ended.

[MS SQL Server] Script to calculate when a job ended.


Script to calculate when a job ended.

Posted: 10 Mar 2013 12:31 AM PST

I need to alter the script listed below to calculate the DateTime that the job completed, and sort on that derived column in descending order.

select job_name, MAX(run_datetime) AS run_datetime, MAX(run_duration) AS run_duration
from
(
    select job_name, run_datetime,
        SUBSTRING(run_duration, 1, 2) + ':' + SUBSTRING(run_duration, 3, 2) + ':' + SUBSTRING(run_duration, 5, 2) AS run_duration
    from
    (
        select DISTINCT j.name as job_name,
            run_datetime = CONVERT(DATETIME, RTRIM(run_date)) + (run_time * 9 + run_time % 10000 * 6 + run_time % 100 * 10) / 216e4,
            run_duration = RIGHT('000000' + CONVERT(varchar(6), run_duration), 6)
        from msdb..sysjobhistory h
        inner join msdb..sysjobs j on h.job_id = j.job_id
        WHERE j.name NOT LIKE 'Backup%'
        AND j.name NOT LIKE 'Cleanup%'
        AND j.name NOT LIKE 'Shrink%'
        AND j.name <> 'WMI_Disk_Space_Notification'
        AND j.name <> 'syspolicy_purge_history'
    ) t
) t
WHERE run_dateTime >= '2013-03-09' AND run_dateTime < '2013-03-10'
GROUP BY job_name
order by job_name, run_datetime

In addition to displaying the job_name, run_datetime and duration, I want to display and sort by the Job_End_DateTime. Any help would be greatly appreciated.
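Since sysjobhistory's run_duration is an integer in HHMMSS form, one way to derive the end time (a sketch against the same msdb tables; the job-name filters are omitted for brevity) is to split it into seconds and DATEADD it onto the computed start time:

SELECT j.name AS job_name,
       CONVERT(datetime, RTRIM(h.run_date))
           + (h.run_time * 9 + h.run_time % 10000 * 6 + h.run_time % 100 * 10) / 216e4 AS run_datetime,
       DATEADD(SECOND,
               (h.run_duration / 10000) * 3600      -- hours
             + (h.run_duration / 100 % 100) * 60    -- minutes
             + (h.run_duration % 100),              -- seconds
               CONVERT(datetime, RTRIM(h.run_date))
                   + (h.run_time * 9 + h.run_time % 10000 * 6 + h.run_time % 100 * 10) / 216e4
       ) AS job_end_datetime
FROM msdb..sysjobhistory h
INNER JOIN msdb..sysjobs j ON h.job_id = j.job_id
ORDER BY job_end_datetime DESC;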

Version Change in Change Tracking

Posted: 30 Mar 2013 11:49 PM PDT

Hi, if any INSERT, UPDATE or DELETE operation happens in the table, will a new version (SYS_CHANGE_VERSION) get created or not? I have found that for some batches of insert operations the same version (SYS_CHANGE_VERSION, SYS_CHANGE_CREATION_VERSION) got created.

Thanks, Anupam
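That behaviour is expected as far as I know: change tracking assigns versions per committed transaction, not per row, so every row changed in one transaction shares a version. A quick way to see it (the tracked table name is hypothetical):

BEGIN TRAN;
INSERT INTO dbo.Tracked VALUES (1), (2), (3);   -- three rows, one transaction
COMMIT;
SELECT CHANGE_TRACKING_CURRENT_VERSION();        -- bumped once, not three times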

[SQL Server 2008 issues] Low Procedure Cache Hit Rate means no use of stored procedures?


Low Procedure Cache Hit Rate means no use of stored procedures?

Posted: 30 Mar 2013 06:46 AM PDT

Hi everyone. I'm not really understanding how the procedure cache works in SQL Server 2008 R2.

I have 100 GB of RAM allocated to SQL Server. Of that, 8.30 GB is allocated to the procedure cache. Spotlight on SQL Server Enterprise is telling me that the hit rate is 59.2%. The recompile rate is near zero.

When I query the procedure cache, there are only three plans from the application being stored: two triggers and one proc. The rest are maintenance tasks from msdb. The application is very busy 24/7, so I'm wondering why the procedure cache is not full. A Profiler trace of cache inserts shows constant inserts of select queries, everything from a simple select * from <table> to more complex selects with multiple joins.

I'm wondering if this means the application has to compile every query that hits it, because the application does not use stored procedures with parameters. I wonder how inefficient that is and whether there is anything that can be done.

Thanks for reading and helping me to understand.

Howard
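One way to confirm the ad hoc theory directly from the DMVs (standard 2008 R2 views): break the plan cache down by object type and count single-use plans. A cache dominated by single-use Adhoc entries matches the behaviour described:

SELECT objtype,
       COUNT(*) AS plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS cache_mb,
       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS single_use_plans
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY cache_mb DESC;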

Problem having restoring database backup file on secondary server

Posted: 19 Mar 2013 06:41 AM PDT

Hi, for the last two days at one of our client sites we have been trying to figure out why it's taking two days to restore database backup files on the secondary server. The restore process is still going on and is stuck at 98%, and there are no other processes on the secondary server. Using sp_who2 I could find only two lastwaittypes: BACKUPTHREAD and BACKUPIO. How should I proceed with the analysis? I mean, does the disk have some bottleneck problem? The database we are trying to restore has filegroups; does that have anything to do with it?
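A quick way to watch the restore from another session (a standard DMV; percent_complete is populated for RESTORE commands):

SELECT session_id,
       command,
       percent_complete,
       wait_type,
       estimated_completion_time / 60000 AS est_minutes_left
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';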

Eliminate FileGroup and move contents to Primary FG

Posted: 30 Mar 2013 02:24 PM PDT

Hi there, I'm looking for some guidance on how to resolve something I've been tasked with. I am working on a DB that has a primary filegroup and one user-defined filegroup. I am looking to consolidate all the data stored in the user-defined filegroup into the primary filegroup, and to remove any remnants of the user-defined filegroup from the database. Any assistance would be greatly appreciated!
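The usual mechanics, sketched with placeholder object names: rebuilding each clustered index onto [PRIMARY] with DROP_EXISTING moves that table's data (heaps need a temporary clustered index first); once the user filegroup is empty, its file and then the filegroup itself can be removed:

CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];

ALTER DATABASE MyDb REMOVE FILE MyUserFile;
ALTER DATABASE MyDb REMOVE FILEGROUP MyUserFG;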

Unplanned outages causing bankruptcy

Posted: 30 Mar 2013 07:29 AM PDT

* I wasn't sure where to ask this on the forum, so I decided here was best. *

For my thesis in graduate school I am choosing to do a paper and presentation on unplanned database unavailability and the losses it can cause, including losing money or going bankrupt. I'm trying to find information about companies that have gone bankrupt because of an unplanned database outage. I am going to present a case study on such a company. Is there any publicly available information about such a company? Several Google searches have not yielded anything for me. Maybe I'm using the wrong combination of keywords.

Queries and tables

Posted: 30 Mar 2013 03:13 AM PDT

Hi! I was wondering if I could find some queries and database tables to go with them. I would like to have some complicated tasks for which I need to write queries, and some tables with inserted rows to check whether my queries are good. It's too boring and takes too much time to make up my own queries and create tables.

Saturday, March 30, 2013

[how to] Executing sys.dm_fts_parser without sysadmin server role


Executing sys.dm_fts_parser without sysadmin server role

Posted: 30 Mar 2013 08:41 PM PDT

A web application needs to execute sys.dm_fts_parser to discover which words SQL Server Full-Text Search used to match documents, for the purpose of syntax highlighting. That function can only be executed if the caller is in the sysadmin server role.

Permissions

Requires membership in the sysadmin fixed server role and access rights to the specified stoplist.

http://msdn.microsoft.com/en-us/library/cc280463%28v=sql.105%29.aspx

Since it would be most unwise to grant that server role to the web server's user, I attempted to create a stored procedure that runs as a separate user that is in that role.

create procedure usp_fts_parser
    @query nvarchar(max)
with execute as owner
as
select display_term from sys.dm_fts_parser(@query, 1033, 0, 0);
go

I then created a user login_sign_fts_parser, made it the owner of the stored procedure

alter authorization on usp_fts_parser to login_sign_fts_parser

and attempt to allow the web server's DB user rights to impersonate that user:

GRANT IMPERSONATE ON USER::[IIS APPPOOL\Strategic Window] TO login_sign_fts_parser

When I attempt to execute usp_fts_parser, I receive the error:

Cannot execute as the database principal because the principal "login_sign_fts_parser" does not exist, this type of principal cannot be impersonated, or you do not have permission.

Why do I get this error? How can I accomplish my goal of executing sys.dm_fts_parser without granting undue permissions to the web server's DB user?
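One approach that can grant a server-level permission to a single procedure without elevating the caller is the standard module-signing pattern (the names, password, and file path below are placeholders): sign the procedure with a certificate, copy the certificate's public key to master, and put a login mapped to it in sysadmin:

USE MyAppDb;
CREATE CERTIFICATE fts_parser_cert
    ENCRYPTION BY PASSWORD = 'Str0ngP@ss!'
    WITH SUBJECT = 'Sign usp_fts_parser';
ADD SIGNATURE TO usp_fts_parser
    BY CERTIFICATE fts_parser_cert WITH PASSWORD = 'Str0ngP@ss!';

-- Copy the certificate (public key only) into master and map a login to it.
BACKUP CERTIFICATE fts_parser_cert TO FILE = 'C:\temp\fts_parser.cer';
USE master;
CREATE CERTIFICATE fts_parser_cert FROM FILE = 'C:\temp\fts_parser.cer';
CREATE LOGIN fts_parser_login FROM CERTIFICATE fts_parser_cert;
EXEC sp_addsrvrolemember 'fts_parser_login', 'sysadmin';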

Cannot create perlplu function

Posted: 30 Mar 2013 05:01 PM PDT

Running PostgreSQL 9.1.8 on Xubuntu 12.04, installed from the repos.

From the shell, I have executed:

createlang plperl db_name;
createlang plperlu db_name;

As the superuser running psql, I have executed:

GRANT ALL ON LANGUAGE plperl TO account_name;
GRANT ALL ON LANGUAGE plperlu TO account_name;

The pg_language table reveals:

select lanname,lanpltrusted from pg_language where lanname like 'plperl%';

"plperl";t
"plperlu";t

When I create the following function:

CREATE OR REPLACE FUNCTION get_hostname()
  RETURNS text AS
$BODY$
  use Sys::Hostname;
  return hostname;
$BODY$
  LANGUAGE plperlu IMMUTABLE
  COST 1;
ALTER FUNCTION get_hostname()
  OWNER TO account_name;

I receive the following error:

ERROR:  Unable to load Sys/Hostname.pm into plperl at line 2.
BEGIN failed--compilation aborted at line 2.
CONTEXT:  compilation of PL/Perl function "get_hostname"

Yet the following works:

CREATE OR REPLACE FUNCTION get_hostname()
  RETURNS text AS
$BODY$
  return '127.0.0.1';
$BODY$
  LANGUAGE plperlu IMMUTABLE
  COST 1;
ALTER FUNCTION get_hostname()
  OWNER TO account_name;

The following Perl script works as expected from the shell:

use Sys::Hostname;
print hostname;

I tried to run the function as an anonymous block:

DO $$
  use Sys::Hostname;
  print hostname;
$$ LANGUAGE plperlu;

This returned the same error as before, with this additional information:

ERROR: Unable to load Sys/Hostname.pm into plperl at line 3.
BEGIN failed--compilation aborted at line 3.
SQL state: 42601
Context: PL/Perl anonymous code block

From the documentation, error 42601 is a syntax error.

I see no syntax error.

Furthermore, the function loaded fine with the superuser account when I imported the database:

psql -d db_name -U account_name -W -f db-dump.sql > import.log 2> error.log  

I tried to simplify the code to no avail:

CREATE OR REPLACE FUNCTION get_hostname() RETURNS text AS  $$    use Sys::Hostname;    return hostname;  $$  LANGUAGE plperlu;  

Same error as before.

What I don't understand is why, according to the error message, PostgreSQL is trying to load the code into plperl instead of plperlu.

Any ideas?

Update #1

Note that the postgres user is a Superuser:

                         List of roles
 Role name |                   Attributes                   | Member of
-----------+------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication | {}
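One thing worth checking in the catalogs, since the error says plperl even though the definition says plperlu: which language the function is actually registered under (a standard pg_proc/pg_language join):

SELECT p.proname, l.lanname
FROM pg_proc AS p
JOIN pg_language AS l ON l.oid = p.prolang
WHERE p.proname = 'get_hostname';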

How do I design a database for a resource scheduling and allocation application?

Posted: 30 Mar 2013 02:39 PM PDT

I have two entities: Resource and Project. A Project runs for a certain amount of time (in calendar days), and I need to allocate several Resources to it (allocation interval - 1 work day, not an hourly basis).

Now if I create two tables (resource and project), how do I achieve this? How should the tables be designed? Do I create a new entry for every day and resource?

Any help is much appreciated!
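A common shape for this, as a sketch (all table and column names here are mine, not from the question): a third table holding one row per resource per project per work day, so double-booking is blocked by the primary key and per-day reporting is a simple query:

CREATE TABLE resource (
    resource_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

CREATE TABLE project (
    project_id INT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    start_date DATE NOT NULL,
    end_date   DATE NOT NULL
);

-- One row per resource per allocated work day; the primary key prevents
-- allocating the same resource twice on the same day.
CREATE TABLE allocation (
    resource_id INT NOT NULL REFERENCES resource (resource_id),
    project_id  INT NOT NULL REFERENCES project (project_id),
    work_day    DATE NOT NULL,
    PRIMARY KEY (resource_id, work_day)
);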

Retrieving Alternating Values

Posted: 30 Mar 2013 07:39 PM PDT

I have a column with two values in my database, and I want to retrieve the two values alternately. For example, I have:

name   itemcode
----------
 A       1
 B       2
 C       2
 D       2
 E       1
 F       1

I want to display them as:

name itemcode
----------
A      1
B      2
E      1
C      2
F      1
D      2
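If the database supports window functions, one sketch that produces exactly this interleaving (my_table is a placeholder name): number the rows within each itemcode, then sort by that row number first and itemcode second:

SELECT name, itemcode
FROM (
    SELECT name, itemcode,
           ROW_NUMBER() OVER (PARTITION BY itemcode ORDER BY name) AS rn
    FROM my_table
) AS t
ORDER BY rn, itemcode;   -- yields A 1, B 2, E 1, C 2, F 1, D 2 for the sample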

Getting next and previous document

Posted: 30 Mar 2013 09:53 AM PDT

In a Mongo collection I have a list of words (millions of words).

{word:'a'}
{word:'b'}
{word:'x'}
{word:'y'}
{word:'z'}

I need to get the lexical neighbours of a word. I am looking for the most efficient method.
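With an index on word, two range queries answer this with a single index seek each (standard mongo shell syntax; the collection name words is assumed):

db.words.ensureIndex({word: 1});

// Next word after 'x':
db.words.find({word: {$gt: 'x'}}).sort({word: 1}).limit(1);

// Previous word before 'x':
db.words.find({word: {$lt: 'x'}}).sort({word: -1}).limit(1);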

Database design - do I need another table?

Posted: 30 Mar 2013 06:18 PM PDT

I am trying to make a database that follows a form the company uses. When a client walks in, the members of staff have to fill in a form and the data is recorded. The form on paper is 10 pages long. The first time a client is seen, the entire form is filled in and the client gets a clientID.

I have split the form into sections that make sense, like accommodation and employment. I know I can link these tables together with the client's ID. That's the simple part.

Now when a client returns, the form comes out again, but this time only certain parts are filled in, whatever the client's needs are. The records for most parts don't need updating, but a new record needs inserting. What would be the best way around this?

So at the moment I have, for example, a table called client with an id and name, another table called accommodation with clientid and address, and another table called employment with clientid and employer.

But how do I go about it when a client comes in to let us know he has a new employer? I can't update the current one, as that is needed, but I would need to add a new record for the client.

Would this mean I would have to add a lookup table for all my current tables?
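One common pattern, sketched with assumed (MySQL-flavoured) names: treat the child tables as history, appending a dated row per visit instead of updating in place; the newest row per client is then the current answer, and no extra lookup tables are required for that alone:

CREATE TABLE employment (
    employment_id INT PRIMARY KEY AUTO_INCREMENT,
    client_id     INT NOT NULL,
    employer      VARCHAR(100) NOT NULL,
    recorded_on   DATE NOT NULL
);

-- Current employer per client = the most recently recorded row.
SELECT e.client_id, e.employer
FROM employment e
JOIN (SELECT client_id, MAX(recorded_on) AS latest
      FROM employment
      GROUP BY client_id) m
  ON m.client_id = e.client_id AND m.latest = e.recorded_on;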

mysqlworkbench after cmd mysql startup went wrong

Posted: 30 Mar 2013 05:18 PM PDT

I have problems with my MySQL server. I have installed the newest MySQL server, but I'm having problems after starting the server from the command line on localhost. Before that I was running XAMPP. The problem is that when it is started from the command line and not from XAMPP, MySQL Workbench shows me tables from another datadir when connecting to the database. I really don't know how to fix my problem.
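A quick check that usually pinpoints this kind of mix-up: ask the running server which data directory, socket, and port it is actually using, and compare against the XAMPP configuration:

SHOW VARIABLES LIKE 'datadir';
SHOW VARIABLES LIKE 'socket';
SHOW VARIABLES LIKE 'port';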

Insert from one row to another using cases

Posted: 30 Mar 2013 01:38 PM PDT

My original table (in use since 2005) was:

CREATE TABLE `request` (
    `msg` VARCHAR(150) NOT NULL,
    `id` VARCHAR(20) NOT NULL,
    `ctg` VARCHAR(10) NOT NULL DEFAULT 'misc',
    `date` VARCHAR(25) NULL DEFAULT NULL,
    `status` VARCHAR(10) NOT NULL DEFAULT 'empty',
    `fid` BIGINT(20) NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`fid`),
    UNIQUE INDEX `msg_id` (`msg`, `id`),
    UNIQUE INDEX `msg_ctg` (`msg`, `ctg`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM
CHECKSUM=1
AUTO_INCREMENT=18491;

Since it was neither a well-managed table nor one I liked, I created a newer table:

CREATE TABLE `requests` (
    `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    `ctg` VARCHAR(15) NOT NULL,
    `msg` VARCHAR(250) NOT NULL,
    `nick` VARCHAR(32) NOT NULL,
    `filled` ENUM('Y','N') NOT NULL DEFAULT 'N',
    `dated` DATETIME NOT NULL,
    `filldate` DATETIME NULL DEFAULT NULL,
    `filledby` VARCHAR(32) NULL,
    PRIMARY KEY (`id`),
    UNIQUE INDEX `nick_msg` (`nick`, `msg`),
    UNIQUE INDEX `ctg_msg` (`ctg`, `msg`)
)
COMMENT='Requests from users in any of the categories.'
COLLATE='utf8_general_ci'
ENGINE=MyISAM;

What I want to do now is:

SELECT data from first table and insert as follows:

  • msg to msg, id to nick, ctg to ctg and date to dated are copied as is.
  • status in the original table is either empty or filled. I want to use a CASE clause during insertion so that Y and N are inserted accordingly.

My query:

INSERT INTO `requests`( `id`, `ctg`, `msg`, `nick`, `dated`, `filled` )
SELECT `fid`,
    `ctg`,
    `msg`,
    `id`,
    `date`,
    CASE
        WHEN `status` = 'empty' THEN 'N'
        WHEN `status` = 'filled' THEN 'Y'
    END CASE
FROM `request`

My question is

Is the query above correct? Also, one of the sample strings from the request table with 'filled' status looks like:

.hack//SIGN (filled by hjpotter92 in 02/08/12 09:47:07 )  

The format is the same, filled by <<nick>> in <<mm/dd/yy H:I:S>>, for all filled entries. Can I extract the date from there and use it with STR_TO_DATE() to store it in the filldate field too? I know it'd need me to create a procedure/function.

For now, I am thinking of using PHP for the task, but I wanted a clean MySQL solution if possible.
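Two notes on the sketch below (my reading; worth double-checking): a CASE expression ends with plain END, not END CASE (END CASE belongs to the procedural CASE statement inside stored programs), and the fill date can be pulled out inline with SUBSTRING_INDEX plus STR_TO_DATE, so no stored function is needed:

INSERT INTO `requests` (`id`, `ctg`, `msg`, `nick`, `dated`, `filled`, `filldate`)
SELECT `fid`,
       `ctg`,
       `msg`,
       `id`,
       `date`,
       CASE
           WHEN `status` = 'empty'  THEN 'N'
           WHEN `status` = 'filled' THEN 'Y'
       END,                                   -- expression form: END, not END CASE
       CASE
           WHEN `status` = 'filled' THEN
               -- '.hack//SIGN (filled by hjpotter92 in 02/08/12 09:47:07 )'
               -- -> text after the last ' in ', with the trailing ' )' removed.
               STR_TO_DATE(
                   SUBSTRING_INDEX(SUBSTRING_INDEX(`msg`, ' in ', -1), ' )', 1),
                   '%m/%d/%y %H:%i:%s')
           ELSE NULL
       END
FROM `request`;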

Understanding MySQL Integer Types?

Posted: 30 Mar 2013 03:39 PM PDT

Could someone please explain to me how MySQL integer types and lengths work, as it is somewhat confusing?

Which different integer types and lengths should I use for each of the following ranges to keep them as efficient as possible?

0 - 1
0 - 700,000
0 - 9,999,999,999

What is the difference between TINYINT(5), SMALLINT(5) and INT(5), are they the same?
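For reference, these are documented MySQL ranges rather than guesses: the number in parentheses is only a display width and changes neither storage nor range, so TINYINT(5), SMALLINT(5) and INT(5) differ exactly as TINYINT, SMALLINT and INT do. Unsigned maxima are 255 (TINYINT), 65,535 (SMALLINT), 16,777,215 (MEDIUMINT), 4,294,967,295 (INT), and roughly 1.8 x 10^19 (BIGINT), which maps the three ranges as sketched:

CREATE TABLE int_examples (
    flag      TINYINT UNSIGNED,    -- 0-1 fits the smallest type (1 byte)
    counter   MEDIUMINT UNSIGNED,  -- 0-700,000 fits in 3 bytes (max 16,777,215)
    big_value BIGINT UNSIGNED      -- 0-9,999,999,999 exceeds INT's 4,294,967,295
);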

mysql trigger on update with insert statement

Posted: 30 Mar 2013 10:16 AM PDT

I am trying to create a simple trigger that handles update operations. The trigger should check, before an update, whether the record being updated exists; if not, it should be created.

DELIMITER ;;
CREATE TRIGGER trig
BEFORE UPDATE ON table
FOR EACH ROW
BEGIN
IF old is null then
INSERT INTO table(table.col1, table.col2) VALUES ('new.table.col1','new.table.col2');
END IF;
END;
;;
DELIMITER ;

Why is it not adding a record when the update targets a non-existent record?
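Two documented behaviours explain it, as far as I can tell: an UPDATE that matches no rows fires a FOR EACH ROW trigger zero times (so OLD is never NULL inside the body), and MySQL forbids a trigger from inserting into the very table it is defined on. The usual substitute is an upsert statement instead of a trigger (placeholder names; requires a UNIQUE or PRIMARY KEY on col1):

INSERT INTO my_table (col1, col2)
VALUES ('key-value', 'data-value')
ON DUPLICATE KEY UPDATE col2 = VALUES(col2);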

How to Shrink Oracle SYSTEM Tablespace?

Posted: 30 Mar 2013 02:32 PM PDT

The SYSTEM tablespace grew out of control because of the SYS.AUD$ table.

SYS.AUD$ has been truncated, but the datafile is still very big: 30 GB.

Resize doesn't work because:

file contains used data beyond requested RESIZE value

What should I do here?
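The resize fails because some segment still has an extent near the end of the file. A query like this (standard dba_extents view; adjust file_id and the block threshold) shows which segments sit beyond the target size and would have to be moved or rebuilt before the file can shrink:

SELECT owner, segment_name, segment_type,
       MAX(block_id + blocks) AS highest_block
FROM dba_extents
WHERE file_id = 1
GROUP BY owner, segment_name, segment_type
HAVING MAX(block_id + blocks) > 262144   -- 262144 blocks = 2 GB at 8 KB/block
ORDER BY highest_block DESC;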

Oracle trigger to update columns daily

Posted: 30 Mar 2013 11:14 AM PDT

I'm trying to create a trigger that would run twice daily and do updates based on a specific condition.

To be run on Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit.

We have Table A and Table B with columns

  1. Primary Key
  2. Date 1
  3. Date 2
  4. Date 3
  5. Status

For a specified set of primary keys in Table A, until Table A.Status == Completed, I want the date fields to be copied over from Table B to Table A twice daily. The same primary keys will already be set up in Table B. The number of records to update will decrease over time as more records in the primary key list reach completed status.

The purpose is to maintain certain static values in the date fields till status == completed. We are unable to control an external system that keeps overwriting them. As a stopgap, we want to rewrite them at frequent intervals to keep the notion of maintaining static values. Once all PKs in our list reach completed status, we want to turn off the trigger.
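A note plus sketch: triggers fire on DML, not on a clock, so what's described is really a scheduler job. DBMS_SCHEDULER is standard in 10g; the object names below are placeholders for the real ones:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'COPY_DATES_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          UPDATE table_a a
                             SET (a.date1, a.date2, a.date3) =
                                 (SELECT b.date1, b.date2, b.date3
                                    FROM table_b b
                                   WHERE b.pk = a.pk)
                           WHERE a.status <> ''Completed''
                             AND a.pk IN (SELECT pk FROM pk_watch_list);
                          COMMIT;
                        END;',
    repeat_interval => 'FREQ=DAILY; BYHOUR=6,18; BYMINUTE=0',  -- twice a day
    enabled         => TRUE);
END;
/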

Run Multiple Postgres Server

Posted: 30 Mar 2013 02:58 PM PDT

Hello, I am new to Postgres. I want to run Postgres v8.3 and v9.1 at the same time, but don't know how.

Can anyone show me how to do it simply?
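The short version (standard pg_ctl usage; the paths below are placeholders for wherever each version is installed): give each major version its own data directory and its own port, then start each version's binary against its own cluster:

# 8.3 cluster on the default port 5432.
/usr/lib/postgresql/8.3/bin/pg_ctl -D /var/lib/postgresql/8.3/main start

# 9.1 cluster on a different port so the two don't collide.
/usr/lib/postgresql/9.1/bin/pg_ctl -D /var/lib/postgresql/9.1/main -o "-p 5433" start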

How do I fix the definer problem The user specified as a definer ('fred'@'192.168.0.%') does not exist

Posted: 30 Mar 2013 05:04 PM PDT

I dumped an existing database from my web site into a new MySQL setup on my laptop. Everything works except pages that use a view, where the error message above appears. There is only one view in the system. I assumed this was caused by the user not being in the system, but looking in MySQL's user table, the user is listed correctly with the password.

The dump was from an earlier version of MySQL (5.0) into the latest one that comes with XAMPP 5.527. At first I thought the latest version might be incompatible with the existing one.

Looking on this forum, there are some existing answers covering this problem that refer to making changes to the database and then dumping it again.

However, my online database works fine, so I dare not make changes to it in case I kill it. That is assuming I understood what changes I should make and how to make them. I have phpMyAdmin. Can I modify the existing dumped database, or do I have to change the online one and then export it again?

And exactly what do I need to do to modify it? The original database was a Microsoft Access one, converted to MySQL, and it has worked without error since 2003.
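A view stores its definer in its own definition, so the usual local-only fix (no change to the live site) is either to recreate the view on the laptop under an account that exists there, or to create the missing account; a sketch with placeholder names:

-- Option 1: recreate the view locally under the current account
-- (take the SELECT body from SHOW CREATE VIEW first).
CREATE OR REPLACE DEFINER = CURRENT_USER VIEW my_view AS
SELECT 1 AS placeholder;   -- replace with the view's original SELECT

-- Option 2: create the definer named in the error message.
CREATE USER 'fred'@'192.168.0.%' IDENTIFIED BY 'some-password';
GRANT ALL ON mydb.* TO 'fred'@'192.168.0.%';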

SQL Server update query on linked server causing remote scan

Posted: 30 Mar 2013 08:01 PM PDT

I have a SQL Server 2012 setup as a linked server on a SQL Server 2008 server.

The following query executes in less than 1 second:

SELECT kg.IdGarment
FROM Products p
INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
WHERE log.ActionType = 'I'

t_ProcessIT_Garment contains 37,000 rows, the query returns two records, and the IdGarment column is the primary key. No problem here.

However, if I run this query to do a remote update, it takes 24 seconds, and 2 rows are affected:

    UPDATE [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment
    SET IdGarment = IdGarment
    FROM Products p
    INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
    INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
    INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
    INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
    INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
    WHERE log.ActionType = 'I';

I used SET IdGarment = IdGarment to keep things simple. The execution plan shows a Remote Query operator for the first query, but a Remote Scan, carrying 100% of the cost, for the second.

The table joins are identical in both queries, so why does the second one use a Remote Scan, and how do I fix it?
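
One pattern worth trying, sketched under the assumption that IdGarment is usable as a key on the remote side: stage the matching keys in a local temp table, then issue an update the remote server can drive from its primary key instead of scanning the whole table.

    -- Sketch: collect the matching keys locally first...
    SELECT kg.IdGarment
    INTO #ids
    FROM Products p
    INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
    INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
    INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
    INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
    INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
    WHERE log.ActionType = 'I';

    -- ...then update only those rows, joining on the remote primary key.
    UPDATE kg
    SET IdGarment = kg.IdGarment
    FROM [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg
    INNER JOIN #ids i ON i.IdGarment = kg.IdGarment;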

MySQL - How to optimize retrieval time in a table

Posted: 30 Mar 2013 06:04 PM PDT

I have a query like this on a single table with 200 million records. I am using B-tree indexes on the table.

    mysql> select COUNT(DISTINCT id) from [tablename] where [columname] >= 3;
    +--------------------+
    | COUNT(DISTINCT id) |
    +--------------------+
    |            8242063 |
    +--------------------+
    1 row in set (3 min 23.53 sec)

I am not satisfied with this timing. How can I reduce the query time to under 30 seconds? Any suggestions would be very helpful!

Thank you!
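
One thing worth trying, sketched with the question's own placeholder names: a composite index that covers both the range filter and the DISTINCT, so the count can be answered from index entries alone.

    -- Sketch; tablename and columname stand in for the real names.
    ALTER TABLE tablename ADD INDEX idx_col_id (columname, id);

    -- With the covering index in place, the scan touches the index only:
    SELECT COUNT(DISTINCT id) FROM tablename WHERE columname >= 3;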

MySQL specific database configuration file

Posted: 30 Mar 2013 01:58 PM PDT

In MySQL's configuration file I've globally disabled autocommit, like so:

    [mysqld]
    autocommit=0

I need to turn autocommit on for one specific Ruby on Rails database, though. It could be scoped to the user or to the database itself (it doesn't matter). I'm thinking it would look something like this:

    [mysqld]
    autocommit=0
    execute_sql="Custom SQL to set autocommit for a database"
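
As far as I know there is no per-database autocommit setting in my.cnf; autocommit is a session variable. A sketch of two workarounds, with the caveat that init_connect applies to every non-SUPER account, so it cannot be scoped to a single database:

    -- Sketch: runs for all non-SUPER users at connect time (broad, but works
    -- if the Rails user is the only account that needs it).
    SET GLOBAL init_connect = 'SET autocommit = 1';

    -- Narrower alternative: have the application issue this per session.
    SET autocommit = 1;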

SQL Server 2005 Replication

Posted: 30 Mar 2013 04:58 PM PDT

I am in the process of setting up replication between two remote servers: server 1 is the Distributor and Publisher, and server 2 is the Subscriber.

    server 1
    windows 2003 server
    192.168.10.1 connected by vpn
    SQL Server 2005
    domain1.local

    server 2
    windows 2003 server
    192.168.10.6 connected by vpn
    SQL Server 2005
    domain2.local

When I set up replication, everything looked fine until I checked the sync status, which said:

    The Agent could not be started.
    An exception occurred while executing a Transact-SQL statement or batch.
    SQLServerAgent error: request to run job server1-username (blah blah blah)
    from user sa refused because the job is already running from a request by
    user sa. Changed database context to ... (error 22022).

I have cleared the jobs in SQL Server Agent and restarted the service.

Could this have something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SQL Server Management Studio, just not set up replication.

Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.
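
Since error 22022 usually just means the Agent job is still running, here is a sketch for checking and stopping it from msdb; the job name below is a placeholder for the one shown in the error message.

    -- Sketch: list currently executing Agent jobs...
    USE msdb;
    EXEC dbo.sp_help_job @enabled = 1, @execution_status = 1;

    -- ...and stop the stuck job before retrying (name is a placeholder).
    EXEC dbo.sp_stop_job @job_name = N'server1-username-job';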

Mysqldump tables excluding some fields

Posted: 30 Mar 2013 12:58 PM PDT

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO and TATA. I just want to mysqldump some fields of the USER table, excluding fields like mail, ip_login, etc.

How can I do this?
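
mysqldump itself has no per-column filter, so here are two workaround sketches; the kept column names (id, login, created_at) and the output path are invented for illustration.

    -- Sketch A: export only the wanted columns to a CSV file.
    SELECT id, login, created_at
    INTO OUTFILE '/tmp/user_partial.csv'
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    FROM tests.USER;

    -- Sketch B: copy the wanted columns into a scratch table, then run
    -- mysqldump on that table as usual.
    CREATE TABLE tests.USER_export AS
    SELECT id, login, created_at FROM tests.USER;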

How to do something like UPDATE DELAYED in MySQL

Posted: 30 Mar 2013 02:58 PM PDT

I have an averages table that should keep track of an average value over time. I don't want a row for each value, just a single row that is continuously updated. What I've come up with is this:

    set @value = 4;
    set @name = "myAverageValue";
    UPDATE `timing` SET
      `max` = greatest(`max`, @value),
      `average` = `average` + ((@value - `average`) / (`count` + 1)),
      `count` = `count` + 1
    WHERE `name` = @name

Many clients may do this at the same time, and I don't want any locking issues. I don't care what order the updates run in, since the end result is the same either way. I just want a query that hands the UPDATE to the database to be processed eventually, similar to INSERT DELAYED. Does UPDATE LOW_PRIORITY do this, or does it make the client wait until the table is available?
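
For what it's worth, LOW_PRIORITY only changes queueing order; the client still waits for the statement to finish. A sketch of a fire-and-forget alternative, assuming a hypothetical MyISAM staging table timing_queue that a periodic job folds into `timing`:

    -- Sketch: clients enqueue raw samples without waiting. INSERT DELAYED
    -- requires MyISAM/MEMORY/ARCHIVE, and timing_queue is a made-up table.
    INSERT DELAYED INTO timing_queue (name, value)
    VALUES ('myAverageValue', 4);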

SQL Server 2012 database backup successfully report but no backup file

Posted: 30 Mar 2013 07:58 PM PDT


I have created a maintenance plan in SQL Server 2012 that is supposed to back up the database every day. But there is no backup file in the folder where the backups should be stored, even though the SQL Server log history shows the maintenance plan completing successfully.

What is the problem?

I am using SQL Server 2012. The operating system is Windows Server 2008 R2.

Thank you for paying attention.
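
One quick check, sketched against msdb's backup history, which records where SQL Server believes each backup was written:

    -- Sketch: the most recent backups and their target files.
    SELECT TOP (10)
           bs.database_name,
           bs.backup_finish_date,
           bmf.physical_device_name
    FROM msdb.dbo.backupset bs
    INNER JOIN msdb.dbo.backupmediafamily bmf
            ON bmf.media_set_id = bs.media_set_id
    ORDER BY bs.backup_finish_date DESC;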


Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambiguous sysindexes error

Posted: 30 Mar 2013 06:58 AM PDT

We are using a Sybase SQL Anywhere 12 database.

The database contains indexes that are unique but shouldn't be.

Therefore I am looking for a quick way to list all tables with unique primary keys.

I tried

SELECT z.name FROM sysobjects z JOIN sysindexes i ON (z.id = i.id) WHERE z.type = 'U'  

The result was an error message (translated from German): Statement could not be executed. Table name 'sysindexes' is ambiguous. SQLCODE=-852, ODBC 3 state="42S13", line 1, column 1.

I found this query on the internet:

    select 'Table name' = object_name(id),
           'column_name' = index_col(object_name(id), indid, 1),
           'index_description' = convert(varchar(210),
                 case when (status & 16) <> 0 then 'clustered' else 'nonclustered' end
               + case when (status & 1) <> 0 then ', ' + 'ignore duplicate keys' else '' end
               + case when (status & 2) <> 0 then ', ' + 'unique' else '' end
               + case when (status & 4) <> 0 then ', ' + 'ignore duplicate rows' else '' end
               + case when (status & 64) <> 0 then ', ' + 'statistics'
                      else case when (status & 32) <> 0 then ', ' + 'hypothetical' else '' end end
               + case when (status & 2048) <> 0 then ', ' + 'primary key' else '' end
               + case when (status & 4096) <> 0 then ', ' + 'unique key' else '' end
               + case when (status & 8388608) <> 0 then ', ' + 'auto create' else '' end
               + case when (status & 16777216) <> 0 then ', ' + 'stats no recompute' else '' end),
           'index_name' = name
    from sysindexes
    where (status & 64) = 0
    order by id

That looked like what I wanted, but it failed with the same ambiguous-sysindexes error.

What does 'ambiguous' mean in this context? Will or can it cause any errors in the future?

As a workaround I used Sybase Central (which, by the way, always opens on the first monitor rather than the one where it was closed; ugly behaviour), and found an 'Indices' item that showed what I was looking for.

But I would still like to know what a programmatic solution looks like.
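
If I had to guess, the ambiguity comes from SQL Anywhere shipping both a SYS catalog view and an ASE-compatibility view named sysindexes, so a sketch of the fix is simply to owner-qualify the names (this is the question's own query with the missing alias added):

    -- Sketch: owner-qualify the ASE-compatibility views so the name
    -- 'sysindexes' is no longer ambiguous.
    SELECT z.name
    FROM dbo.sysobjects z
    JOIN dbo.sysindexes i ON (z.id = i.id)
    WHERE z.type = 'U';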

Replication master binlog rotation when network is unavailable

Posted: 30 Mar 2013 03:58 PM PDT

I recently experienced an issue where the binlog file on the master rotated while network connectivity between the slave and master was unavailable.

After the network issue was resolved, the slave was unable to resume from the master using the previously recorded binlog file and position.

It was solved by purging the binlogs up to the last binlog on the master and pointing the slave to that last binlog, at the position it had been following in the previous one.

I am wondering: is this behaviour normal?
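
For reference, a sketch of the repointing step; the file name and position are placeholders to be taken from SHOW MASTER STATUS on the master.

    -- Sketch: the coordinates below are placeholders.
    STOP SLAVE;
    CHANGE MASTER TO
      MASTER_LOG_FILE = 'mysql-bin.000123',
      MASTER_LOG_POS = 4;
    START SLAVE;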

Issues installing SQL Server 2008 on Windows Server 2008 R2

Posted: 30 Mar 2013 09:58 AM PDT

I am trying to install SQL Server 2008 on a Windows Server 2008 R2 cluster. While installing the first node, I keep getting the following error:

TITLE: Microsoft SQL Server 2008 Setup
The following error has occurred:
Attempted to perform an unauthorized operation.
Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.

Has anyone seen this issue before? I have uninstalled and re-installed multiple times, with the same result each time.

Take individual MySQL database offline

Posted: 30 Mar 2013 06:58 PM PDT

MySQL does not have any SQL commands or internal mechanisms for

  • making an individual database unavailable / offline
  • moving an individual database

This being the case, how can you take an individual database offline?
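
Since there is no OFFLINE state, the usual stand-in is to revoke access to that one database; a sketch with illustrative names (GRANT the privileges back to "bring it online"):

    -- Sketch: make mydb unavailable to its application account.
    REVOKE ALL PRIVILEGES ON mydb.* FROM 'app_user'@'%';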

Choosing shard key and friendly URL Ids for my MongoDB

Posted: 30 Mar 2013 08:58 PM PDT

I have decided to use MongoDB as the database for a web application. However, I am having some difficulty getting started and hope you can help me with a few questions.

I am developing the application in ASP.NET with MongoDB as the back-end. I intend to start with a single server plus one replica, but I want to build it right so I won't have problems sharding the database in the future if I have to.

One of my biggest problems is choosing the right shard key and friendly URLs for my website.

I have a folders collection, with files as an embedded collection inside it. Each user can create any number of folders and add files to them. Each folder belongs to one user. I want to know the best shard key for this type of collection. Many queries will be per user, fetching a folder and its items by the folder's unique id. I will also use the id in the URL to get the folder and its files, e.g. mywebsite.com/folder/[the_id_of_the_folder]

I will also use paging in my application, so I need to query the data (also in a sharded environment) and get, for example, the last 10 records, or page 2 with 10 records, all ordered by the time they were last inserted/updated.

  • So my first question is: what is the best shard key to use for a single machine, considering that I will have to shard in the future?
  • Does the shard key have to be the primary unique id of the document in MongoDB?
  • How can I generate more user-friendly URLs? I would prefer a numerical value instead of a GUID (is there an option to convert it?)

Help would be very much appreciated, as I am stuck and cannot continue until I solve this.

Mongo connection failing with 'Transport endpoint is not connected'

Posted: 30 Mar 2013 08:58 AM PDT

I have a 2 server installation.

A web server with Apache and a DB server with MongoDB.

I am load testing it, and at ~300 requests per second I am getting this error:

PHP Fatal error: Uncaught exception 'MongoConnectionException' with message 'Transport endpoint is not connected'.

The only thing I notice is that right before the failure, there are a lot of connections on Mongo:

    insert  query  update  delete  getmore  command  flushes  mapped  vsize  res  faults  locked %  idx miss %  qr|qw  ar|aw  netIn  netOut  conn  time
         0    659       0       0        0        1        0    208m  1.28g  40m       0         0           0    0|0    0|0    62k    217k   486  03:57:20

Almost 500 connections here... but never more than that!

Mongo is 2.0.3. PHP is 5.3.x (the latest in the Debian install...)

Help!

Modify DEFINER on Many Views

Posted: 30 Mar 2013 05:06 PM PDT

I am having problems backing up my databases after an update, and have been poking around on my system trying to figure out why. One query I ran returned this result:

Got error: 1449: The user specified as a definer ('cittool'@'%') does not exist when using LOCK TABLES  

After some investigation, it appears the definer for these views is an old developer account that has been purged from the system. The databases and views with this problem are used very infrequently; most are kept around for archival purposes.

There are about 40 views with a definer that no longer exists. Is there an easy way to change the definer to a different account on everything at once? Is there a way to get mysqldump to simply dump all the views to a file, so I could edit that file and recreate the views?
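
A sketch of the bulk approach: generate one ALTER statement per affected view from information_schema, then review and run the generated script. The replacement account is illustrative, and VIEW_DEFINITION may need touching up before it runs cleanly.

    -- Sketch: emit an ALTER ... DEFINER statement for every view still
    -- owned by the purged account.
    SELECT CONCAT('ALTER DEFINER = ''root''@''localhost'' VIEW ',
                  TABLE_SCHEMA, '.', TABLE_NAME, ' AS ',
                  VIEW_DEFINITION, ';') AS fix_stmt
    FROM information_schema.VIEWS
    WHERE DEFINER = 'cittool@%';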

SQL: SELECT All columns except some

Posted: 30 Mar 2013 05:09 PM PDT

Is there a way to SELECT all columns in a table except specific ones? It would be very convenient for selecting all the non-blob or non-geometry columns from a table.

Something like:

SELECT * -the_geom FROM segments;  
  • I once heard that this functionality was deliberately excluded from the SQL standard, because adding columns to the table would alter the query results. Is this true? Is the argument valid?
  • Is there a workaround, especially in PostgreSQL? (See the sketch below.)
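
A sketch of the usual PostgreSQL workaround: build the column list from the catalog and splice it into the query (table and excluded column taken from the example above).

    -- Sketch: produce "col1, col2, ..." for every column except the_geom.
    SELECT string_agg(quote_ident(column_name), ', ' ORDER BY ordinal_position)
    FROM information_schema.columns
    WHERE table_name = 'segments'
      AND column_name <> 'the_geom';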
