Wednesday, June 5, 2013

[how to] In the Vertica database, what is a namespace?


Posted: 05 Jun 2013 09:09 PM PDT

In the Vertica database, what does the term "namespace" mean?

I have reviewed the entire Vertica documentation and cannot find what this means.

Why does `pg_lsclusters` not list my Postgres cluster?

Posted: 05 Jun 2013 09:11 PM PDT

  1. I installed Postgres 9.2.4 with apt-get (using the PPA).
  2. I ran pg_dropcluster --stop 9.2 main to remove the default database cluster, because I intended to create a new cluster on a separate SSD volume (a Rackspace block storage volume) that was just mounted.
  3. I created that new cluster with its data on the SSD volume and started it. ps shows all the normal Postgres processes running for the new cluster, and I can open a Postgres shell and run SQL; that is, it's running fine. Yet when I run pg_lsclusters, no clusters are listed.

The main cluster listed just fine before I deleted it. Why won't my new cluster show up, even when it's running? I've even rebooted the server (just in case).
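One likely explanation: pg_lsclusters only reports clusters registered with the Debian/Ubuntu wrapper scripts, i.e. created via pg_createcluster, which writes config under /etc/postgresql. A cluster created with plain initdb runs fine but is invisible to the wrappers. A sketch of the wrapper-based approach (the data directory path below is hypothetical):

```shell
# Stop the hand-made cluster first, then create a registered cluster whose
# data lives on the SSD volume; pg_createcluster records it under
# /etc/postgresql/9.2/main so the pg_* wrapper tools can see it.
sudo pg_createcluster --datadir=/mnt/ssd/postgresql/9.2/main 9.2 main
sudo pg_ctlcluster 9.2 main start
pg_lsclusters   # the cluster should now be listed
```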

Need to replace only first occurrence of string

Posted: 05 Jun 2013 08:47 PM PDT

I hope I worded the question right. Here are the specifics:

This is regarding a MySQL database. I've inherited several hundred posts with custom fields and a unique excerpt. At this point I think I can solve the problem by inserting the <!--More--> tag into each post at a specific point. I need to do a search and replace with the following parameters:

  1. Search each post_content field for the first occurrence of </em>.
  2. Insert the <!--More--> tag right after that </em>.

Unfortunately, each post has multiple <em> tag pairs, and if a post has more than one <!--More--> tag in it, WordPress just ignores them. So I need to do some sort of search and replace, but I have had no success after two days of trying every code snippet, SQL query and plugin I could find.
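MySQL's REPLACE() replaces every occurrence, but LOCATE() plus SUBSTRING() can splice the tag in after only the first </em>. A sketch, assuming a stock WordPress wp_posts table (adjust the table name and WHERE clause to your install), best tested on a copy of the database first:

```sql
UPDATE wp_posts
SET post_content = CONCAT(
        -- everything up to and including the first </em> (5 characters)
        SUBSTRING(post_content, 1, LOCATE('</em>', post_content) + 4),
        '<!--More-->',
        -- the remainder of the post
        SUBSTRING(post_content, LOCATE('</em>', post_content) + 5))
WHERE post_content LIKE '%</em>%'
  AND post_content NOT LIKE '%<!--More-->%';  -- skip posts that already have the tag
```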

Keeping version history of functions in PostgreSQL

Posted: 05 Jun 2013 07:45 PM PDT

For many of my tables, I've added auditing triggers (based on this wiki page). It works very well and has helped me several times figure out who changed what in the system. We have a Python/Django application that sits on top of the data, and that code is tracked in Git. However, there seems to be one area of our system where changes are not tracked very well: the functions, triggers, etc. in PostgreSQL. I wish there were a way to add a similar audit capability for the schema as I have for the data itself.

How do DBAs track these changes? Note: I'm in a position where more than one person has sufficient privileges to "CREATE OR REPLACE" a function, so I can't necessarily count on everyone to write a script and check it into Git, and I certainly can't force them. If possible, it needs to be automatic.

I've thought about writing a Python script that dumps each function definition to a file and then programmatically commits the changes to a Git repo.

Another option would be to add some tables to my audit schema, query the catalogs each night looking for changes, and write them to a table. That would certainly work, but it's not quite as nice as being able to do a diff/blame via Git.

This may not be the greatest question because I'm not exactly sure what I'm looking for and there may not be an exact "right" answer, but I would like to know what people do. I've done some "googling" on the topic, but I'm not finding a lot (maybe I'm just using the wrong terms).

Finally, I'm currently using PostgreSQL 9.1.
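A sketch of the nightly-snapshot idea, built entirely on the system catalogs (the audit schema and table names here are made up). pg_get_functiondef() reconstructs the full CREATE OR REPLACE FUNCTION source, so storing it per run lets you diff versions later; note that it errors on aggregates, which are filtered out below:

```sql
CREATE TABLE audit.function_history (
    captured_at   timestamptz NOT NULL DEFAULT now(),
    schema_name   name NOT NULL,
    function_name name NOT NULL,
    definition    text NOT NULL
);

-- Run nightly (e.g. from cron); records a definition only when it
-- differs from the most recent capture.
INSERT INTO audit.function_history (schema_name, function_name, definition)
SELECT n.nspname, p.proname, pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT p.proisagg                    -- pg_get_functiondef rejects aggregates
  AND NOT EXISTS (
      SELECT 1
      FROM audit.function_history h
      WHERE h.schema_name   = n.nspname
        AND h.function_name = p.proname
        AND h.definition    = pg_get_functiondef(p.oid));
```

Dumping the same output to files and committing them from a cron job would give you the git diff/blame workflow on top of this.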

Insert into table from a query to a linked database hangs

Posted: 05 Jun 2013 08:50 PM PDT

I have a schema with some database links to other schemas that we use to pull data from. We pull the data into a staging table, massage it, and then do some merges into our schema/database.

The data in the linked schema is a little much to pull into test systems, so I want to only get the first 10000 records, which is easy enough.

    select *
    from (
        SELECT TRIM(r.R_ID), 1, TRIM(r.R_CODE), TRIM(r.R_MISC4), TRIM(rc.DESCR)
        FROM sradmin.RACEWAY@sr25 r
        LEFT JOIN sradmin.RWAYCODE@sr25 rc ON r.R_CODE = rc.R_CODE
    )
    WHERE ROWNUM <= 10000;

This returns very quickly as in under 2 seconds.

However, when I try to insert this into my staging table, SQL Developer hangs. Not a Windows hang; the Script Output window just shows "ScriptRunner Task" with the bar moving back and forth to show that it's doing something.

    INSERT INTO STG_TRANSIT (TRANSIT_ID, SHIP, TRANSIT_TYPE, COMPARTMENT_ID, DESCRIPTION)
    select *
    from (
        SELECT TRIM(r.R_ID), 1, TRIM(r.R_CODE), TRIM(r.R_MISC4), TRIM(rc.DESCR)
        FROM sradmin.RACEWAY@sr25 r
        LEFT JOIN sradmin.RWAYCODE@sr25 rc ON r.R_CODE = rc.R_CODE
    )
    WHERE ROWNUM <= 10000;

When it does this I can see that there are DW locks, but nothing else is using the linked databases that I'm trying to query.

Any help would be greatly appreciated as I am very much a novice with Oracle.
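One workaround worth trying (a sketch, not a verified fix): materialize the remote rows into a local scratch table first, so the INSERT itself touches no database link. Oracle can choose a very different, much slower distributed plan when the same query feeds an INSERT than when it runs standalone:

```sql
-- Step 1: pull the remote rows across the link into a local table.
CREATE TABLE stg_transit_raw AS
SELECT *
FROM (
    SELECT TRIM(r.R_ID)    AS transit_id,
           1               AS ship,
           TRIM(r.R_CODE)  AS transit_type,
           TRIM(r.R_MISC4) AS compartment_id,
           TRIM(rc.DESCR)  AS description
    FROM sradmin.RACEWAY@sr25 r
    LEFT JOIN sradmin.RWAYCODE@sr25 rc ON r.R_CODE = rc.R_CODE
)
WHERE ROWNUM <= 10000;

-- Step 2: the insert is now purely local.
INSERT INTO STG_TRANSIT (TRANSIT_ID, SHIP, TRANSIT_TYPE, COMPARTMENT_ID, DESCRIPTION)
SELECT transit_id, ship, transit_type, compartment_id, description
FROM stg_transit_raw;
COMMIT;
```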

Reset IDENTITY value

Posted: 05 Jun 2013 08:04 PM PDT

I have a table with an IDENTITY column. While developing, I delete the rows from time to time and add them again. But the IDENTITY values keep increasing and don't restart from 1 when I re-add the rows. Now my IDs run from 68 to 92, and this breaks my code.

How do I reset the IDENTITY value?
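Assuming SQL Server, which the IDENTITY keyword suggests (the table name below is a placeholder): TRUNCATE TABLE resets the counter as a side effect of removing all rows, and DBCC CHECKIDENT reseeds it explicitly if you prefer to keep using DELETE:

```sql
-- Option 1: remove all rows AND reset the identity counter.
-- (Not allowed if foreign keys reference the table.)
TRUNCATE TABLE dbo.MyTable;

-- Option 2: delete as before, then reseed so the next insert gets 1.
DELETE FROM dbo.MyTable;
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 0);
```

Longer term, it is safer not to let application code depend on identity values being contiguous or starting at 1.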

What are some good packages to install before configuring postgres on Ubuntu?

Posted: 05 Jun 2013 02:14 PM PDT

I can find a hundred examples of compiling Postgres 9.2 from source, but none of them mention any ./configure options that are a good idea to always include. The only packages I have to install to build Postgres from source are libreadline6-dev and libghc-zlib-dev, but I would like my Postgres database to run as smoothly as possible, and aside from postgresql.conf that means having the proper tools and libraries installed so that Postgres can work efficiently. Any tips (e.g., should I install libssl-dev, etc.)? I'm speaking in terms of a general source install of 9.2.4, not one for a specific application type.
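As a sketch, here is a fairly common 9.2 configure line; each --with-* flag is documented in the PostgreSQL 9.2 installation docs and needs its dev package installed first. Whether you want each feature is a judgment call, so treat these as candidates rather than a canonical list:

```shell
# --with-openssl -> libssl-dev    (SSL client/server connections)
# --with-libxml  -> libxml2-dev   (xml data type and functions)
# --with-perl    -> libperl-dev   (PL/Perl)
# --with-python  -> python-dev    (PL/Python)
./configure --prefix=/usr/local/pgsql-9.2 \
    --with-openssl --with-libxml --with-perl --with-python
make
sudo make install
```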

Having trouble connecting tungsten replicator from EC2 to RDS

Posted: 05 Jun 2013 01:08 PM PDT

I am testing Tungsten Replicator to help migrate large databases to RDS. I have no trouble setting up the service on the EC2 "master". I am using a second EC2 instance as a kind of relay server host to an RDS target. I tested connecting to MySQL on RDS from both EC2 instances (fine), and I can SSH between the EC2 servers fine. The replicator service status is good on the EC2 master, but the relay slave -> RDS link stays perpetually in the "GOING-ONLINE:SYNCHRONIZING" state. Does anyone have experience trying this?

In creating view use SQL Security definer or invoker?

Posted: 05 Jun 2013 06:03 PM PDT

CREATE VIEW syntax is

    CREATE
        [OR REPLACE]
        [ALGORITHM = {UNDEFINED | MERGE | TEMPTABLE}]
        [DEFINER = { user | CURRENT_USER }]
        [SQL SECURITY { DEFINER | INVOKER }]
        VIEW view_name [(column_list)]
        AS select_statement
        [WITH [CASCADED | LOCAL] CHECK OPTION]

For SQL SECURITY you can set DEFINER or INVOKER. When does it make sense to choose DEFINER? As far as I understand, that means the view is checked against the permissions of me, the person who created it. If I have full permissions across our database, then I have just created a view that gives any invoker full access. That sounds wrong. Wouldn't you always want the security check to apply to the person who is actually using the view?
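DEFINER (the default) is actually what makes views useful as an access-control tool: it lets you grant users a narrow window onto data they cannot read directly. A sketch with made-up table and user names:

```sql
-- The 'app' user has NO privileges on hr.salaries.
-- A definer view exposes only the safe columns:
CREATE SQL SECURITY DEFINER VIEW hr.team_directory AS
SELECT employee_id, name, department
FROM hr.salaries;            -- the salary column is deliberately omitted

GRANT SELECT ON hr.team_directory TO 'app'@'%';
-- 'app' can now query the view, yet still cannot SELECT from hr.salaries.
```

With INVOKER, the view only works for users who already hold the underlying privileges; that is the right choice when the view is meant as mere shorthand and must never act as a privilege escalation.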

SQL Server 2008 log file growing and won't shrink; hard drive space running out [duplicate]

Posted: 05 Jun 2013 02:24 PM PDT

This question already has an answer here:

I have a SQL Server 2008 database which has a log file (.ldf) with a size of 360GB and growing every day.

The .mdf file is only 25GB. The hard drive is 950 GB and has 150GB free space. I want to shrink this log file so that it doesn't overrun the hard drive on the server.

I used dbcc shrinkfile('my database log file name', 0) but unfortunately, it doesn't shrink the file. Now I know I need to take a log backup and then run the shrink command for the file to shrink.

My problem is that I don't have enough hard drive space to take a log backup.

Is there anyway I can shrink this file?

The database is in FULL recovery mode with mirroring ON. Please advise. Any tips to avoid this problem in the future would also be appreciated.
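A sketch of the usual sequence, assuming you can reach a network share with free space (the database name and paths below are placeholders). The log backup is what frees space inside the log; the shrink then releases it to the OS. Because the database is mirrored you cannot simply switch to SIMPLE recovery, so a real log backup is the only route:

```sql
-- Back the log up to a share on another machine, since the local disk is nearly full:
BACKUP LOG MyDatabase TO DISK = N'\\backupserver\share\MyDatabase_log.trn';

-- Then shrink the log file to a sane target size (here roughly 8 GB):
USE MyDatabase;
DBCC SHRINKFILE (N'MyDatabase_log', 8000);
```

To keep the log from growing again, schedule log backups regularly (e.g. every 15 to 60 minutes) from then on.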

Too Many database connections on Amazon RDS

Posted: 05 Jun 2013 12:38 PM PDT

We are having problems with users running queries/views in Drupal that occasionally cause our site to freeze. The freeze occurs because the query causes the number of database connections to go up to 400+ and basically anytime the site goes over 100 database connections the site slows down terribly and just doesn't respond.

We are running MySQL on Amazon RDS, with Red Hat Linux on the application side.

We have a large enough EC2 on the front end app server, and a large enough RDS.

The way we are fixing this issue now is to find the offending query and kill it. Once the query is killed, our database connections drop to around 20, which is the normal count we see when monitoring the site.

Is there a way to stop the offending query before it runs too long and consumes the connections? I am trying to automate killing the bad query, or at least to recognize after 30 seconds that it is a bad query and kill it.
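A sketch of the manual steps you would automate (the thread id below is a placeholder). On RDS you cannot use plain KILL against other users' threads for lack of the required privilege; Amazon provides the mysql.rds_kill() procedure instead. A cron script could run the first query and feed the resulting ids to the second:

```sql
-- Find queries that have been running longer than 30 seconds:
SELECT id, user, time, info
FROM information_schema.PROCESSLIST
WHERE command = 'Query'
  AND time > 30
  AND user NOT IN ('rdsadmin', 'system user');

-- Terminate one of them by its thread id via the RDS helper:
CALL mysql.rds_kill(12345);   -- 12345 = id from the query above
```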

Relational database primary key requirements

Posted: 05 Jun 2013 01:12 PM PDT

I am trying to put together an ER Diagram to design a fairly simple asset tag tracking/inventory management system for our servers/switches/UPS/etc. The top-level entity corresponds to the business location, followed by entities for Room, Rack, RackU, and finally device. Now, I know that primary keys are supposed to be unique, but is that unique over the entire database, or unique for the individual branch of the system?

By that I mean: there are one to many rooms in a location, and one to many racks in each room. Do the primary keys for the racks in one room in a given location need to be unique from those of another room in the same location, or even from a room in a different location?

I probably didn't describe this very well, so feel free to ask for clarification at any point.
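Primary keys only have to be unique within their own table, not across the database or across "branches". The two usual shapes, sketched with hypothetical column names; with surrogate keys every rack gets a globally distinct id anyway, while with composite keys a rack number only has to be unique within its room:

```sql
CREATE TABLE room (
    room_id     INT PRIMARY KEY,
    location_id INT NOT NULL
);

-- Surrogate key: rack_id is unique among all racks in all rooms.
CREATE TABLE rack (
    rack_id INT PRIMARY KEY,
    room_id INT NOT NULL REFERENCES room (room_id)
);

-- Composite key: rack_no may repeat across rooms; the (room, rack) pair is unique.
CREATE TABLE rack_alt (
    room_id INT NOT NULL REFERENCES room (room_id),
    rack_no INT NOT NULL,
    PRIMARY KEY (room_id, rack_no)
);
```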

mysql replication goes out of sync for some tables

Posted: 05 Jun 2013 11:40 AM PDT

We are running mysql 5.1.61 on redhat systems and have the following setup

One master and four slaves replicate from the master. We recently added a new slave, and over a few days we have started noticing that some tables (not all) on the newly added slave lose some records. This happens only on this slave, and it is not regular; over a period of 3 weeks the issue seems to have occurred on 5-7 days.

We use statement-based replication. I am not sure why this happens on only one slave. There seem to be no errors in the MySQL error logs. The only difference between the old slaves and the new slave is that the new slave has slightly less RAM than the other ones, but the new slave is not being used for anything right now.

Is there a way to troubleshoot this issue to see why it happens on only one slave? Could it be network-related, or something else? Any pointers on where to start looking?

Here is the memory info.

Old slave:

                 total       used       free
    Mem:      24731204   24641572      89632

New slave:

    Mem:      16431192   10112880    6318312
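To see exactly which tables and rows have drifted, the usual tools are pt-table-checksum and pt-table-sync from Percona Toolkit (hostnames and credentials below are placeholders):

```shell
# Run against the master: checksums each table in chunks; the checksum
# queries replicate, so each slave records where its data differs.
pt-table-checksum --replicate=percona.checksums h=master-host,u=checker,p=secret

# Print (without applying) the statements that would repair the new slave:
pt-table-sync --replicate=percona.checksums --print h=master-host,u=checker,p=secret
```

If the checksums diverge repeatedly, the usual statement-based-replication suspects are non-deterministic statements and writes accidentally made directly on the slave.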

Find procedures that haven't been called in <n> days

Posted: 05 Jun 2013 02:12 PM PDT

We are deleting old stored procedures and tables.

How can I know what procedures haven't been called recently?

dm_exec_procedure_stats and dm_exec_query_stats aren't reliable for this, since they only report procedures whose plans are currently in the plan cache.
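As a starting point, with exactly the cache caveat already noted, you can at least list procedures that have no cached stats right now, or whose last recorded execution is old. A sketch; for a trustworthy answer you would log executions (e.g. with a server-side trace or Extended Events) over a representative window before deleting anything:

```sql
SELECT p.name,
       s.last_execution_time
FROM sys.procedures AS p
LEFT JOIN sys.dm_exec_procedure_stats AS s
       ON s.object_id = p.object_id
      AND s.database_id = DB_ID()
WHERE s.object_id IS NULL                                     -- not in cache at all
   OR s.last_execution_time < DATEADD(DAY, -30, GETDATE());   -- or stale
```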

Force View's query plan to update?

Posted: 05 Jun 2013 01:50 PM PDT

I have a View whose query plan appears to be cached, is there a way to force the View's plan to be recalculated on each access?
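A view has no plan of its own; the cached plans belong to the queries that reference it. In SQL Server, sp_recompile marks all of those plans for recompilation, and OPTION (RECOMPILE) forces a fresh plan for a single statement (the view name below is a placeholder):

```sql
-- Invalidate every cached plan that references the view:
EXEC sp_recompile N'dbo.MyView';

-- Or force recompilation of one query on every execution:
SELECT * FROM dbo.MyView OPTION (RECOMPILE);
```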

What is the best way to migrate multiple databases?

Posted: 05 Jun 2013 08:21 PM PDT

I want to migrate my database server to a new server. Right now I have a database server running Windows Server 2008, and I am migrating to a new, separate server running Windows Server 2012.

There are around 50 to 100 databases.

What is the best way to migrate the database server without affecting clients (meaning no downtime)?
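One common low-downtime approach (a sketch, not a full runbook): restore full backups on the new server ahead of time WITH NORECOVERY, then at cutover take and restore only the small tail-log backups and repoint the clients. For 50 to 100 databases, generating the statements from the catalog saves typing (the share path is a placeholder):

```sql
-- Emit one BACKUP statement per user database; run the output, copy the
-- files over, and restore each WITH NORECOVERY on the new server.
SELECT 'BACKUP DATABASE [' + name + '] TO DISK = N''\\newserver\stage\'
       + name + '.bak'';'
FROM sys.databases
WHERE database_id > 4;   -- skip master, model, msdb, tempdb
```

Truly zero downtime is rarely achievable; database mirroring or log shipping per database can narrow the cutover window to seconds.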

Updating an FTS indexed column returns "Invalid InnoDB FTS Doc ID"

Posted: 05 Jun 2013 10:26 AM PDT

Environment:

  • Ubuntu 12.04 (and 13.04)
  • MySQL 5.6.11

I have a table with a full-text index on it (the real table has many more columns and rows):

    DROP TABLE IF EXISTS articles;
    CREATE TABLE articles (
        FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        id INT NOT NULL,
        title VARCHAR(200),
        body TEXT,
        UNIQUE KEY (FTS_DOC_ID)
    ) ENGINE=InnoDB;
    CREATE FULLTEXT INDEX idx ON articles (title);

    INSERT INTO articles (id, title, body)
    VALUES (9, 'MySQL Tutorial', 'DBMS stands for DataBase ...');

MySQL documentation suggests to create the FTS_DOC_ID (with the right syntax) to prevent a full table rebuild.

So far all is good and I can query using the MATCH...AGAINST to use the FTS index. However when I need to update an indexed column:

    UPDATE articles SET title = 'New MySQL Tutorial' WHERE id = 9;

I get a:

Error code 182, SQL state HY000: Invalid InnoDB FTS Doc ID  

If I manually take care of this column like this:

    UPDATE articles a1, (SELECT MAX(FTS_DOC_ID) + 1 AS ftsid FROM articles) a2
    SET title = 'New MySQL Tutorial', FTS_DOC_ID = ftsid
    WHERE id = 9;

Then the update works. But this is not acceptable, because I have several processes updating this table in parallel (though always on different rows), and the risk is that two processes compute the same ftsid. Note that updating the body column, which is not covered by the FTS index, doesn't show this behaviour. I.e.:

UPDATE articles set body = 'Info: DBMS stands for DataBase ...' WHERE id=9;  

successfully updates the row.

Is it the expected behaviour? Or is it a bug?

I found a report in the MySQL bug list about the opposite case (cannot update a non-FTS-indexed column but can update an FTS-indexed one), but not this case.

Synchronization between master and backup server

Posted: 05 Jun 2013 08:25 PM PDT

I am searching for a way to synchronize data between a production server and a backup server automatically. Does somebody know a method to do the following:

  • if I delete, insert or update a record, the same action should be performed on the backup server

  • if I select some entries, they should be read from the primary server

  • if the primary server is not available, selects, inserts, updates and deletes should go to the backup server
  • when the primary server becomes available again, the accumulated changes should be written back to it

Can I restore an uncompressed differential backup after a compressed full backup?

Posted: 05 Jun 2013 06:22 PM PDT

If we enable compression on full database backups (because they are huge) but don't enable it on the differentials, which are relatively small, will a restore work, given that the full backup is compressed but the differential isn't?
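Yes. Compression is a per-backup property, and RESTORE detects it from the backup's media header automatically, so a compressed full followed by uncompressed differentials restores with the ordinary sequence (database name and paths below are placeholders):

```sql
RESTORE DATABASE MyDb FROM DISK = N'D:\backup\MyDb_full.bak'    -- compressed
    WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = N'D:\backup\MyDb_diff.bak'    -- uncompressed
    WITH RECOVERY;
```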

Cannot Connect to Oracle Database running on Windows 8 Hyper V Virtual Machine

Posted: 05 Jun 2013 02:48 PM PDT

I have oracle server (standard edition) installed on a guest VM (Windows 8 Hyper V). I have mapped a host entry to the VM as "brettvm". I am able to log into the VM and connect to the default instance using sqlplus. So far so good.

EDIT: Here's the listener.ora file on the server. I'm starting to suspect the trouble may lie here:

    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
        )
      )

    ADR_BASE_LISTENER = c:\oracle

The tnsnames.ora file on the server looks like this:

    LISTENER_ORCL =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))

    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = orcl)
        )
      )

The Problem: When I attempt to connect from the HOST machine, sqlplus prompts for a password and then hangs. Here is my tnsnames.ora on the host machine:

    orcl.brettvm =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = brettvm)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = orcl)
        )
      )

Some things I have tried:

  • Verified that "brettvm" is in my hosts file (pingable and browseable)
  • Verified connectivity between the host and VM over port 80 (browser / telnet)
  • Verified connectivity between the host and vm over port 1521 (telnet). Note that I opened this port up on the VM through windows firewall.
  • Verified the database is up and running via sqlplus on the host (see above).
  • Turned off windows firewall on the VM

I'm going to start digging through the installation guide...

Any other advice? Thanks.
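Your suspicion about listener.ora looks plausible. With HOST = localhost, the listener may bind only to the loopback interface inside the VM, so connections from the host never reach the database even when the port itself appears open. A sketch of the usual fix; change the TCP address to the VM's hostname and restart the listener:

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = brettvm)(PORT = 1521))
    )
  )
```

Then, on the VM: lsnrctl stop, lsnrctl start, and lsnrctl status to confirm the orcl service has registered before retrying from the host.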

MySQL+Web-App Performance Issues using different servers

Posted: 05 Jun 2013 10:52 AM PDT

We are having a performance issue with our MySQL servers that does not make any sense. I have read countless articles from many people (mostly Percona) and have made my.cnf tweaks. We have even managed to squeeze out another 30% more TPS thanks to those Percona articles. However, our problem is with our in-house web app (a Tomcat/Java/Apache stack). It performs poorly when connected to certain servers, specifically the better-hardware ones.

Here is the symptom:

If we point our test application server (Ubuntu, Apache, Tomcat, Java) at server MYSQL02, the application's performance is acceptable. However, if we point the application at MYSQL01 or MYSQL03 (and these two boxes are idle!), the application's performance is poor, with high latencies. Example:

    Retrieving an account (via our web app) from MYSQL02 takes 2 to 3 seconds.
    On MYSQL01 or MYSQL03 it takes 12 seconds or longer.

We cannot figure out why! The MySQL servers and MONyog do NOT report any problems. If we execute the statements (hundreds of them) manually, they return instant results, and their explain plans show they are all using indexes. We do NOT get any slow-query, deadlock, or contention notifications.

Here is some basic information about our MySQL systems. They are all DEDICATED MySQL servers:

PROD (current production, not in replication farm, standalone)

    Best/newest hardware configuration (8 core / 20GB / 15K RPM HW RAID 10)
    MySQL: 5.0 (RHEL)

We are trying to migrate from this MySQL 5.0 to our MySQL 5.5 farm (see below). This server will then be reloaded with MySQL 5.5 and return as the primary production server.

MYSQL01

    Second best hardware configuration (8 core / 12GB RAM / 15K RPM HW RAID 10)
    Supermicro X8FAH; 2x XEON X5550 @ 2.66GHz
    2x LSI MegaRAID SAS 9260-8i
    MySQL: 5.5.31 (Ubuntu)
    OS: Ubuntu 12.04 LTS x64

MYSQL02

    Worst/oldest hardware configuration (4 core / 12GB RAM / 7.2K RPM SW {mdadm} RAID 5)
    Supermicro X2DBE; 2x XEON 5150 @ 2.66GHz
    MySQL: 5.5.31 (Ubuntu)
    OS: Ubuntu 12.04 LTS x64

This is our EDI workhorse. It spends most of its day importing data from our old mainframe system. It replicates the data to 01 and 03.

MYSQL03

    Third best hardware configuration (8 core / 12GB RAM / 7.2K RPM HW RAID 10)
    Supermicro X7DGB8-X; XEON E5410 @ 2.33GHz
    MySQL: 5.5.31 (Ubuntu)
    OS: Ubuntu 12.04 LTS x64

We used sysbench to test and tweak all the above systems and here are the test results with notes.

NOTE: TPS = Transactions Per Second

Results before applying any new tweaks:

    PROD:    1,179.72 TPS <- Expected
    MYSQL01:   442.92 TPS <- Why?
    MYSQL02:   543.22 TPS <- Expected
    MYSQL03:   904.22 TPS <- Surprising

Results after my.cnf tweaks:

    MYSQL01: 655.00 TPS <- 32% improvement. Not bad, but still very poor compared to its siblings.
    MYSQL02: 754.81 TPS <- 28% improvement. This was unexpected.
    MYSQL03: 969.59 TPS <- 07% improvement.

We are unsure why MYSQL01's performance is so poor. We can only surmise that there is an OS, RAID card or BIOS setting that may be improperly set. I am leaning towards the RAID card/configuration. The only way to know for sure is to shut down this server and scrutinize the configuration. A reload may be necessary. However, since our ultimate plan is to make the current PROD hardware the primary production MySQL server, we may leave MYSQL01 alone for now and re-purpose the hardware after migrating to the 5.5 farm. Still, we can't migrate until we figure out why our application behaves so poorly on certain hardware.

Anyone have any suggestions?

Database running out of space

Posted: 05 Jun 2013 01:09 PM PDT

My database has 16MB of space left.

I used to just truncate (shrink) the files, as I was taught, but I found these links that advise against it:

http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/

http://blog.sqlauthority.com/2011/01/19/sql-server-shrinking-database-is-bad-increases-fragmentation-reduces-performance/

Is there anything else I can do on my database to reduce its size, other than deleting table records? I am new to the DBA forum, and I probably should have looked around at other questions before posting, but I am desperate, as I am worried about my database going down.
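Before deleting anything, it helps to see where the space actually goes. A sketch for SQL Server (which the linked shrinking articles suggest; adjust if you are on another engine):

```sql
-- Largest tables by reserved space, biggest first:
SELECT t.name                                   AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;
```

Old logging or audit tables are the usual culprits; archiving them elsewhere, or rebuilding heavily fragmented indexes, can free significant space without a file shrink.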

SQL Server database schema (and likely, some data changes) - how to auto-distribute over many database instances

Posted: 05 Jun 2013 10:24 AM PDT

Our development involves a SQL Server database (also might be Oracle or Postgres later) and we would sometimes make database schema changes or some other interventions in database.

What solutions exist to create a "patch" or "script" to distribute these changes to other installations of the same database (we do not have direct access to them)? It needs to alter the database schema and execute SQL and/or other complex, pre-programmed data alterations as defined by the person who designs the change. On the other instances, a system admin should be able to just run a program or press a button, so that the changes are applied automatically.

In addition, it is a plus if such a solution can take a database snapshot and derive a "difference" of the contents of a particular table, which would then be distributed.

The solution can be commercial.

Thanks in advance!

MySQL slap with custom query

Posted: 05 Jun 2013 06:24 PM PDT

I want to run a stress test against our MySQL DB. I have the list of queries I need to execute. I tried Apache JMeter for this, but it is very time-consuming. Is it possible to run mysqlslap with a custom .sql file containing INSERT, UPDATE and SELECT queries against a specified MySQL database?
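Yes; mysqlslap accepts a file of custom statements via --query, with --delimiter telling it how the statements are separated. A sketch (host, credentials and file name are placeholders):

```shell
# queries.sql contains your INSERT/UPDATE/SELECT statements,
# separated by semicolons.
mysqlslap --host=dbhost --user=bench --password \
    --create-schema=mydb \
    --query=queries.sql --delimiter=";" \
    --concurrency=10,25,50 --iterations=5
```

Note that mysqlslap runs the whole file as one workload per client; point --create-schema at a throwaway copy of the database, since the INSERT/UPDATE statements will modify it.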

Overview of how MongoDB uses its various threads

Posted: 05 Jun 2013 01:24 PM PDT

On one instance I have MongoDB using ~85 threads. Since I don't have time to investigate directly, I am curious:

  1. What are the threads used for? Do they fall into a few high-level usage categories?
  2. How can I control/influence the number of threads used? E.g. max connection params, etc.

MySQL information_schema doesn't update

Posted: 05 Jun 2013 08:24 PM PDT

I have a database, say abc, on a MySQL server. It has only one table, named test, which uses the InnoDB engine, and I've set innodb_file_per_table to true.

After I run the query delete from abc.test, I want to calculate the database size of abc. Here is the query I use:

    SELECT
        table_schema "name",
        SUM(IF(engine = "MyISAM",
               data_length + index_length - data_free,
               data_length + index_length)) "size"
    FROM information_schema.TABLES
    WHERE table_schema LIKE "abc";

The strange thing is that the database size doesn't decrease at all, even though the data in test is gone.

I've done this kind of test many times, this strange behavior happens sometimes.

I'm using percona mysql server 5.5.29-rel29.4.

Can anybody tell me what is wrong?

Update:

Actually, I use another thread to check the database size periodically.
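Two effects are worth separating here (a sketch of both remedies follows, using abc.test from the question). First, a DELETE frees pages inside the .ibd file but never returns them to the filesystem, so the table's footprint can legitimately stay flat until the table is rebuilt. Second, the data_length/index_length figures in information_schema.TABLES come from sampled InnoDB statistics that are refreshed only occasionally, so the view can lag reality:

```sql
-- Refresh the statistics that information_schema.TABLES reports:
ANALYZE TABLE abc.test;

-- Actually reclaim disk space (rebuilds the table; with
-- innodb_file_per_table=1 the .ibd file itself shrinks):
OPTIMIZE TABLE abc.test;
```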

How to successfully run a batch file in a SQL Agent job?

Posted: 05 Jun 2013 05:24 PM PDT

I have a SQL Agent Job which generates a specific report in PDF-file and then copies the PDF to a network directory and then deletes the PDF file in the source directory.

The SQL job consists of 2 steps:

  1. Generate the report.
  2. Copy the report to the network location.

For step 2 I made a bat-file which handles the copying and removal of the pdf file.

The bat-file is as follows:

    set source_dir=C:\Reports\Energie\Uur
    set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

    C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z

However, when I run the job, it hangs on the second step. The status just stays at "Executing".

This is the command I entered for the step (the location of the bat file to execute):

cmd.exe /c "C:\Reports\rapport_uur_copy.bat"  

My job-settings are as follows:

Step 1

    Type: Operating system (CmdExec)
    On Success: Go to the next step
    On Failure: Quit the job reporting failure

Step 2

    Type: Operating system (CmdExec)
    On Success: Quit the job reporting success
    On Failure: Quit the job reporting failure

Some facts:

  • I have read/write permissions on the network directory
  • I run the steps as the Administrator-account (the logged in user, default)
  • Step 1 succeeds
  • I run Windows Server 2008 R2 Standard
  • I have SQL Server 2008 R2
  • When I run the script from the cmd manually, it works (under Administrator account).
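Two common culprits, worth ruling out rather than certainties: the SQL Agent service account may not have access to \\KWS2-WEB-SERVER\Share even though your own account does, and Robocopy retries failed copies a million times by default (waiting 30 seconds between retries), which looks exactly like a hang. A more defensive version of the batch file:

```batch
set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

REM /R:2 /W:5 -> retry twice with a 5-second wait, instead of the default
REM 1,000,000 retries that make a permission problem look like a hang.
C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z /R:2 /W:5 /LOG:C:\Reports\copy.log

REM Robocopy exit codes 0-7 mean success; SQL Agent treats any nonzero
REM code as failure, so map them to 0.
if %ERRORLEVEL% LSS 8 exit /b 0
exit /b 1
```

If the log then shows "Access is denied", grant the SQL Agent service account (or a proxy account for the step) rights on the share.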

MySQL partitioned tables?

Posted: 05 Jun 2013 03:24 PM PDT

I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent of every other client_id. In other words, web queries will always contain a WHERE clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables.

After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically but in the short term the number of client_ids is relatively fixed.

I was thinking something along these lines:

    CREATE TABLE foo (
        id INT NOT NULL PRIMARY KEY,
        ... more column defs here ...
        client_id INT
    )
    PARTITION BY KEY (client_id)
    PARTITIONS 35;

My question. Is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (i.e. hash or range)?
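One wrinkle with the DDL in the question: MySQL requires every unique key, including the primary key, to contain all columns of the partitioning expression, so PRIMARY KEY (id) combined with PARTITION BY KEY (client_id) will be rejected. A sketch that satisfies the rule (HASH behaves like KEY but takes an explicit integer expression):

```sql
CREATE TABLE foo (
    id        INT NOT NULL,
    client_id INT NOT NULL,
    -- ... more column defs here ...
    PRIMARY KEY (id, client_id)      -- the PK must include client_id
)
PARTITION BY HASH (client_id)
PARTITIONS 35;
```

For a small, known set of client_ids, LIST partitioning is also worth considering: it maps each client to a named partition and makes dropping one client's data a metadata operation.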

Is there a SQL Server equivalent to "OVERRIDING USER VALUE"

Posted: 05 Jun 2013 01:50 PM PDT

While googling, I found the OVERRIDING USER VALUE clause for an INSERT on IBM's iSeries.

Is there an equivalent command for Microsoft SQL Server versions 2005 or newer to allow you to insert into a table that has an IDENTITY column and use the auto-assigned value instead of the user value passed in?


In this instance, the main goal is to help convince a superior to move us off using a GUID as the clustered primary key (the schema was developed pre-SQL 2000, when that was a decent idea, versus now, when it is a horrible one), to adding an identity column to the tables, moving the clustered primary key to it, and converting the old index to a non-clustered unique index.

The biggest push back is:

There are a lot of places in the legacy code that use things like Insert into XXXX Select * from YYYY where ..., where the number of columns differs based on run-time-adjustable settings on the client's front end. Using an identity column would break those queries, whereas GUIDs are unique and won't have that issue, and it is too much work to rewrite all of the queries to use named columns, because the column names vary per installed client.

So that is why SET IDENTITY_INSERT ON is not an option, as this will be a primary key column. I was hoping to argue that if we could use a SQL Server equivalent of OVERRIDING USER VALUE, then "we can just add this small bit to the insert statements and we don't need to include the column names."
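There is no direct SQL Server equivalent of OVERRIDING USER VALUE. One workaround sketch, offered as an illustration rather than a drop-in fix for the variable-column-count queries described above, is an INSTEAD OF INSERT trigger: such triggers allow the INSERT to supply a value for the identity column, which the trigger then simply discards (table and column names are hypothetical):

```sql
CREATE TRIGGER trg_MyTable_Insert ON dbo.MyTable
INSTEAD OF INSERT AS
BEGIN
    -- Re-insert every column EXCEPT the identity column; the engine
    -- then assigns the next identity value itself, ignoring user input.
    INSERT INTO dbo.MyTable (col_a, col_b)
    SELECT col_a, col_b
    FROM inserted;
END;
```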

MySQL user defined rollback procedure

Posted: 05 Jun 2013 04:24 PM PDT

I'm attempting to write my own mini-rollback procedure. I have a table that tracks any updates or deletes to another table using a trigger. I am attempting to make it possible to restore one or more of these tracked changes through the use of a procedure. However, I'm receiving a syntax error with the following:

    -- UNDO_CHANGES_PROCEDURE - copies the values from the work log track
    -- table back into the relevant record in the work log table if an
    -- accidental edit or deletion is made.
    DROP PROCEDURE IF EXISTS UNDO_CHANGES_PROCEDURE;

    DELIMITER $$

    CREATE PROCEDURE UNDO_CHANGES_PROCEDURE(ID INT(6))
    BEGIN
    DECLARE var_trig CHAR(8);

    SET var_trig = (SELECT TRIGGER_TYPE FROM WORK_LOG_TRACK
                    WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID);

    IF var_trig = 'Update' THEN
    UPDATE WORK_LOG
    SET ENTRY_NUMBER = WORK_LOG_TRACK.ENTRY_NUMBER_FK,
        EMPLOYEE_ID_FK = WORK_LOG_TRACK.EMPLOYEE_ID_FK,
        WORK_ORDER_NUMBER_FK = WORK_LOG_TRACK.WORK_ORDER_NUMBER_FK,
        ENTRY_TIME = WORK_LOG_TRACK.ENTRY_TIME,
        WORK_DONE = WORK_LOG_TRACK.WORK_DONE
    WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID
      AND WORK_LOG.ENTRY_NUMBER = WORK_LOG_TRACK.ENTRY_NUMBER_FK;
    ELSE
    INSERT INTO WORK_LOG(ENTRY_NUMBER, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK, ENTRY_TIME, WORK_DONE)
    VALUES (WORK_LOG_TRACK.ENTRY_NUMBER_FK, WORK_LOG_TRACK.EMPLOYEE_ID_FK,
            WORK_LOG_TRACK.WORK_ORDER_NUMBER_FK, WORK_LOG_TRACK.ENTRY_TIME,
            WORK_LOG_TRACK.WORK_DONE)
    WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID;
    END IF;
    END;
    $$

    DELIMITER ;

The syntax error comes in in regards to my update statement, any help or suggestions would be appreciated.
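Two problems jump out in the procedure body: the UPDATE references WORK_LOG_TRACK columns without joining that table in, and INSERT ... VALUES cannot take a WHERE clause (restoring a deleted row needs INSERT ... SELECT). A corrected sketch of both branches, using the table and column names from the question:

```sql
-- Multi-table UPDATE: join the audit table in so its columns are visible.
UPDATE WORK_LOG w
JOIN WORK_LOG_TRACK t ON w.ENTRY_NUMBER = t.ENTRY_NUMBER_FK
SET w.EMPLOYEE_ID_FK       = t.EMPLOYEE_ID_FK,
    w.WORK_ORDER_NUMBER_FK = t.WORK_ORDER_NUMBER_FK,
    w.ENTRY_TIME           = t.ENTRY_TIME,
    w.WORK_DONE            = t.WORK_DONE
WHERE t.WORK_LOG_EDIT_NUMBER = ID;

-- Restore a deleted row: INSERT ... SELECT, not VALUES ... WHERE.
INSERT INTO WORK_LOG (ENTRY_NUMBER, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK,
                      ENTRY_TIME, WORK_DONE)
SELECT ENTRY_NUMBER_FK, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK,
       ENTRY_TIME, WORK_DONE
FROM WORK_LOG_TRACK
WHERE WORK_LOG_EDIT_NUMBER = ID;
```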
