Tuesday, March 26, 2013

[how to] Update oracle sql database from CSV



Update oracle sql database from CSV

Posted: 26 Mar 2013 08:43 PM PDT

I tried Google first but had no luck. Is it possible to update tables from a CSV file? I am using SQL Developer, and I have a script that exports edited rows to a CSV file. I would like to apply those edited rows from the CSV file on a client. I don't want to import the whole file, since the client already has a mirror table; I just want to update the data to match what is in the CSV file. Is this possible?

If not what would be the best approach?
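One possible approach, sketched on the assumption that the CSV can be copied somewhere the database server can read it (the directory, table, and column names below are placeholders): expose the file as an external table and MERGE it into the existing mirror table.

CREATE OR REPLACE DIRECTORY csv_dir AS '/path/visible/to/the/db/server';

CREATE TABLE stage_edited_rows (
    id    NUMBER,
    col_a VARCHAR2(100),
    col_b VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY csv_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
    )
    LOCATION ('edited_rows.csv')
);

-- Apply only the edited rows to the existing mirror table.
MERGE INTO mirror_table t
USING stage_edited_rows s
   ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.col_a = s.col_a, t.col_b = s.col_b
WHEN NOT MATCHED THEN INSERT (id, col_a, col_b) VALUES (s.id, s.col_a, s.col_b);

If the file cannot leave the client machine, importing it into a staging table with SQL*Loader or SQL Developer's import wizard and then running the same MERGE is the usual alternative.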

SQL Server Restore from one database to another

Posted: 26 Mar 2013 05:58 PM PDT

One of our devs backed up a dev database and then restored it in production. It's a new database for a new app that was intentionally deployed to prod last night.

Now in the backupset table (msdb.dbo.backupset) on prod, I can see a record for the dev database, with a backup start date time of when the restore was done.

Record from the prod backupset table:

name: DatabaseName_UAT-Full Database Backup

server_name: COMPNAME-SQLDEV02

machine_name: COMPNAME-SQLDEV02

I would not expect to see this record. Can anyone explain why a restore would insert into the backupset table on prod?

Can I delete this record from the msdb.dbo.backupset table, or is that not such a good idea?
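For reference, a sketch for inspecting the history rows and, if desired, clearing them for just that one database (the database name is taken from the post; both statements are optional housekeeping, not required for the restored database to work):

SELECT database_name, server_name, machine_name, backup_start_date, type
FROM   msdb.dbo.backupset
WHERE  database_name = N'DatabaseName_UAT'
ORDER  BY backup_start_date DESC;

-- Removes backup/restore history for that database only:
EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'DatabaseName_UAT';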

Thanks heaps.

MongoDB: move documents before capping

Posted: 26 Mar 2013 03:51 PM PDT

The capped collection concept works well for most of my projects, where old data can simply be discarded.

For other projects, I need a more complex and safer concept. The requirement is nearly the same as logrotate. The data is appended to the main collection, without compression/compaction and with no index except a timestamp for simple queries by time. => The focus is on writing and on persistent data.

Similar to the logrotate rules, I'd like the main collection not to become too large => capped by size; if possible, capping by timestamp might be a plus.

This sounds like a cappedCollection, but I do not want any data loss when it's capped. The old data should be stored in another database's collection, which must be compact:true and non-capped. Its name depends on the current month, which ensures there will be at most 12 "archive" collections per year.

Example:

liveDB.mainCollection_capped grows and starts capping.

Before removing old documents, these are safely moved into archiveDB.compactArchiveCollection201303.

No data is lost and the main collection remains small and fast. Storing the data in another database avoids db locks, e.g. repairDatabase tasks on an archive file will not affect or delay the main collection.

Is there a good practice for achieving this, as reliably and automatically as possible, without hand-writing the data transfer as a cron job? Such a job must never be missed, because data is lost if capping starts before the old data has been copied into the archive.

How can I improve my table design for different types of an entity?

Posted: 26 Mar 2013 05:30 PM PDT

Consider an accounting system as an example. I have an entity called Client. A Client can be of different types, with different fields applicable to different types. I am considering creating separate tables for the different types of Client, each with the fields applicable to that type, plus one master table referencing all of them with the fields applicable to all types.

Currently, I have come up with the following design:

(diagram of the proposed design omitted)

But I don't think my design is efficient enough (or even correct and free of errors). What would you suggest? Also, if this is important in any way, I am planning to utilize MariaDB.
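For reference, a minimal sketch of the supertype/subtype idea described above, in MariaDB syntax; the table and column names are illustrative, not taken from the diagram:

CREATE TABLE client (
    client_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    client_type ENUM('person', 'company') NOT NULL,  -- discriminator
    name        VARCHAR(100) NOT NULL                -- fields common to all types
);

CREATE TABLE client_person (
    client_id  INT UNSIGNED NOT NULL PRIMARY KEY,    -- 1:1 with client
    birth_date DATE,
    FOREIGN KEY (client_id) REFERENCES client (client_id)
);

CREATE TABLE client_company (
    client_id       INT UNSIGNED NOT NULL PRIMARY KEY,
    registration_no VARCHAR(50),
    FOREIGN KEY (client_id) REFERENCES client (client_id)
);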

FOR XML is generating an empty first line

Posted: 26 Mar 2013 07:09 PM PDT

I'm parsing an XML file with Flash; the file is generated by this code:

:XML ON
USE MyDatabaseName
GO
SET NOCOUNT ON

SELECT * FROM ProgramacionDia as programa order by hora
FOR XML AUTO, ROOT ('toflash'), ELEMENTS

SET NOCOUNT OFF

But I get an XML file whose first line is empty. If I remove that first empty line from the generated XML, it works fine with Flash; as generated, it does not.

How can I remove that line? Is my script wrong? I don't have much idea about this code.

I'm running SQL Server 9.0.

Identifying which values do NOT match a table row

Posted: 26 Mar 2013 04:26 PM PDT

I would like to be able to easily check which unique identifiers do not exist in a table, of those supplied in a query.

To better explain, here's what I would do now, to check which IDs of the list "1, 2, 3, 4" do not exist in a table:

  1. SELECT * FROM dbo."TABLE" WHERE "ID" IN ('1','2','3','4'), let's say the table contains no row with ID 2.
  2. Dump the results into Excel
  3. Run a VLOOKUP on the original list that searches for each list value in the result list.
  4. Any VLOOKUP that results in an #N/A is on a value that did not occur in the table.

I'm thinking there's got to be a better way to do this. I'm looking, ideally, for something like

List to check -> Query on table to check -> Members of list not in table
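For example, something along these lines (a sketch, assuming SQL Server 2008 or later; the table and column names are taken from the question): join the candidate values as a derived table and keep the ones with no match.

SELECT v.id AS missing_id
FROM   (VALUES ('1'), ('2'), ('3'), ('4')) AS v(id)
LEFT JOIN dbo."TABLE" t ON t."ID" = v.id
WHERE  t."ID" IS NULL;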

Differences Between Two Different Create Index Commands

Posted: 26 Mar 2013 01:43 PM PDT

Are there differences between these two scripts? Or would all of the extra tokens/attributes (ie: NONCLUSTERED, WITH..., etc...) for the 1st script be defaults in SQL Server 2008 for the 2nd script?

1st script:

CREATE UNIQUE NONCLUSTERED INDEX [DEID_MAP_IDX1] ON [dbo].[DEID_MAP]
(
    [VISIT_NUM] ASC
)
WITH (PAD_INDEX = OFF,
      STATISTICS_NORECOMPUTE = OFF,
      IGNORE_DUP_KEY = OFF,
      ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]

2nd script:

CREATE UNIQUE INDEX [DEID_MAP_IDX1] ON [DEID_MAP]
(
    [VISIT_NUM] ASC
);

FYI: there is ETL code that drops the index with the script below before doing a bulk data load, and then re-creates the index with the 2nd script above.

DROP INDEX [deid_map_idx1] ON [deid_map] WITH ( ONLINE = OFF );  

EDIT:

After applying the simple index above (2nd script), I scripted it back out in SQL Server Management Studio (expanded the table > "Indexes" folder > right-clicked the index > "Script Index as..." > "CREATE To" > "New Query Editor Window") and got the following:

CREATE UNIQUE NONCLUSTERED INDEX [DEID_MAP_IDX1] ON [dbo].[DEID_MAP]
(
    [VISIT_NUM] ASC
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]
GO

So it appears these options are simply added as defaults when running the simple statement:

SORT_IN_TEMPDB = OFF

DROP_EXISTING = OFF

ONLINE = OFF

MySQL Pivot Table Naming Conventions

Posted: 26 Mar 2013 01:03 PM PDT

I have some tables (users, jobs, customers) that are each tied to 'groups'. As each table is linked to 'groups', I feel inclined to call the actual group tables user_groups, job_groups and customer_groups; However, if the tables were just 'users' and 'groups', my pivot table would normally be called 'user_groups'.

How would you name these tables?

  1. Users, user_groups and ...
  2. Jobs, job_groups and ...
  3. Customers, customer_groups and ...

If I end up with something like user_groups_pivot, would it be acceptable to call other pivot tables something like: users -> user_roles (the pivot) -> roles rather than users -> user_roles_pivot -> roles?

I'm very fond of the idea of names being predictable.

SQL Server TDE Stuck at Encryption_State 2 and 0% on a Tiny DB

Posted: 26 Mar 2013 03:12 PM PDT

I'm having a heck of a time with this issue and can't figure out what is wrong. I'm not sure how long this tiny DB has been in 'encryption_state = 2' from query:

SELECT * FROM sys.dm_database_encryption_keys  

but it won't budge past 0%. State 2 means that it is currently trying to encrypt the DB. No other DB has any issues, encrypted or unencrypted.

Command:

ALTER DATABASE x SET ENCRYPTION OFF  

Result:

Msg 33109, Level 16, State 1, Line 2
Cannot disable database encryption while an encryption, decryption, or key change scan is in progress.
Msg 5069, Level 16, State 1, Line 2
ALTER DATABASE statement failed.

Running:

ALTER DATABASE x SET ENCRYPTION ON  

actually reports that the command completed, but the percentage still stays at 0%.

There's no DB corruption from DBCC CHECKDB. There's no locking/blocking going on (not that the logical blocking would affect TDE since it affects the DB on the page level). I'm at a loss short of calling MS on this. Anyone have any ideas? I'm going to try to restore this to a different DB server and test unencrypting the backup there.

Thanks.

SELECT DB_NAME(database_id), percent_complete, *
FROM sys.dm_database_encryption_keys
GO

Edit & update:

Restored to a different server with the cert; percent_complete goes to 1.187865% and then immediately reverts back to 0% in about one second. Firing up Profiler now to catch something in the background, and checking whether Extended Events would help.

Oh boy, profiler shows error 824 suspect DB page. DBCC CheckDB consistently shows no errors. Time to brush up on my CHECKDB internals, I know Paul Randal blogs about error 824. Will update this for others who might have this issue.

Optimize Query Execution based on two datetime columns (MySQL)

Posted: 26 Mar 2013 12:22 PM PDT

I've been struggling all day long against this.

So I've got a very busy database (still in development) and we've got records being inserted very frequently. The record logs have a start time and an end time.

So if I want to select rows based on both col1 (datetime) and col2 (datetime), MySQL can't use indexes properly: it will use the index for col1 but will never look at col2. The storage engine is InnoDB. What happens, for example, is that MySQL examines 80 thousand rows when the requested interval should only return two rows.

My biggest problem is that I'm trying to do some aggregate functions on these time ranges, and it's taking a very long time when it should be really fast considering how many rows it is actually counting.

Also note that I can't do "dateStart between col1 and col2" nor "dateEnd between col1 and col2", because dateStart can be lower than col1 and dateEnd can also be lower than col2.

Let's assume this sample data:

     col1      |     col2
---------------+---------------
date 10:20:00  | date 10:21:00
date 10:21:00  | date 10:22:00
date 10:22:00  | date 10:23:00
date 10:23:00  | date 10:24:00
date 10:24:00  | date 10:25:00
date 10:25:00  | date 10:26:00
date 10:26:00  | date 10:27:00

If I need the rows that fall between 10:21:30 and 10:25:30, I need to do something like this: '10:21:30' <= col1 AND '10:25:30' >= col2. So how do I index these columns properly? MySQL only picks up one of the date columns in the index.

Thanks in advance

Invalid rowid error

Posted: 26 Mar 2013 01:01 PM PDT

I'm trying to see how an UPDATE lock helps to minimize errors during DML (delete/update) operations.

declare
  cursor update_lock is select empno from emp where deptno=&no for update of sal;
  num number;
begin
  --for i in update_lock loop
  --end loop;
  open update_lock;
  loop
    fetch update_lock into num;
    exit when update_lock%notfound;
    dbms_output.put_line(num);
  end loop;
  update emp set sal=sal+10 where current of update_lock;
  close update_lock;
end;

I'm using very simple code to check how it works, but it is showing an invalid ROWID error. Can anyone help me?
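For reference, a sketch of the same block with the positioned update moved inside the fetch loop; WHERE CURRENT OF is only valid immediately after a successful fetch, which is the likely reason the original block raises an invalid ROWID error:

declare
  cursor update_lock is select empno from emp where deptno=&no for update of sal;
  num number;
begin
  open update_lock;
  loop
    fetch update_lock into num;
    exit when update_lock%notfound;
    dbms_output.put_line(num);
    update emp set sal=sal+10 where current of update_lock;  -- inside the loop
  end loop;
  close update_lock;
end;
/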

Postgres RIGHT JOIN with custom array

Posted: 26 Mar 2013 05:46 PM PDT

I'm using Postgres 9.1 and want to get a result with some blanks where there is no data. My query looks like the following:

SELECT institution_id FROM ... WHERE institution_id IN (1, 3, 4, 5, 7, 9)  

The ... is not important to this question; what matters is that it returns a result for the institution_ids in the array (1, 3, 4, 5, 7, 9), including those institutions with no data. Here is an example of the current output:

days    treatments    institution_id
266     6996          4
265     5310          1
267     3361          5
260     2809          3
264     5249          7

An example of the output I want is:

days    treatments    institution_id
266     6996          4
265     5310          1
267     3361          5
260     2809          3
264     5249          7
                      9

I know I can achieve this by using the following query:

SELECT *
FROM (
       SELECT institution_id
       FROM ...
       WHERE institution_id IN (1, 3, 4, 5, 7, 9)
     ) AS sub
RIGHT JOIN generate_series(1,9) ON generate_series = institution_id
WHERE generate_series IN (1, 3, 4, 5, 7, 9)

However, this is extra work because generate_series(1,9) creates institution_ids I'm not interested in, it requires that I know the max institution_id a priori, and it introduces an unnecessary WHERE clause. Ideally I'd like a query like the following:

SELECT *
FROM (
       SELECT institution_id
       FROM ...
       WHERE institution_id IN (1, 3, 4, 5, 7, 9)
     )
RIGHT JOIN (1, 3, 4, 5, 7, 9) ON generate_series = institution_id

Where (1, 3, 4, 5, 7, 9) is just an array that Postgres will use for the JOIN command. I've also already tried [1, 3, 4, 5, 7, 9] and {1, 3, 4, 5, 7, 9} both to no avail.

Any ideas?
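One idea: Postgres can expand an array into a row set with unnest(), which provides exactly the custom set to join against. A sketch (the ... stands for the original subquery from the question):

SELECT ids.institution_id, d.days, d.treatments
FROM unnest(ARRAY[1, 3, 4, 5, 7, 9]) AS ids(institution_id)
LEFT JOIN (
       SELECT institution_id, days, treatments
       FROM ...
     ) AS d ON d.institution_id = ids.institution_id;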

How can I tell if a SQL Server backup is compressed?

Posted: 26 Mar 2013 10:52 AM PDT

We have recently upgraded from SQL Server 2005 to SQL Server 2012. Under SQL Server 2005 there is no option to create compressed backups as there is in 2012.

If you attempt BACKUP DATABASE ... WITH (COMPRESSION); to a file that has already been initialized without compression, the BACKUP DATABASE command will fail with the following error message:

ERROR MESSAGE : BACKUP DATABASE is terminating abnormally.
ERROR CODE : 3013

How can I tell if an existing backup file is initialized for compressed backups?
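One way to check, assuming the backup file is at hand: RESTORE HEADERONLY lists every backup set in the file, and on SQL Server 2008 and later its output includes a Compressed column (1 = compressed).

RESTORE HEADERONLY FROM DISK = N'X:\Backups\MyDatabase.bak';  -- path is a placeholder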

Run Multiple Remote Jobs

Posted: 26 Mar 2013 02:04 PM PDT

I need to manually run a job on more than 150 SQL Server instances (SQL Server 2000, remote) from a SQL Server 2005 instance (the local server). The job is the same on all these instances. The job just calls a stored procedure without parameters, which is also the same across all the instances. These jobs are on a schedule, but now they want me to manually run the job on all the instances, or on specified instances, upon request.

What is the best practice for this? I have tried OPENROWSET to call the remote stored procedure, but each run of the job takes a couple of minutes, so if I use a loop to run all these jobs they will run one by one, and that's a long time. Ideally, it should be able to run the stored procedure on each instance without waiting for it to finish. Even better, it should be able to run the job on each instance without waiting for it to finish, so it leaves a record in the job history on each instance.

And the stored procedure is from a third party so it can't be altered.
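For what it's worth, msdb.dbo.sp_start_job returns as soon as the job is queued rather than waiting for it to finish, so calling it through a linked server starts the remote job asynchronously and still leaves a record in that instance's job history. A sketch (the linked server and job names are placeholders, and the linked server needs RPC Out enabled):

EXEC [REMOTE_SQL2000_01].msdb.dbo.sp_start_job @job_name = N'Run third-party proc';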

Stored Procedures under Source Control, best practice

Posted: 26 Mar 2013 04:13 PM PDT

I am currently using Tortoise SVN to source control a .NET Web Application. What would be the best way to bring our SQL Server stored procedures into Source Control? I am currently using VS 2010 as my development environment and connecting to an off-premise SQL Server 2008 R2 database using SQL Server Data Tools (SSDT).

What I have been doing in the past is saving the procs to .sql files and keeping those files under source control. I'm sure there must be a more efficient way than this? Is there an extension I can install on VS2010, SSDT, or even SQL Server on the production machine?

Secure Linked Server - Non-privileged user possible? Registry corruption?

Posted: 26 Mar 2013 04:39 PM PDT

Is it possible to use a non-privileged Windows domain account to impersonate itself in a linked server?

And why would it be unable to read the registry for available network protocols?

Overview: the only way I am able to have a scheduled job utilize a linked server is when the local account is mapped to a remote SQL account. I am unable to use 'Impersonate.'

Details:

  • Two SQL 2008 R2 Std instances on Win Server 2008 R2 x64
  • One default + one named
  • I'll use Server_A_Default + Server_A_Named to refer to the instances
  • Each instance has its own AD service account for MSSQL + Agent (4 unique AD accounts in use on the server)
  • Port hard coded for Named instance Server_A_Named
  • SPNs created for the 2 MSSQL accounts.
  • SPNs match the default and hardcoded named instance port respectively

Within the named instance (Server_A_Named):

  • Created a linked server on Server_A_Named to Server_B. We'll call the linked server SAN-B.

In SAN-B, I've used SQL Native Client 10.0 + OLE DB Provider for SQL.

Under the Security for SAN-B, I have 3 accounts:

  • NonPrivADuser
  • ADuserSysAdmin
  • LocalSQLuser

For logins not defined, connections will not be made.

As ADuserSysAdmin, I can click on test connection and it works.

The only way to get the linked server to work for NonPrivADuser is to map it to a local SQL account on Server_B. NonPrivADuser has access to Server_B's database as well.

This is the error that NonPrivADuser receives while trying to access the linked server using 'impersonate':

Executed as user: DOMAIN\NonPrivADuser. SQL Server Network Interfaces: Error getting enabled protocols list from registry [xFFFFFFFF]. [SQLSTATE 42000] (Error 65535) OLE DB provider "SQLNCLI10" for linked server "SAN-B" returned message "Login timeout expired". [SQLSTATE 01000] (Error 7412) OLE DB provider "SQLNCLI10" for linked server "SAN-B" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.". [SQLSTATE 01000] (Error 7412). The step failed.

I fired up procmon on Server_A while trying to use the linked server, SAN_B.

SQLAGENT.EXE can read HKLM\SOFTWARE\Microsoft\MSSQLSERVER\Client\SNI10.0. SQLSERVR.EXE receives a BAD IMPERSONATION on the same key.

I fired up regedit and 'users' has read permissions on that key.

Separate SQL Server data from schema

Posted: 26 Mar 2013 05:37 PM PDT

I am facing a kind of strange request:

We have application installations world wide. One of the countries that we wish to do business in has some strict laws regarding the handling of the data, such that it would be advantageous for us to store the data within the boundaries of that country.

So far, nothing weird, right? We will have an instance of our SQL Server database hosted within the borders of said country.

Here's the part that is outside of my knowledge: Someone in management heard that some other firms do this by keeping the schema and indexes in a local location, but storing the data in an encrypted form in the other country.

The only thing that I could think of that might support this would be to put the tables that contain sensitive data in a separate file or file group from the rest of the database. However, in this case, there would be an ocean between the file and the server! I can't imagine that we would get good performance from this sort of arrangement.

Is there anyone out there who has had experience with this sort of request? What technologies can I look at to accomplish this?

Booking system structure

Posted: 26 Mar 2013 09:31 AM PDT

I am making a room booking system for a friend's business.

Background: 3 rooms, multiple clients, bookings available 9-5, bookings last 1 hour.

For the database, is this too simple?

Booking record table

Reference | Client ID | Room ID | Timestamp

Client table

Client ID | Name | Phone | Email

Room table

Room ID | Name | Sink | Window | .....

Thanks for your help

Nathan.

Reindexing and its effects on SQL cache

Posted: 26 Mar 2013 01:37 PM PDT

I am currently looking into reindexing our SQL database, and cannot find any information on whether or not the query cache would be affected. Any help or information would be great. We are using SQL Server 2005 as our DBMS.

SQL Server 2008 search joining query

Posted: 26 Mar 2013 09:03 PM PDT

I have two tables: one is the message table and the other is the messageUser table. I need to run a check before inserting a new row.

For example:

M_Message

MessageId
=========
        1
        2
        3

M_MessageUser

MessageId | MemberId | BusinessId
==========|==========|===========
        1 |        1 |          0
        1 |        0 |          2
        2 |        1 |          0
        2 |        0 |          2
        2 |        3 |          0
        2 |        4 |          0
        3 |        1 |          0
        3 |        0 |          2
        3 |        0 |          4

If a member creates a new message, I would like to check whether the set of users in this message already exists in an earlier message. If yes, then attach the message to the previous conversation; otherwise create a new conversation.

Scene 1: Member 1 sends a message to Business 2. From the table we know there is a previous conversation, which is Message 1.

Scene 2: Member 1 sends a message to Business 2 and Member 3. From the table we know there is no previous conversation.

I've tried UNION and IN for the checking query, but I basically just get the whole list back. Can anyone give me a hand? Thanks.

UPDATE

I can solve scene 1 with the query below, but it fails for scene 2:

SELECT MessageId FROM M_MessageUser
WHERE (MemberId IN (0,1) AND BusinessId IN (0,2))
GROUP BY MessageId
EXCEPT
SELECT MessageId FROM M_MessageUser
WHERE (MemberId NOT IN (0,1) OR BusinessId NOT IN (0,2))
GROUP BY MessageId;
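For what it's worth, this kind of "exact participant set" check is usually written as relational division: require that every participant row of a message matches the new set and that the participant count equals the size of the new set. A sketch for scene 1 (Member 1 + Business 2, i.e. two participants):

SELECT mu.MessageId
FROM   M_MessageUser mu
GROUP  BY mu.MessageId
HAVING COUNT(*) = 2
   AND SUM(CASE WHEN (mu.MemberId = 1 AND mu.BusinessId = 0)
                  OR (mu.MemberId = 0 AND mu.BusinessId = 2)
                THEN 1 ELSE 0 END) = 2;

With the sample data this returns MessageId 1; for scene 2 the required count would be 3 and no MessageId qualifies, so a new conversation would be created.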

Unique index on 2 columns in mysql

Posted: 26 Mar 2013 03:49 PM PDT

I have one table in MySQL named 'UserFriends' where I am maintaining my website users' friends details.

here is the schema of the table (UserFriends)

id int,
Userid int,
friendid int,
createdate timespan

Now, I have created a unique index on userid & friendid, so I am not able to insert the same userid and friendid as a duplicate. But if I insert the same values with the two fields swapped, the row is accepted without generating an error.

Example:

insert into userfriends ( userid, friendid )
select 1, 2
--- insert perfect

insert into userfriends ( userid, friendid )
select 1, 2
--- show error because unique index comes in a picture

Now I insert:

insert into userfriends ( userid, friendid )
select 2, 1
--- records insert here (I don't want this)

How do I prevent this?
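A common workaround is to store every pair in a canonical order (smaller id first), so the existing unique index also rejects the reversed pair; a sketch:

insert into userfriends ( userid, friendid )
values (LEAST(2, 1), GREATEST(2, 1));  -- stored as (1, 2), so the duplicate is rejected

The ordering can be enforced in the application or in a BEFORE INSERT trigger instead of in every INSERT statement.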

Loading a CSV file from the local system into a MySQL DB on a remote server

Posted: 26 Mar 2013 12:17 PM PDT

Can we directly load a CSV file (which is on the local system) into a MySQL DB (which is installed on a remote server)?

As far as I can tell, the 'load data infile into table name' command can only be used for loading on the local system.
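For reference, with the LOCAL keyword the file is read by the client and streamed to the remote server, so it does not need to exist on the server. A sketch (both the client and the server must allow local_infile):

LOAD DATA LOCAL INFILE '/home/me/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;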

Why would I NOT use the SQL Server option "optimize for ad hoc workloads"?

Posted: 26 Mar 2013 09:43 AM PDT

I've been reading some great articles regarding SQL Server plan caching by Kimberly Tripp such as this one: http://www.sqlskills.com/blogs/kimberly/plan-cache-and-optimizing-for-adhoc-workloads/

Why is there even an option to "optimize for ad hoc workloads"? Shouldn't this always be on? Whether the developers are using ad-hoc SQL or not, why would you not have this option enabled on every instance that supports it (SQL 2008+), thereby reducing cache bloat?
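For reference, enabling it is a quick server-level change:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;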

Avoiding performance hit from GROUP BY during FULLTEXT search?

Posted: 26 Mar 2013 08:51 AM PDT

Is there any clever way to avoid the performance hit from using group by during fulltext search?

SELECT p.topic_id, min(p.post_id)
FROM forum_posts AS p
WHERE MATCH (p.post_text) AGAINST ('baby shoes' IN BOOLEAN MODE)
GROUP BY p.topic_id
LIMIT 20;

In this example it's fetching the lowest post_id for unique topic_ids that match the text.

With the group by to find the min, it's taking 600ms in a million row database, with about 50K rows examined.

If I remove the MIN but leave the GROUP BY, it's the same slowness, so it's the GROUP hit.

I suspect this is because it can only use one index, the fulltext ?

key: post_text | Using where; Using temporary; Using filesort

Query_time: 0.584685  Lock_time: 0.000137  Rows_sent: 20  Rows_examined: 57751
Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0

Without the GROUP BY it's 1ms so this has to be filesort speed?

(I've removed ORDER BY and everything else to isolate where the hit is)

Thanks for any insight and ideas.

(using MyISAM under mariadb if it matters)

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 26 Mar 2013 10:22 AM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of non-high IOPS RDS vs EC2 but nothing with the new high IOPS feature in RDS.

sp_startpublication_snapshot Parameter(s)

Posted: 26 Mar 2013 02:51 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf a new name (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help doing) Generating the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall... On this DB server, I have a 'shell' db that they will be running the SP from, that has a history table so I can keep track of who created/deleted databases using my SP's... The only parameter for sp_startpublication_snapshot is @publication... I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e.: the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog]

Any ideas about how to accomplish this?
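One possible approach, sketched with the names from the post: switch the database context inside dynamic SQL from the shell database, so the procedure runs in the publication database.

EXEC ('USE [WC48_Database1];
       EXEC sp_startpublication_snapshot @publication = N''upb_Inspection_PrimaryArticles'';');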

Thank you, Wes

Binlog has bad magic number

Posted: 26 Mar 2013 08:51 PM PDT

I keep getting this error whenever I start MySQL.

121028  1:38:55 [Note] Plugin 'FEDERATED' is disabled.
121028  1:38:55 InnoDB: The InnoDB memory heap is disabled
121028  1:38:55 InnoDB: Mutexes and rw_locks use Windows interlocked functions
121028  1:38:56 InnoDB: Compressed tables use zlib 1.2.3
121028  1:38:56 InnoDB: Initializing buffer pool, size = 16.0M
121028  1:38:56 InnoDB: Completed initialization of buffer pool
121028  1:38:56 InnoDB: highest supported file format is Barracuda.
121028  1:38:57 InnoDB: Waiting for the background threads to start
121028  1:38:58 InnoDB: 1.1.8 started; log sequence number 3137114
121028  1:38:58 [ERROR] Binlog has bad magic number;  It's not a binary log file that can be used by this version of MySQL
121028  1:38:58 [ERROR] Can't init tc log
121028  1:38:58 [ERROR] Aborting

121028  1:38:58 InnoDB: Starting shutdown...
121028  1:38:58 InnoDB: Shutdown completed; log sequence number 3137114
121028  1:38:58 [Note] C:\PROGRA~2\EASYPH~1.1\MySql\bin\mysqld.exe: Shutdown complete

I have already tried this.

I have an EasyPHP 12.1 setup on Windows 7x64 PC.

[SQL Server] Mixed joins



Mixed joins

Posted: 26 Mar 2013 01:33 AM PDT

I have a table (tblTimesheetWOErrors) containing an error list with (ErrorID, WONbr, TimesheetID, Username, Date). The WONbr refers to a unique (non-PK) index on the tblWorkOrder table, and may be invalid. I have a Timesheet Master that identifies the Vendor as well. Problem is, I can't seem to get the mixed joins to work properly. Either I get no rows returned, or only valid WOs returned...that's what is happening here. I read up on mixed joins, and followed the advice I found to eliminate them by using "sub-selects". Herewith my SQL. Can anyone help me here?

[code="sql"]
SELECT CASE WHEN v.[Vendor Name] IS NULL THEN 'No Vendor' ELSE v.[Vendor Name] END AS TD,
       e.TimesheetID AS TD,
       e.UserName AS TD,
       e.WorkOrderNbr AS TD,
       format(e.ErrorDate, 'd') AS TD,
       CASE WHEN w.WOExpiry IS NULL THEN 'Invalid WO' ELSE format(w.WOExpiry, 'd') END AS TD
FROM tblTimesheetWOErrors e
join (select e1.TimesheetID, wo.*
      from tblTimesheetWOErrors e1
      left join tblWorkOrders wo on e1.WorkOrderNbr = wo.WONbr) w
  on e.TimesheetID = w.TimesheetID
join (select m1.tsid, v1.*
      from tblTimesheetMaster m1
      join tblVendor v1 on m1.TSVendorID = v1.ID) v
  on e.TimesheetID = v.TSID
[/code]

tblTimesheetWOErrors (WONbr 655487 is invalid):

ID  UserName      ErrorDate                WorkOrderNbr  TimesheetID
23  jamesshaffer  2013-03-23 00:00:00.000  TIALLOC       5025
24  nfox          2013-03-24 00:00:00.000  655487        5013
25  jamesshaffer  2013-03-23 00:00:00.000  TIALLOC       5025
26  nfox          2013-03-24 00:00:00.000  655487        5013
27  jamesshaffer  2013-03-23 00:00:00.000  TIALLOC       5025
28  nfox          2013-03-24 00:00:00.000  655487        5013

Results:

wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO
wConstruction Inc  5013  nfox  655487  3/24/2013  Invalid WO

[MS SQL Server] Running SQL Profiler Trace on a specific table



Running SQL Profiler Trace on a specific table

Posted: 26 Mar 2013 04:18 AM PDT

Is there a way to run a SQL Trace on a specific table? I've got a table in the production environment that keeps having values changed in a certain column, and I'm trying to find what is causing the change.

SQL Job Agent History format

Posted: 25 Mar 2013 11:55 PM PDT

Is there any way to fix the formatting? I'm just getting lines of text without carriage return/line feeds. It makes it very difficult to read. I know I can log to a file, but it'd be nice to be able to view it without having to map a drive and dig for the log.

Script for database level object permissions ?

Posted: 26 Mar 2013 02:02 AM PDT

Hello,

I need to find a script that will list all permissions granted to public ONLY at the database level (i.e. SELECT, EXECUTE, ...), like this vulnerability below:

[Object Name:system_sql_modules] [Granted To:public] [Granted By:dbo] [Database:ABC] [Permission:SELECT] [Schema Name:sys] [State:GRANT] [Class:OBJECT_OR_COLUMN]

Much appreciated!
JR
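A sketch of a per-database query for explicit grants to public (run it in each database, or wrap it in sp_MSforeachdb):

SELECT pr.name                         AS grantee,
       pe.permission_name,
       pe.state_desc,
       pe.class_desc,
       OBJECT_SCHEMA_NAME(pe.major_id) AS schema_name,
       OBJECT_NAME(pe.major_id)        AS object_name
FROM   sys.database_permissions pe
JOIN   sys.database_principals  pr ON pr.principal_id = pe.grantee_principal_id
WHERE  pr.name = 'public'
  AND  pe.state_desc = 'GRANT';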

Log Backup

Posted: 25 Mar 2013 08:34 AM PDT

After I take a log backup, shouldn't I see a decrease in the size of my transaction log file? Do I need to do anything to make the file size decrease?
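For reference: a log backup only marks space inside the log file as reusable; it does not shrink the physical file. If the file itself has to be smaller, it needs an explicit shrink after the backup (the logical file name below is a placeholder):

DBCC SHRINKFILE (MyDatabase_Log, 1024);  -- target size in MB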

Column encryption and decription........

Posted: 26 Mar 2013 01:34 AM PDT

Hello experts, I need to encrypt 4 columns in SQL Server 2008 and decrypt them again. Do I have to do one column at a time or all together? If somebody could give me more info with the steps I would really appreciate it.

Catching culprits of high tempdb growth

Posted: 25 Mar 2013 11:48 PM PDT

Hello,

We are facing issues with tempdb on our SQL Server 2008 clustered instance. The SQL version is Microsoft SQL Server 2008 (SP1) - 10.0.2714.0 (X64) May 14 2009 16:08:52 Copyright (c) 1988-2008 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2).

Once or twice a month, tempdb grows to unexpectedly high values and log file growth fills up the drive. The problem is that it happens at random times and there is no trend. We are looking to find out what is causing tempdb to fill. Is there a script that I can deploy in my environment to find the culprits that are growing my tempdb? I tried searching through the default trace files but those are not very helpful. This has been a headache for the last 3 months or so, and every time we have to restart SQL Server during production hours to solve the problem.

Any help would be appreciated. Thanks in advance!!
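A sketch of a query that shows which sessions currently hold the most tempdb space; captured on a schedule (for example by an Agent job writing the results to a table), it can identify the culprit the next time the growth happens:

SELECT su.session_id,
       su.user_objects_alloc_page_count,
       su.internal_objects_alloc_page_count,
       s.login_name,
       s.program_name
FROM   sys.dm_db_session_space_usage AS su
JOIN   sys.dm_exec_sessions          AS s ON s.session_id = su.session_id
ORDER  BY su.internal_objects_alloc_page_count DESC;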

starting sql server

Posted: 25 Mar 2013 06:15 PM PDT

Is it possible to restart SQL Server in single-user mode? If yes, is there any impact on the server?

SQL SERVER 2008 PRO installation

Posted: 25 Mar 2013 08:55 PM PDT

Hello, all. Can anyone please help? I have Visual Studio 2008 Pro installed on my system. During the SQL Server 2008 Pro installation I got stuck at the installation rules, getting an error telling me to install Visual Studio 2008 SP1. So I installed that and restarted the system. Then I ran the SQL Server 2008 setup again, but it still shows the same error. I checked whether SP1 is installed, and it is there for sure. I also have SQL Server 2008 Express on my system; would that be a problem? Can anyone please help me with this? I need it urgently to start my project.

HIgh CPU usage by SQLserver.exe

Posted: 25 Mar 2013 06:49 AM PDT

Hi All,

We have moved one of our 2005 databases to a 2008 R2 Windows server with 16 GB RAM and 4 CPUs. The database size is 2.7 GB. I see that the CPU utilisation by this single database is very high, reaching 100% most of the time. Sometimes the website crashes and the CPU drops. How can I trace and find the exact problem with that database? I have done a rebuild of indexes and updated stats; queries were going parallel, so I changed the max degree of parallelism to 2. Can anyone please suggest how I can trace which session is consuming the most CPU? Thanks.
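A sketch of a query that lists the currently running requests by CPU time, which is one way to see which session (and statement) is consuming the CPU:

SELECT r.session_id, r.cpu_time, r.total_elapsed_time, r.status,
       t.text AS running_sql
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.session_id <> @@SPID
ORDER  BY r.cpu_time DESC;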

MDW and upload schedules from different instances

Posted: 25 Mar 2013 11:08 AM PDT

I have recently set up MDW across our SQL estate. There are approximately 40 instances being monitored, all sending their data to a central sysutility_mdw database. I started to run into issues with uploads failing and timing out, so I staggered the upload schedules. Each set of upload jobs from an instance still kicked off at the same time (every 15 mins) but only happening once a day. However, when I now look at the reports a "No Data Available" message appears despite the upload completing without errors.

I then staggered one set of uploads to start 2 minutes apart (uploading every 15 mins and every 17 mins) and this report is now displaying data again. If the upload only happens once a day, where has the data gone? Each of these is running in cached mode, so I assumed all data for the last 24 hours would just be uploaded, but this is not the case?

Can anyone shed any light on this? In terms of locking issues on the sysutility_mdw database, what would be the best upload schedule? If I have 10 instances, should the upload times be staggered from each instance so data hits the sysutility_mdw database at different times, or is it safe to just stagger the upload times between jobs on the one instance, but upload the data at the same time from all instances? Or should each instance have its own MDW database?

Hope this makes sense?

[Articles] A Good Security Response



A Good Security Response

Posted: 25 Mar 2013 11:00 PM PDT

Evernote recently had a security incident and forced all users to reset their passwords. Many people thought this was a good response to a security incident. Would your company act in a similar manner?


[SQL 2012] Setup Tran Replication from 2012 to 08r2



Setup Tran Replication from 2012 to 08r2

Posted: 26 Mar 2013 01:56 AM PDT

Is it possible to set up replication from SQL 2012 (Publisher) to 2008 R2 (Subscriber)?

ErrorLog Found in sql server 2012

Posted: 25 Mar 2013 06:42 PM PDT

Hi, I found the below error in my error log in SQL Server 2012. Kindly advise.

Error:
An error occurred in Service Broker internal activation while trying to scan the user queue 'msdb.dbo.InternalMailQueue' for its status. Error: 1222, State: 51. Lock request time out period exceeded. This is an informational message only. No user action is required.

Can anyone let me know the impact of this, as it is a production server? Kindly let me know ASAP.

Regards,
Vishal

SQL 2012 Cluster Multiple Instances Procedure !

Posted: 25 Mar 2013 08:40 PM PDT

Hello,

Kindly help me: do I need different resources for each instance, or can I use the same for all the instances I want to install, like SAN drive, IP address, service account?

Kindly give me the detailed procedure to follow, as best practice.

Regards!

ReadOnly intent through SSMS

Posted: 25 Mar 2013 05:00 PM PDT

I know that you have to specify ;ApplicationIntent=ReadOnly when you want to connect to a read-only replica in an availability group through SSMS, but I wondered if there was a way to configure this somehow so you didn't have to remember it every time? Unfortunately you don't get the connection properties box when registering a server, so I'm not sure if this is possible. We have some users who are complaining that they have to specify the option. Is this possible?

[T-SQL] I think I already know the answer but...



I think I already know the answer but...

Posted: 26 Mar 2013 12:48 AM PDT

I suspect the answer is going to be "No" but thought I'd check with more experienced heads than my own. I have several third-party applications that I'm supporting and cannot change the T-SQL they are using, but I CAN optimize the hell out of their database if necessary. My question is this: there are several tables that are being hit with "Select * from..." and those queries return incredibly slowly. There are no indexes or primary keys on these tables, but there are a few candidates should that route be helpful. So the question is: would an index or primary key speed up a select * statement? If so, why?

Extract Saturday through Friday as the weekly date range

Posted: 25 Mar 2013 05:40 AM PDT

I have data with attendance hours/dates for every day of the month/year. I need to create a query where I can select a range of dates, such as Jan. 1, 2013 to March 31, 2013, but the output should be a weekly total of attendance hours for each Saturday-through-Friday weekly range within the specified beginning and end dates. Can this be done?
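It can be done; a sketch of the usual trick, with assumed table and column names: 1900-01-06 was a Saturday, so integer-dividing the day difference by 7 buckets every date into its Saturday-to-Friday week.

SELECT DATEADD(DAY, (DATEDIFF(DAY, '19000106', att_date) / 7) * 7, '19000106') AS week_starting,
       SUM(att_hours) AS weekly_hours
FROM   attendance
WHERE  att_date >= '20130101' AND att_date < '20130401'
GROUP  BY DATEADD(DAY, (DATEDIFF(DAY, '19000106', att_date) / 7) * 7, '19000106')
ORDER  BY week_starting;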

Alpha Numeric number Generation

Posted: 25 Mar 2013 05:49 AM PDT

Hello,

I need help writing a stored procedure for automatic alphanumeric number generation. If I pass an alphanumeric value in the format mentioned below, I should get the next number based on the following logic.

Format: it consists of a 6-digit alphanumeric code, where E is constant. For example: E00001.

The SP should generate numbers like this: E00001 to E99999; when it reaches E99999, the 9 on the left is replaced with "A" - EA0001. Then EA0001 - EA9999, EB0001 - EB9999, ..., EZ0001 - EZ9999; when it reaches EZ9999 it should be incremented to E0A001.

Thanks,
Nick

How to spread a monthly value equally across all business days - help me write a stored procedure for the scenario below?

Posted: 25 Mar 2013 11:08 PM PDT

Hi everybody, I am stuck creating a stored procedure. My scenario is: I have a table Sales_Month with values like this:

State District Division Month_Number Total
35 3 15 1 692.1666
35 3 15 2 692.1666
35 3 15 3 692.1666
35 3 15 4 692.1666
35 3 15 5 692.1666
35 3 15 6 692.1666
35 3 15 7 692.1666
35 3 15 8 692.1666
35 3 15 9 692.1666
35 3 15 10 692.1666
35 3 15 11 692.1666
35 3 15 12 692.1666
35 4 15 1 786.9166
35 4 15 2 786.9166
35 4 15 3 786.9166
35 4 15 4 786.9166
35 4 15 5 786.9166
35 4 15 6 786.9166
35 4 15 7 786.9166
35 4 15 8 786.9166
35 4 15 9 786.9166
35 4 15 10 786.9166
35 4 15 11 786.9166
35 4 15 12 786.9166
35 8 15 1 1318.75
35 8 15 2 1318.75
35 8 15 3 1318.75
35 8 15 4 1318.75
35 8 15 5 1318.75
35 8 15 6 1318.75
35 8 15 7 1318.75
35 8 15 8 1318.75
35 8 15 9 1318.75
35 8 15 10 1318.75
35 8 15 11 1318.75
35 8 15 12 1318.75
35 9 15 1 623.6666
35 9 15 2 623.6666
35 9 15 3 623.6666
35 9 15 4 623.6666
35 9 15 5 623.6666

I want to create a stored procedure to get output like this:

State District Division Month_Number Total Business Days Sales per day Date
35 3 15 1 692.1666 22 0.0000 1/1/2013 Holiday
35 3 15 1 692.1666 22 31.4621 1/2/2013
35 3 15 1 692.1666 22 31.4621 1/3/2013
35 3 15 1 692.1666 22 31.4621 1/4/2013
35 3 15 1 692.1666 22 0.0000 1/5/2013 Sat'Day
35 3 15 1 692.1666 22 0.0000 1/6/2013 Sun'Day
35 3 15 1 692.1666 22 31.4621 1/7/2013
35 3 15 1 692.1666 22 31.4621 1/8/2013
35 3 15 1 692.1666 22 31.4621 1/9/2013
35 3 15 1 692.1666 22 31.4621 1/10/2013
35 3 15 1 692.1666 22 31.4621 1/11/2013
35 3 15 1 692.1666 22 0.0000 1/12/2013 Sat'Day
35 3 15 1 692.1666 22 0.0000 1/13/2013 Sun'Day
35 3 15 1 692.1666 22 31.4621 1/14/2013
35 3 15 1 692.1666 22 31.4621 1/15/2013
35 3 15 1 692.1666 22 31.4621 1/16/2013
35 3 15 1 692.1666 22 31.4621 1/17/2013
35 3 15 1 692.1666 22 31.4621 1/18/2013
35 3 15 1 692.1666 22 0.0000 1/19/2013 Sat'Day
35 3 15 1 692.1666 22 0.0000 1/20/2013 Sun'Day
35 3 15 1 692.1666 22 31.4621 1/21/2013
35 3 15 1 692.1666 22 31.4621 1/22/2013
35 3 15 1 692.1666 22 31.4621 1/23/2013
35 3 15 1 692.1666 22 31.4621 1/24/2013
35 3 15 1 692.1666 22 31.4621 1/25/2013
35 3 15 1 692.1666 22 0.0000 1/26/2013 Sat'Day
35 3 15 1 692.1666 22 0.0000 1/27/2013 Sun'Day
35 3 15 1 692.1666 22 31.4621 1/28/2013
35 3 15 1 692.1666 22 31.4621 1/29/2013
35 3 15 1 692.1666 22 31.4621 1/30/2013
35 3 15 1 692.1666 22 31.4621 1/31/2013

I want to display this for every State-District-Division combination. If the date is either a holiday or a weekend (Saturday or Sunday), then "sales per day" should be zero. I have created the following view for dates:

CREATE VIEW [dbo].[view1]
AS
SELECT Date_id,
       DateKey,
       DayKey,
       DateMonthId,
       Year_Month_Id,
       ClosingDateMonthId,
       Day_Ind = 1,
       WeekEnd_Ind = CASE WHEN DATENAME(DW, DATEKEY) IN ('Saturday','Sunday') THEN 1 ELSE 0 END,
       CASE WHEN Holiday_Ind = 1 AND DATENAME(DW, DATEKEY) NOT IN ('Saturday','Sunday') THEN 1 ELSE 0 END AS Holiday_Ind
FROM DimDate

Note: from the above view we get the business days and holidays (including weekends). We already have another table containing all the columns ([StateId] [int] NULL, [District], [Division], [Month_Number], [Total], [BuisDays], [SalesPerDay], [Date]). All columns have values except BuisDays, SalesPerDay and Date, which are NULL. I want to update that table with the BuisDays, SalesPerDay and Date values from this query.

Please kindly help me to create the stored procedure; I have been trying since this morning.

Return 1 record with data from multiple tables

Posted: 25 Mar 2013 12:44 PM PDT

All,

I have data in numerous tables. The first 2 tables are simple:

tblProducts
-----------------
ProductID
ProductName
CategoryID

tblCategories
----------------
CategoryID
CategoryName

Each product links to a category. That part is easy, as I can return the values I need from both tables for the ProductID passed in, like this:

ProductID | ProductName | CategoryID | CategoryName
--------------------------------------------------------------
13        | Magnets     | 1          | Misc.

However, I have a third table that will always have 2 records for every Product:

tblProductPricing
----------------
ProductPricingID
ProductID
ProductQuantity (int)
ProductCost

Every product is sold in a quantity of 1 and some other quantity (let's say 10). The cost for 1 is $1, the cost for 10 is $8 because there's a discount for buying in bulk.

How can I get the details of the Product plus the 2 records from tblProductPricing in a single record, like this:

ProductID | ProductName | CategoryID | CategoryName | ProductSize1 | ProductCost1 | ProductSize2 | ProductCost2
--------------------------------------------------------------
13        | Magnets     | 1          | Misc.        | 1            | $1           | 10           | $8

I can't figure it out without seemingly terrible code and I'd like to know if there's an easy way to do this.

Thanks,
Mark
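A sketch using conditional aggregation over a row number, assuming each product has exactly two pricing rows (object names are taken from the post):

WITH p AS (
    SELECT ProductID, ProductQuantity, ProductCost,
           ROW_NUMBER() OVER (PARTITION BY ProductID ORDER BY ProductQuantity) AS rn
    FROM   tblProductPricing
)
SELECT pr.ProductID, pr.ProductName, c.CategoryID, c.CategoryName,
       MAX(CASE WHEN p.rn = 1 THEN p.ProductQuantity END) AS ProductSize1,
       MAX(CASE WHEN p.rn = 1 THEN p.ProductCost     END) AS ProductCost1,
       MAX(CASE WHEN p.rn = 2 THEN p.ProductQuantity END) AS ProductSize2,
       MAX(CASE WHEN p.rn = 2 THEN p.ProductCost     END) AS ProductCost2
FROM   tblProducts   pr
JOIN   tblCategories c ON c.CategoryID = pr.CategoryID
JOIN   p             ON p.ProductID = pr.ProductID
WHERE  pr.ProductID = 13
GROUP  BY pr.ProductID, pr.ProductName, c.CategoryID, c.CategoryName;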

How to show results from three queries as one, please?

Posted: 25 Mar 2013 11:10 PM PDT

Hello All,

I have the 3 SQL queries below, which give results ordered by account. The question is how to show the 1st record from the first query, then the second query's results (if the same account no exists), then the third query's results (if the same account no exists), so the expected results look like they come from a single query even though it uses 3 queries. The results currently look like this:

First Query Results
103455,3000,'Clev',200,'14/3/2012','Sr Surgant Gen','16/12/2012','pention premium'
103457,2000,'Silv',990,'13/3/2012','Cye Mers','17/11/2012','RollOver'
103461,1400,'Clev',110,'23/3/2012','Prince Chau','12/11/2012','Savings'
103462,5500,'Disl',450,'30/3/2012','NULL','12/10/2012','pention premium'
103463,3770,'Goat',220,'17/3/2012','Manhodu Kim','19/11/2012','Benefits'
103464,3890,'Perk',320,'26/3/2012','Silvra Gen','10/12/2012','pention premium'

Second Query Results
103455,'ZZZ',2000,5.54,'Y','14/3/2014','Active'
103463,'YYY',3000,3.54,'N','17/3/2014','Active'
103464,'XXX',5000,6.45,'N','16/5/2014','Active'
103464,'XXX',4000,3.45,'Y','16/3/2014','Active'

Third Query Results
103455,'ZZZ',2000,5.54,'Y','14/3/2012','INActive'
103457,'YYY',3000,3.54,'N','17/3/2012','InActive'
103461,'XXX',5000,6.45,'N','16/5/2012','InActive'
103461,'XXX',4000,3.45,'Y','16/3/2012','InActive'
103461,'XXX',4000,3.45,'Y','16/3/2012','InActive'

Expected results (just one result set, broken by account no across the three queries, as below):

103455,3000,'Clev',200,'14/3/2012','Sr Surgant Gen','16/12/2012','pention premium'
103455,'ZZZ',2000,5.54,'Y','14/3/2014','Active'
103455,'ZZZ',2000,5.54,'Y','14/3/2012','INActive'
103457,2000,'Silv',990,'13/3/2012','Cye Mers','17/11/2012','RollOver'
103457,'YYY',3000,3.54,'N','17/3/2012','InActive'
103461,1400,'Clev',110,'23/3/2012','Prince Chau','12/11/2012','Savings'
103461,'XXX',5000,6.45,'N','16/5/2012','InActive'
103461,'XXX',4000,3.45,'Y','16/3/2012','InActive'
103461,'XXX',4000,3.45,'Y','16/3/2012','InActive'
103462,5500,'Disl',450,'30/3/2012','NULL','12/10/2012','pention premium'
103463,3770,'Goat',220,'17/3/2012','Manhodu Kim','19/11/2012','Benefits'
103463,'YYY',3000,3.54,'N','17/3/2014','Active'
103464,3890,'Perk',320,'26/3/2012','Silvra Gen','10/12/2012','pention premium'
103464,'XXX',5000,6.45,'N','16/5/2014','Active'
103464,'XXX',4000,3.45,'Y','16/3/2014','Active'

Queries:

SELECT AcctNoGiaaBalanceOffice,totalbalItemDate ContactLastContactedPilPlan
FROM SemerSouther
order by AcctNo

SELECT AcctNoVirtualName,Capbal,VadareRate,PilPlan,LoadDate, 'Active' Status
FROM TableClevland
where LoadDate > GETDATE()
order by AcctNo

SELECT AcctNoVirtualName,Capbal,VadareRate,PilPlan,LoadDate,'INActive' as Status
FROM TableClevland
where LoadDate <= GETDATE()
order by AcctNo

Please assist me. Thanks in advance,
Dhani

Reorder a custom order column

Posted: 25 Mar 2013 06:02 PM PDT

Hi,

Been stuck on this for a little while and hoping that someone can help me figure out my logic problems. My main issue is when I want to change the ordering and move an earlier number to a later number (think numbers 1 to 8 and I want to move number 2 to the number 6 position). The other side works (moving 6 to 2). The initial column is only there so I can verify my code - it does not exist in the real table. There is only one order column in the real table. The changed column is where I'm hoping to see the correct reordering done. As this is required to work over multiple tables I'm hoping it would be suitable to convert into an iTVF or something similar. Some guidance on how to do it and then use it would be great.

[code="sql"]
IF OBJECT_ID('TempDB..#temp13','U') IS NOT NULL drop table #temp13

select *
into #temp13
from ( select 1 "changed", 1 "initial" union all
       select 2,2 union all
       select 3,3 union all
       select 4,4 union all
       select 5,5 union all
       select 6,6 union all
       select 7,7 union all
       select 8,8) b

declare @startNum as int = 1
declare @oldnum as int = 3

if @startNum < @oldnum
begin
    update #temp13 set changed = -1 from #temp13 where changed = @oldnum
    update #temp13 set changed = changed+1 from #temp13 cross join Tally where N >= @startNum and N = changed and N <= @oldnum
    update #temp13 set changed = @startNum from #temp13 where changed = -1
end
else
begin
    --unsure what needs to go here
    --tried multiple things but none have worked so far
end

select *
from #temp13
[/code]

Expected output where @startNum < @oldnum (startnum 2, oldnum 4):

changed initial
1       1
3       2
4       3
2       4
5       5
6       6
7       7
8       8

Expected output where @startNum > @oldnum (startnum 4, oldnum 2):

changed initial
1       1
4       2
3       3
2       4
5       5
6       6
7       7
8       8

So after shuffling the numbers around I would also need to ensure that they are numbered 1 to x. If something isn't clear or more info is required please let me know and I will do my best.

Query help

Posted: 25 Mar 2013 12:09 PM PDT

[code="other"]
I need a query to get the expected output.

Table: Students

tguid                    stuName           studwor               stid
------------------------ ----------------- --------------------- -----
642-4d5d-9af0-4c7a18dd   ChrisName         Chris                 255
4171-8655-2de255b88e08   ChrisCity         SAN City              179
2a0d-4100-bd1c-343882    ChrisCounty       Wendy                 179
48f0-b455-5207b187e639   ChrisphoneNumber  This is a test phone  179
4d5d-9af0-4c7a18ddd7b2   ChrisDName        WTS Test              180
4041-ba50-1085acf7d86c   ChrisDType        This is for Dtpetest  180

Expected output:

ChrisName  ChrisCity  ChrisCounty  ChrisphoneNumber      ChrisDName  ChrisDType
---------- ---------- ------------ --------------------- ----------- --------------------
Chris      SAN City   Wendy        This is a test phone  WTS Test    This is for Dtpetest

Thanks for help in advance.
[/code]

help with inventory of database logins and permissions

Posted: 25 Mar 2013 07:13 AM PDT

I ran this script to get a list of all database logins, but it does not give the same results as when I expand Database -> Security -> Logins:

SELECT name, type_desc FROM sys.database_principals

I've been tasked with inventorying the names of all databases, logins, and corresponding permissions. What would be the best approach?

Splitting rows in HTML from sp_send_dbmail

Posted: 25 Mar 2013 06:45 AM PDT

Greetings,

I would like to send a daily email that shows, in HTML, the changes to our staff from the previous day. We have a table called EmployeeHistory that includes records for changes to our Employee tables, with the date of the change along with the EmpId, Name, etc.

I have not done much work with HTML, but I have the following code that selects records that have changed into a temp table called #History, then formats it into HTML and sends the email using sp_send_dbmail:

Select
Curr.EmpID as CurrEmpID,
Curr.FirstName as CurrFirstName,
Curr.LastName as CurrLastName,
Curr.JobTitle as CurrJobTitle,
Curr.DivisionID as CurrDivisionID,
Curr.DepartmentID as CurrDepartmentID,
Curr.DateChanged as CurrDateChanged,
Prev.EmpID as PrevEmpID,
Prev.FirstName as PrevFirstName,
Prev.LastName as PrevLastName,
Prev.JobTitle as PrevJobTitle,
Prev.DivisionID as PrevDivisionID,
Prev.DepartmentID as PrevDepartmentID,
Prev.DateChanged as PrevDateChanged
into #History
from EmployeeHistory as Curr
join EmployeeHistory as Prev
  on Curr.EmpID = Prev.EmpID AND
     DATEDIFF(day, Prev.DateChanged, Curr.DateChanged) = 1 AND
     (Curr.FirstName <> Prev.FirstName or
      Curr.LastName <> Prev.LastName or
      Curr.JobTitle <> Prev.JobTitle or
      Curr.DivisionID <> Prev.DivisionID or
      Curr.DepartmentID <> Prev.DepartmentID)
order by Curr.LastName, Curr.FirstName

DECLARE @HTML NVARCHAR(MAX);
SET @HTML = N'

Staff Data changes

' + N'' + N'' + N'' + N'' + CAST ( ( SELECT td = CurrDateChanged, '', td = CurrEmpID, '', td = CurrLastName, '', td = CurrFirstName, '', td = CurrJobTitle, '', td = CurrDivisionID, '', td = CurrDepartmentID, '', td = PrevDateChanged, '', td = PrevEmpID, '', td = PrevLastName, '', td = PrevFirstName, '', td = PrevJobTitle, '', td = PrevDivisionID, '', td = PrevDepartmentID, '' from #History order by CurrLastName, CurrFirstName for XML PATH('tr'), TYPE ) as NVARCHAR(MAX) ) + N'
Date Changed | EmpId | Last Name | First Name | Job Title | Division | Department
' ;

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'Gmail',
    @recipients = 'XXXX',
    @subject = 'Staff Data changes',
    @body = @HTML,
    @body_format = 'HTML'

For each employee that has changed since yesterday, I would like the current data formatted onto one row of the HTML and the previous data formatted onto a second row. What is the syntax needed to separate this data into two rows?

Thank you for your help.
gmrose

PK insertion error

Posted: 25 Mar 2013 07:37 AM PDT

Hello everyone. I hope someone can confirm or deny my thinking here and offer some advice. Here is my issue.

I do a daily load into a staging table. The staging table then loads the data into the main table. On successful completion the staging table is truncated at the end of my SP, ready for the next load. The next day it runs again, etc.

Now I'm getting an error: error 2627 - PK violation. "Violation of PRIMARY KEY constraint 'PK_DandD1'. Cannot insert duplicate key in object 'XXX.DandD'. The duplicate key value is (4714280, 1117455)."

I have a composite PK which is made up of the 2 values listed. On checking the destination table I can see an entry already for that PK, so my insertion from the staging table fails.

My questions are (and I hope I know the answer already, I just need it sanity checked):

1. If my insertion fails for 1 row, all insertions contained in the staging table will fail (the rest do not seem to violate the PK)?
2. If I remove the 1 row in the destination table, all my subsequent insertions from the staging table that do not violate the PK will work? Then I can investigate the reasons for the violation, etc.

Does that make sense?

Odd behavior

Posted: 25 Mar 2013 07:58 AM PDT

I have the following line generating a where condition for me in my dynamic SQL statement:

[code="sql"]
(select cast(sys.all_columns.name as nvarchar) + ' = ''1900-01-01 00:00:00.000'' or '
 from sys.all_columns
 where sys.all_columns.object_id = ac.object_id
   and sys.all_columns.system_type_id = 61
 for xml path ('')) as Conditions
[/code]

which generates something like:

[quote]if exists (select top 1 * from [Address] where createdDT = '1900-01-01 00:00:00.000' or lastModifiedDT = '1900-01-01 00:00:00.000')[/quote]

Seem good so far? If I change the code to a less than or equal to, like this:

[code="sql"]
(select cast(sys.all_columns.name as nvarchar) + ' <= ''1900-01-01 00:00:00.000'' or '
 from sys.all_columns
 where sys.all_columns.object_id = ac.object_id
   and sys.all_columns.system_type_id = 61
 for xml path ('')) as Conditions
[/code]

I get THIS in my now NON-executable statement:

[quote]if exists (select top 1 * from [Address] where createdDT lt;= '1900-01-01 00:00:00.000' or lastModifiedDT = '1900-01-01 00:00:00.000')[/quote]

What am I missing?

Problem with CAST to VARCHAR with SUBSTRING Function

Posted: 19 Jan 2013 01:24 AM PST

I'm having trouble with a simple CAST to VARCHAR statement.

[code="sql"]
SELECT CASE
         WHEN CAST(LEFT(Customer.STARTDATE, 2 AS VARCHAR(2))) = '98'
              THEN CAST(substring(Customer.STARTDATE, 3, 2) AS VARCHAR(2))
         WHEN LEFT(CAST(Customer.STARTDATE, 2) AS VARCHAR(2)) = '99'
              THEN CAST(substring(Customer.STARTDATE, 3, 2) AS VARCHAR(2))
       END AS LossMo
FROM Customer
[/code]

Any help would be greatly appreciated.

First Stored Procedure

Posted: 20 Jan 2013 11:33 AM PST

I have written my first stored procedure to try to pass a table name in a variable to SSRS.

USE XXXX
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Create Procedure link_table_sp
@link varchar(50) output
AS
Begin
set @link = (SELECT '['+SCHEMA_NAME(schema_id)+'].['+name+']' AS SchemaTable
             FROM sys.tables where name like '%link%')
END

Then I try to call it in SSRS:

EXEC link_table_sp('SELECT * FROM' + @link)

But it says incorrect syntax near select.

Some guidance needed....

Posted: 25 Mar 2013 01:54 AM PDT

I am wondering how I should go about handling weighted-by-month calculations for a trending report. Currently, I just use (total/) * 12 to get the trending number.

Info: I work at a law firm where all the attorneys have a target number for the year. Based on historical numbers by month, each month is weighted (the tendency is that they close more towards the end of the year, so later months are weighted more heavily). As a stop gap (due to time limitations), I am going to use static numbers until I can create something to calculate the weight 'on the fly'.

So my question is: should I do the calculations in the expression on the report, or with T-SQL in the SELECT statement? How should I handle the cumulative weighted total? These are the static percent numbers by month:

(0.04, 0.06, 0.09, 0.08, 0.09, 0.08, 0.09, 0.09, 0.08, 0.09, 0.08, 0.13)
