Tuesday, October 1, 2013

[SQL Server 2008 issues] deleting all data\logins\views

deleting all data\logins\views

Posted: 30 Sep 2013 02:55 AM PDT

What is the best way to remove all data/views/logins from a database? The scenario is this: we have a database called 'X' to hold our client data and to interact with our in-house applications. Each customer has their own server with database 'X' installed. The database on each of the servers is the same format, that is, it has the same table structures/stored procs.

I have to create a new database 'X' on a new server for a new customer - so I have restored one of our current clients' database 'X' backups to the new server, and now have the database, but obviously with data that I need to get rid of completely whilst keeping the table structures, constraints, and seeding values (if any are present). Also, this has restored incorrect users under the Security\Users folder of the database. Do I just manually delete these?

I have used the following to disable referential integrity, truncate tables, and then re-enable referential integrity:

[code="sql"]
-- disable referential integrity
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
GO
EXEC sp_MSForEachTable 'TRUNCATE TABLE ?'
GO
-- enable referential integrity again
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO
[/code]

If anybody can offer any advice on the above topic I would love to hear from you! :-)
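For the restored users, one possible approach (a minimal sketch, assuming none of them own schemas or objects) is to script the DROP statements from sys.database_principals rather than clicking through SSMS:

[code="sql"]
-- generate DROP USER statements for non-system database principals
SELECT 'DROP USER ' + QUOTENAME(name) + ';'
FROM sys.database_principals
WHERE type IN ('S', 'U', 'G')   -- SQL users, Windows users and groups
  AND principal_id > 4;         -- skip dbo, guest, INFORMATION_SCHEMA, sys
[/code]

Review the generated statements before running them; users that own schemas would need those schemas transferred first.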

Send SQL Server Dashboard reports via database mail

Posted: 13 Sep 2012 05:57 AM PDT

Hi all, is it possible to enable a job which sends us the dashboard reports of a server through mail? Thanks in advance.
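A hedged sketch of the kind of job step this would take, using Database Mail; the profile name, recipients, and report query are all placeholders:

[code="sql"]
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',      -- assumed Database Mail profile
    @recipients   = 'dba-team@example.com',
    @subject      = 'Server dashboard report',
    @query        = 'SELECT database_name, MAX(backup_finish_date) AS last_backup
                     FROM msdb.dbo.backupset GROUP BY database_name',
    @attach_query_result_as_file = 1;
[/code]

The graphical SSMS dashboard reports themselves can't be emailed directly by Agent; a query-based summary like the above, or an SSRS report with a subscription, is the usual substitute.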

SQL to list the headers in a table?

Posted: 22 Nov 2011 10:44 AM PST

Does anyone know offhand the SQL to list all the headers in a specified table? Thank you!
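If "headers" means column names, a standard way (a sketch; schema and table names are placeholders):

[code="sql"]
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION;
[/code]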

Error Message: BACKUP failed to complete the command BACKUP LOG msdb

Posted: 30 Sep 2013 02:20 AM PDT

Hi, I am getting the following error message daily in my error log. I am running FULL backups, DIFF backups, and T-log backups. I see this message early every morning, and it looks like my T-log backup job is also running at the same time. I have also checked that MSDB is in SIMPLE recovery mode; it is not part of my T-log backup, only part of the weekly FULL backup.

ERROR: BACKUP failed to complete the command BACKUP LOG msdb. Check the backup application log for detailed messages.
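A quick sanity check worth running (a sketch): a BACKUP LOG attempt against a SIMPLE-recovery database fails exactly like this, so it is worth confirming the recovery model and then finding which job or tool issues the statement:

[code="sql"]
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'msdb';
[/code]

If msdb really is SIMPLE, the log-backup job (or a third-party backup tool) is probably still including it in its database list.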

SSIS source file with time in the name

Posted: 26 Sep 2013 06:18 AM PDT

Hi everyone. I have been tasked with building some SSIS packages to load data from a csv into the database each day. There are four file sources. Three of them are very simple because the file name is source_date.csv. The file name is predictable, so creating an expression that gets the current day's file is easy. One of the files is source_date_time. Because the time stamp is always a few seconds different, I don't know how to create an expression to get the file by name and load it.

What do people do when they need to have their package go out and grab the day's file if there is a time stamp in the name leading to an unpredictable file name?
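One common pattern (a sketch; the prefix and folder are assumptions): use a Foreach Loop container's file enumerator and let a wildcard absorb the time portion, building the file mask with an SSIS expression such as:

"source_" + (DT_WSTR,4)YEAR(GETDATE())
          + RIGHT("0" + (DT_WSTR,2)MONTH(GETDATE()),2)
          + RIGHT("0" + (DT_WSTR,2)DAY(GETDATE()),2)
          + "_*.csv"

The loop maps each matching file to a variable that the flat file connection manager reads, so the exact seconds in the name never matter.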

Can you please let me know the link to download the tool named as - DBHammer?

Posted: 30 Sep 2013 06:08 AM PDT

Can you please let me know the link to download the tool named DBHammer?

Transfer Data

Posted: 30 Sep 2013 04:51 AM PDT

Howdy everybody,

What is the best way to transfer 20 GB over a distance of 3,122 km (three thousand kilometers)?

1. Backup direct to target
2. Backup local and copy to target
3. Data Import/Export

... what do you suggest?
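Whichever option wins, a compressed backup shrinks what has to cross the wire (a minimal sketch; database and path names are placeholders, and WITH COMPRESSION needs an edition that supports it):

[code="sql"]
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
[/code]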

How to DEFINE REGEX for CSV FILE

Posted: 30 Sep 2013 03:29 AM PDT

I'm transferring data from a csv file; it is a fixed format and only the date changes. I want to split the date and the name to enter them into a table. For example:

20130930_Get_MY File Name.csv

should go into the table like below:

date_name: 20130930
file name: Get_MY File Name

Anyone have an idea?

Regards,
Shuan
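A minimal T-SQL sketch, assuming the date is always everything before the first underscore and the extension is always .csv:

[code="sql"]
DECLARE @f varchar(200) = '20130930_Get_MY File Name.csv';

SELECT LEFT(@f, CHARINDEX('_', @f) - 1) AS date_name,
       SUBSTRING(@f, CHARINDEX('_', @f) + 1,
                 LEN(@f) - CHARINDEX('_', @f) - 4) AS file_name;
[/code]

The same expressions work in an SSIS derived column (with FINDSTRING/SUBSTRING) if the split should happen during the load.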

How to run multiple stored procedures in parallel?

Posted: 30 Sep 2013 01:56 AM PDT

I have a stored procedure which runs in database DB1. This stored procedure will have other stored procedures inside it which should run in parallel:

[code="sql"]
Create procedure db1.proc1
AS
    use db2
    call proc2
    use db3
    call proc3
    use db4
    call proc4
[/code]

Is there a way to run this stored procedure in parallel, so that even if the execution of call proc2 fails, the other two still run? Thanks
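One common workaround (a sketch, not the only approach; the job names are assumptions): wrap each inner procedure in its own SQL Agent job and start them asynchronously, so the calls run concurrently and a failure in one does not stop the others:

[code="sql"]
CREATE PROCEDURE dbo.proc1
AS
BEGIN
    -- sp_start_job returns immediately, so these three run in parallel,
    -- each in its own session, isolated from each other's failures
    EXEC msdb.dbo.sp_start_job @job_name = N'Run db2.dbo.proc2';
    EXEC msdb.dbo.sp_start_job @job_name = N'Run db3.dbo.proc3';
    EXEC msdb.dbo.sp_start_job @job_name = N'Run db4.dbo.proc4';
END;
[/code]

(Each job would contain a single T-SQL step such as EXEC db2.dbo.proc2.)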

Trouble with nested CTE

Posted: 30 Sep 2013 03:27 AM PDT

I need to create a query that is comprised of subqueries. I am using a CTE, but one of the subqueries is also using a CTE... Can you nest CTEs like this?

[code="plain"]
WITH CTE_Results AS
(
    SELECT CASE WHEN HISTORY >= 2 AND ELA >= 4 AND MATH >= 4 AND SCIENCE >= 3
                 AND FL >= 3 AND VA >= 1 AND Prep >= 0
           THEN 'Yes' ELSE 'No' END AS [On Target?]
    FROM
    (
        SELECT COUNT(CASE WHEN CRS.U1 = 'A' AND (HIS.MK NOT LIKE '%FD%') THEN HIS.CN END) AS HISTORY,
               COUNT(CASE WHEN CRS.U1 = 'B' AND (HIS.MK NOT LIKE '%FD%') THEN HIS.CN END) AS ELA,
               COUNT(CASE WHEN CRS.U1 = 'C' AND (HIS.MK NOT LIKE '%FD%') THEN HIS.CN END) AS MATH,
               COUNT(CASE WHEN CRS.U1 = 'D' AND (HIS.MK NOT LIKE '%FD%') THEN HIS.CN END) AS SCIENCE,
               COUNT(CASE WHEN CRS.U1 = 'E' AND (HIS.MK NOT LIKE '%FD%') THEN HIS.CN END) AS FL,
               COUNT(CASE WHEN CRS.U1 = 'F' AND (HIS.MK NOT LIKE '%FD%') THEN HIS.CN END) AS VA,
               COUNT(CASE WHEN CRS.U1 = 'G' AND (HIS.MK NOT LIKE '%FD%') THEN HIS.CN END) AS Prep
        FROM dbo.CRS
        INNER JOIN dbo.HIS ON CRS.CN = HIS.CN
        INNER JOIN dbo.STU ON HIS.PID = STU.ID
        WHERE STU.ID = 4068968
    ) AS derived
    UNION ALL
    WITH cteSource(CN, U1) AS
    (
        SELECT r.CN, r.U1
        FROM dbo.SSS AS s
        INNER JOIN dbo.STU AS t ON t.SN = s.SN
        INNER JOIN dbo.CRS AS r ON r.CN = s.CN
        WHERE t.ID = 4068968
        UNION ALL
        SELECT r.CN, r.U1
        FROM dbo.HIS AS i
        INNER JOIN dbo.CRS AS r ON r.CN = i.CN
        WHERE i.PID = 4068968
    )
    SELECT CASE WHEN p.A >= 6 AND p.B >= 6 AND p.C >= 6 AND p.D >= 6
                 AND p.E >= 6 AND p.F >= 6 AND p.G >= 6
           THEN 'Yes' ELSE 'No' END AS [On Target?]
    FROM cteSource AS s
    PIVOT ( COUNT(s.CN) FOR s.U1 IN ([A], [B], [C], [D], [E], [F], [G]) ) AS p;

SELECT CONVERT(VARCHAR(5), SUM(CASE WHEN [On Target?] = 'Yes' THEN 1 ELSE 0 END)) + '/2'
FROM CTE_Results
[/code]
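For reference, T-SQL does not allow a WITH inside another query body, but a single WITH clause can declare several CTEs separated by commas, each able to reference the ones before it. A minimal sketch of the shape (unrelated to the tables above):

[code="sql"]
WITH cteSource AS
(
    SELECT CN, U1 FROM dbo.CRS
),
CTE_Results AS
(
    SELECT CN FROM cteSource WHERE U1 = 'A'
)
SELECT COUNT(*) FROM CTE_Results;
[/code]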

listing of months for a particular year

Posted: 26 Sep 2013 12:52 AM PDT

Dear All,

Hope you are doing fine. What I am trying to do is get a stock count for each item per month for a particular year, and do a report on that.

Table1 lists all Purchase Orders:

PoNum  Date
P001   2013-01-01
P002   2013-02-01
P003   2013-02-10
P004   2013-03-01

Table2 lists the items for each PoNum:

PoNum  ItemRef  Qty
P001   I0001    10
P001   I0002    5
P002   I0003    15
P003   I0003    20
P004   I0003    5

Is it possible to have something like this?

Year 2013

        Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
I0001   10   0    0    0    0    0    0    0    0    0    0    0
I0002   5    0    0    0    0    0    0    0    0    0    0    0
I0003   0    35   5    0    0    0    0    0    0    0    0    0

Basically, I need each item's qty per month for a year. I came up with something, but I only get the item's qty for the months the item appears in. How do I get 0 for the other months? I was reading about creating a CTE to list the months of the year, but I am stuck.

Please advise.

Thanks,
Ashley
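One possible shape for this (a hedged sketch; table and column names follow the post and are assumed to live in dbo): build the twelve months with a small recursive CTE, cross join to the item list, then left join the actual quantities so missing months come back as 0:

[code="sql"]
WITH Months(MonthNo) AS
(
    SELECT 1
    UNION ALL
    SELECT MonthNo + 1 FROM Months WHERE MonthNo < 12
)
SELECT i.ItemRef,
       m.MonthNo,
       COALESCE(SUM(t2.Qty), 0) AS Qty
FROM Months AS m
CROSS JOIN (SELECT DISTINCT ItemRef FROM dbo.Table2) AS i
LEFT JOIN dbo.Table1 AS t1
       ON MONTH(t1.Date) = m.MonthNo
      AND YEAR(t1.Date) = 2013
LEFT JOIN dbo.Table2 AS t2
       ON t2.PoNum = t1.PoNum
      AND t2.ItemRef = i.ItemRef
GROUP BY i.ItemRef, m.MonthNo;
[/code]

A PIVOT over MonthNo then turns the twelve rows per item into columns.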

Passing external sproc params to internal variables - Explanation?

Posted: 30 Sep 2013 02:11 AM PDT

Sometimes I find the reason behind a poorly performing stored procedure lies in passing the external parameters to internal variables and then using those internal variables instead of referencing the external parameters.

Can anyone explain why this happens, or point me to a Microsoft TechNet article on the subject? I need to explain this to the rest of my team, but I don't know exactly what's going on. I just know the problem and the solution. Hah!
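For reference, the pattern in question looks like the sketch below. The usual explanation is parameter sniffing: used directly, the parameter's runtime value is sniffed and the plan is compiled for it; copied into a local variable, the value is opaque at compile time, so the optimizer falls back to average-density estimates, which can be better or worse depending on data skew (procedure and table names here are hypothetical):

[code="sql"]
CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
BEGIN
    -- the copy hides the runtime value from the optimizer
    DECLARE @LocalCustomerId int = @CustomerId;

    SELECT *
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END;
[/code]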

NodeA crashed on an active/active cluster

Posted: 30 Sep 2013 02:56 AM PDT

Hi,

We have an active/active SQL 2008 R2 setup running on a Windows 2008 R2 cluster. Instance_A is running on Node1 and Instance_B is running on Node2. Due to an OS issue, NodeA crashed, and the Wintel team want to re-install the OS and cluster on NodeA only.

When we originally built the SQL servers we followed this approach:

- For Instance A, we ran the main setup on NodeA and then added a node on NodeB.
- For Instance B, we ran the main setup on NodeB and then added a node on NodeA.

Now NodeA is down and both SQL instances are running on NodeB. If NodeA is rebuilt, how do we add this node back to the SQL instances? From the SQL perspective, how should we do the installation, and what steps do we need to follow?

Please help me, it is very urgent. Your help is much appreciated. Thank you in advance.
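For what it's worth, once NodeA is rebuilt and joined back to the Windows failover cluster, each instance is normally rejoined by running SQL Server setup on NodeA in AddNode mode, once per instance (a sketch; instance name and service credentials are placeholders, and the full switch list depends on your configuration):

setup.exe /ACTION=AddNode /INSTANCENAME="Instance_A" /SQLSVCACCOUNT="DOMAIN\sqlsvc" /SQLSVCPASSWORD="..." /AGTSVCACCOUNT="DOMAIN\agtsvc" /AGTSVCPASSWORD="..."

It should not matter which node the original setup was run on; AddNode only cares that the instance's virtual server already exists in the cluster.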

SQL Server Database Backup Monitoring

Posted: 29 Sep 2013 11:53 PM PDT

Hi SQL Masters,

Greetings! Do you have a script that will determine if a backup failed/succeeded?

Sample output:

Database Name       Status     Log Date
---------------     ---------  --------------
AdventureWorks2008  FAILED     20130125
AdventureWorks2008  SUCCEEDED  20130126

Something like that... Any ideas, SQL Masters?

Thank you and best regards,
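A starting point, sketched under one caveat: msdb.dbo.backupset only records backups that completed, so failures have to be detected elsewhere (the SQL Server error log or Agent job history):

[code="sql"]
SELECT bs.database_name,
       'SUCCEEDED' AS status,
       CONVERT(char(8), bs.backup_finish_date, 112) AS log_date
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'   -- full database backups
ORDER BY bs.backup_finish_date DESC;
[/code]

A database that should have a nightly row but doesn't is the usual proxy for 'FAILED'.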

NUMA

Posted: 26 Jun 2012 09:46 PM PDT

In preparation for my MCITP exam tomorrow I have been reading through the books and just crossing the t's and dotting the i's. One thing which I just want to get right in my head is NUMA, as I had a feeling that I had the right answers and then changed them to wrong answers, so this is more of a confirmation than an actual question.

1. SQL ignores NUMA when hard-NUMA has <= 4 CPUs and at least 1 node has only 1 CPU. So:
   - 1 physical proc with 4 cores: SQL will ignore NUMA
   - 2 physical procs with 2 cores each: ignore
   - 4 physical procs with 1 core each: ignore
   Anything other than the above and SQL will use hard-NUMA, as long as it is not configured with interleaved memory. [i]This is the one mainly confusing me.[/i]
2. The typical use for soft-NUMA is when there is no hard-NUMA, but it can also be used to split hard-NUMA into more NUMA nodes.
3. Soft-NUMA doesn't provide memory-to-CPU affinity.
4. Soft-NUMA can increase performance in relation to I/O, as each NUMA node creates a new I/O path and a new lazy writer thread.
5. Instead of doing point 4, you could use CPU affinity instead to spread the workload across multiple CPUs.
6. Use SSCM to configure port-to-NUMA affinity.

And now to cross my fingers and hope tomorrow is a good day.

Using CASE Condition for Prefixing a field from a table

Posted: 30 Sep 2013 12:38 AM PDT

Hi All,

I want to retain the same Employee Code when an employee is rehired, and prefix it with A, B, C, ... based on the number of times the employee has been rehired, with a CASE statement:

[code="sql"]
CASE WHEN LEFT(EMPLOYEECODE, 1) = 'Z'
     THEN 'A' + EMPLOYEECODE
     ELSE CASE WHEN ISNUMERIC(LEFT(EMPLOYEECODE, 1)) = 1
               THEN 'A' + EMPLOYEECODE
               ELSE CHAR(ASCII(LEFT(EMPLOYEECODE, 1)) + 1) + SUBSTRING(EMPLOYEECODE, 2, 99)
          END
END
[/code]

It is working fine with these parameters: when the Employee Code is 'A10010' it returns B10010, and when it is 10010 it returns A10010, which is correct. But the challenge comes when the employee code is Z10010: it should return AA10010, not AZ10010. How can I do that? Help!
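One possible tweak (a sketch; it assumes only a single wrap from Z is ever needed): test the numeric case first, handle Z explicitly, and only then do the generic increment:

[code="sql"]
CASE WHEN ISNUMERIC(LEFT(EMPLOYEECODE, 1)) = 1
     THEN 'A' + EMPLOYEECODE                                   -- 10010  -> A10010
     WHEN LEFT(EMPLOYEECODE, 1) = 'Z'
     THEN 'AA' + SUBSTRING(EMPLOYEECODE, 2, 99)                -- Z10010 -> AA10010
     ELSE CHAR(ASCII(LEFT(EMPLOYEECODE, 1)) + 1)
          + SUBSTRING(EMPLOYEECODE, 2, 99)                     -- A10010 -> B10010
END
[/code]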

TSQL Recompile Problems when using subqueries

Posted: 29 Sep 2013 10:02 PM PDT

Good day all,

I have found a problem with a simple T-SQL script that returns incorrect results because a schema change didn't cause the old plan to recompile. I will just post how to duplicate the problem, and hopefully people can tell me if it is intended to function this way.

You have an instance of SQL, and on that instance you have two databases, TestDB1 and TestDB2. You have a table in each of these databases, and the tables are exactly the same (at the start). You have a query to select an ID from TestDB1 where the values exist in TestDB2, e.g.:

[code="sql"]
Select AccountID from TestDB1.dbo.Table1
where AccountID in (Select AccountID from TestDB2.dbo.Table2)
[/code]

The above query runs and the expected results are returned. All is well. Then someone renames the TestDB2.dbo.Table2 column from AccountID to AccountIDArchive (no idea why it changed, but it does).

At this point I would assume the schema change should invalidate the plan, so that if you run the original query, with AccountID still specified in the subquery, it would result in an error: column cannot be found. If you run the subquery on its own, it does return an error; but if you run the original query, it does not give you an error and runs successfully.

Obviously the original query's subquery had to be changed to AccountIDArchive to accommodate the change, but I was surprised to see it still using the old plan, and that invalid T-SQL could execute "successfully", or generate a working plan. Using OPTION (RECOMPILE) with the original query does not cause the query to error out, but if you change the subquery to a new invalid column name, it does error out.

Any opinions on how to prevent this, or an explanation as to why this is happening, would help. :-) Regards
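A likely explanation (hedged, since the thread doesn't confirm it): this probably isn't a stale plan at all. In T-SQL, an unqualified column name in a subquery that doesn't exist in the subquery's own tables binds to the outer query's table, so after the rename, AccountID resolves to TestDB1.dbo.Table1.AccountID and the IN becomes a correlated predicate that is true for every non-NULL row. Qualifying the columns surfaces the error immediately:

[code="sql"]
-- fails as soon as Table2's column is renamed, instead of silently matching everything
Select t1.AccountID
from TestDB1.dbo.Table1 as t1
where t1.AccountID in (Select t2.AccountID from TestDB2.dbo.Table2 as t2)
[/code]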

Monday, September 30, 2013

[how to] Monitor Usage on a SSAS 2012 Tabular Model

Monitor Usage on a SSAS 2012 Tabular Model

Posted: 30 Sep 2013 07:15 PM PDT

I have a SSAS 2012 tabular model and I am trying to figure out the most efficient way to track which users are accessing my model. My tabular model is on a server that is used by other databases, so system wide tracing and logging are not options that I'd like to implement as I don't want to affect the performance on the server for those other databases. I haven't been able to find out much while searching online and am still pretty new to tabular models.

Has anyone implemented something similar to what I am looking for or has any ideas?
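One lightweight option to sketch (these are standard SSAS dynamic management views, queried from an MDX window; they only show current activity, so you would poll and log them rather than get history for free):

SELECT SESSION_USER_NAME, SESSION_LAST_COMMAND, SESSION_START_TIME
FROM $SYSTEM.DISCOVER_SESSIONS

Because the DMVs are scoped to the Analysis Services instance, polling them doesn't touch the relational databases sharing the server.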

Unattended SQL install on a remote machine

Posted: 30 Sep 2013 06:54 PM PDT

My product installation does an unattended install of SQL (SQLExpress by default). I use command line arguments rather than a configuration file, so I'm quite familiar with this process.

This unattended install occurs on the same machine that my install media is running on. However, I need to look at installing the SQL instance on a machine separate from the one the install media is running on. The target machine will be on the same network.

Is it possible to do this unattended? If so, is it simply extra command line arguments or is it a different process (i.e. a PowerShell script)?

(Note: can someone with sufficient rep please add the tag for me, as I cannot yet?)
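For what it's worth, a sketch of the usual approach (setup.exe has no switch for targeting another machine): make the media visible to the target and launch the same unattended command there, e.g. via PowerShell remoting or psexec. The command line itself is unchanged from a local install:

setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=SQLEXPRESS /IACCEPTSQLSERVERLICENSETERMS /SQLSYSADMINACCOUNTS="DOMAIN\admin"

The wrinkle is credentials: the remote session needs admin rights on the target, and installing from a UNC path under remoting may require CredSSP or copying the media locally first to avoid double-hop issues.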

SSDT/SSIS 2012 - How to automatically load some projects?

Posted: 30 Sep 2013 03:37 PM PDT

I did not use the default projects folder in SSDT, i.e. SSIS 2012; I used my own instead. Every time I open SSIS, I have to load the projects manually. Is there any way I can load them automatically?

Audit table insert restriction

Posted: 30 Sep 2013 02:36 PM PDT

So I'm trying to create my own audit table for some user management analysis and stumbled across a little problem. I use this trigger

CREATE OR REPLACE TRIGGER usr_t_audit_user_alnd
AFTER LOGON ON DATABASE
BEGIN
    INSERT INTO usr_t_audit_user (username, session_id, logon_day, logon_time)
    VALUES (UPPER(USER), SYS_CONTEXT('USERENV','SID'), SYSDATE, TO_CHAR(SYSDATE, 'HH24:MI:SS'));
END;

to insert some standard information into my table. It works fine, except the logoff time inside my other trigger isn't always correct. But my actual problem is that I don't want to record all user logons; for example, I only want to audit some chosen users. How can I achieve such a restriction? I can't put a WHERE clause in my trigger. Anybody got some ideas?
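One possibility, as a sketch (the user list is a placeholder): a trigger can't take a WHERE clause, but its body can test the user before inserting:

CREATE OR REPLACE TRIGGER usr_t_audit_user_alnd
AFTER LOGON ON DATABASE
BEGIN
    IF USER IN ('SCOTT', 'HR') THEN  -- hypothetical list of audited users
        INSERT INTO usr_t_audit_user (username, session_id, logon_day, logon_time)
        VALUES (UPPER(USER), SYS_CONTEXT('USERENV','SID'), SYSDATE, TO_CHAR(SYSDATE, 'HH24:MI:SS'));
    END IF;
END;

Keeping the audited names in a small table instead of a literal list avoids recompiling the trigger whenever the list changes.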

partitioning by datetime vs date vs int performance

Posted: 30 Sep 2013 04:06 PM PDT

I am trying to determine which partitioning strategy will be most efficient for my queries on SQL Server.

We know we want daily partitions. The data has a natural DATETIME field in it, so my gut instinct is to partition on that value. However, we have discussed the possibility of adding a DATE column (with just the time information stripped out), or possibly coding the date as an integer (YYYYMMDD, e.g., 20130930) and using that as the partitioning column.

All queries on this data will be on a specific date (WHERE ItemTime = '2013-09-30') or range of dates (WHERE ItemTime BETWEEN '2013-09-15' AND '2013-09-30'). For now, we always query on a date, but future requirements may include time details (WHERE ItemTime BETWEEN '2013-09-29 20:30:00' AND '2013-09-30 10:15:45').

I have tested performance of each strategy on a few hundred thousand rows of data, and seen no real difference. The production deployment, however, will be in the hundreds of millions up to maybe a couple billion rows.

Is one of these strategies going to lead to more efficient queries than the others? Why or why not?

Thanks for your help.

[EDIT] The queries will be formatted by application code, so I'm not concerned about how to translate between DATETIME, DATE, and INT. I can assume the query will be properly formatted based on whichever partitioning scheme is chosen. I'm just trying to find out if partition elimination will be faster using one of these data types.
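For concreteness, a daily RANGE scheme on a DATE-typed column would start from something like this minimal sketch (boundary values abbreviated, everything mapped to one filegroup):

CREATE PARTITION FUNCTION pf_daily (date)
AS RANGE RIGHT FOR VALUES ('2013-09-28', '2013-09-29', '2013-09-30');

CREATE PARTITION SCHEME ps_daily
AS PARTITION pf_daily ALL TO ([PRIMARY]);

The DATETIME and INT variants differ only in the declared type and boundary literals; in all three cases, partition elimination hinges on the query predicate being expressed against the partitioning column in its own type.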

How to concatenate multi row sql output

Posted: 30 Sep 2013 05:39 PM PDT

Version Information:

Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
HP Server Automation v9.14

I am creating a report of all my HP-SA managed servers. The report includes data from multiple tables. With the exception of Networking information everything is included on one row. I'm trying to come up with a method to concatenate the NIC NAME, MAC, IP, DNS so that I don't get multiple rows for each active NIC.

I ran into roadblocks with sub select queries not knowing how to pass the D.DVC_ID from the outer query into the sub query. I briefly tried using WM_CONCAT and LISTAGG without success. Any help would be greatly appreciated

Posting the full SQL in case any other HPSA users out there could benefit from knowing where in the database to look for some basic hardware information:

SELECT D.SYSTEM_NAME AS "HOSTNAME",
       D.DVC_DESC AS "OPSWARE NAME",
       I.HW_ADDR AS "MAC ADDRESS",
       I.SLOT AS "INTERFACE",
       I.IP_ADDRESS AS "IP ADDRESS",
       I.NETMASK AS "SUBNET",
       D.DEFAULT_GW AS "GATEWAY",
       D.DVC_MFG AS "SERVER VENDOR",
       D.DVC_MODEL AS "SERVER MODEL",
       CONCAT(ROUND(MEM.QUANTITY / 1024 / 1024, 1), ' GB') AS "RAM",
       CPU.CPU_MODEL AS "CPU TYPE",
       COUNT(CPU.CPU_SLOT) AS "CPU QUANTITY"
FROM TRUTH.DEVICES D
INNER JOIN TRUTH.MEMORY_COMPONENTS MEM ON (D.DVC_ID = MEM.DVC_ID)
INNER JOIN TRUTH.CPU_COMPONENTS CPU ON (D.DVC_ID = CPU.DVC_ID)
INNER JOIN TRUTH.DEVICE_ROLES DR ON (D.DVC_ID = DR.DVC_ID)
INNER JOIN TRUTH.INTERFACES I ON (D.DVC_ID = I.DVC_ID)
INNER JOIN TRUTH.CUSTOMER_CLOUDS CC ON (DR.CUST_CLD_ID = CC.CUST_CLD_ID)
INNER JOIN TRUTH.DATA_CENTERS DC ON (DC.DATA_CENTER_ID = CC.DATA_CENTER_ID)
WHERE MEM.MEMORY_TYPE != 'SWAP'
  AND I.ENABLED_FLG = 'Y'
GROUP BY D.SYSTEM_NAME, D.DVC_DESC, I.HW_ADDR, I.SLOT, I.IP_ADDRESS,
         D.DVC_MFG, D.DVC_MODEL, MEM.QUANTITY, CPU.CPU_MODEL, I.NETMASK, D.DEFAULT_GW
ORDER BY HOSTNAME ASC, INTERFACE ASC

With a sample result of:

HOSTNAME         OPSWARE NAME  MAC ADDRESS        INTERFACE                IP ADDRESS      SUBNET         GATEWAY      SERVER VENDOR  SERVER MODEL             RAM   CPU TYPE                                CPU QUANTITY
---------------  ------------  -----------------  -----------------------  --------------  -------------  -----------  -------------  -----------------------  ----  --------------------------------------  ------------
WIN-6V18DDNQRL4  Server1       00:50:56:BB:66:A2  Local Area Connection    172.31.0.149    255.255.255.0  172.31.0.10  VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2
WIN-6V18DDNQRL4  Server1       00:50:56:BB:03:D3  Local Area Connection 2  169.254.246.97  255.255.0.0    172.31.0.10  VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2

Desired Output would have everything to do with the server on one row similar to the following:

HOSTNAME         OPSWARE NAME  (MAC ADDRESS/INTERFACE/IP ADDRESS/SUBNET/GATEWAY)                                                                                                                SERVER VENDOR  SERVER MODEL             RAM   CPU TYPE                                CPU QUANTITY
---------------  ------------  ----------------------------------------------------------------------------------------------------------------------------------------------------------------  -------------  -----------------------  ----  --------------------------------------  ------------
WIN-6V18DDNQRL4  Server1       (00:50:56:BB:66:A2/Local Area Connection/172.31.0.149/255.255.255.0/172.31.0.10,00:50:56:BB:03:D3/Local Area Connection 2/169.254.246.97/255.255.0.0/172.31.0.10)  VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2

Thank you, Lars
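A hedged aside that may explain the LISTAGG failure: LISTAGG was only introduced in Oracle 11.2, so it will not parse on 11.1.0.7. On 11.1, one documented alternative is XMLAGG; a minimal sketch against the interfaces table alone (column names as in the query above):

SELECT i.DVC_ID,
       RTRIM(XMLAGG(XMLELEMENT(e, i.HW_ADDR || '/' || i.SLOT || '/' || i.IP_ADDRESS || ','))
             .EXTRACT('//text()').GETSTRINGVAL(), ',') AS nic_list
FROM TRUTH.INTERFACES i
WHERE i.ENABLED_FLG = 'Y'
GROUP BY i.DVC_ID

Joining that grouped inline view back to TRUTH.DEVICES on DVC_ID (instead of joining INTERFACES directly) removes the one-row-per-NIC fan-out.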

Query to compare two subsets of data from the same table?

Posted: 30 Sep 2013 02:01 PM PDT

My problem:

I'm trying to compare two subsets of data within one table. I've got two methods that partially work, and a certainty that there's got to be a more correct way to do it.

The idea here is a table which contains datasets about the same systems over time, and I would like to compare them and especially to see when there are introductions or absences. Allow me to demonstrate with a simple test table:

mysql> select * from gocore;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   1 | enterprise | old  | apache   | 2.2.25   |
|   2 | enterprise | new  | apache   | 2.4.6    |
|   3 | enterprise | new  | tomcat   | 7.0.42   |
|   4 | enterprise | old  | geronimo | 2.1.7    |
+-----+------------+------+----------+----------+

In this example there are two datasets - the tag "old" dataset, and the tag "new" dataset. Each reflects a data sample taken at some point in time. For server "enterprise", we have one software package that changed over time (apache), one software package that was introduced (tomcat), and one software package that became absent (geronimo).

My goal: A query that will allow me to summarize the state between "old" and "new":

+------------+----------+----------+----------+
| server     | software | revision | revision |
+------------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | 2.4.6    |
| enterprise | geronimo | 2.1.7    | NULL     |
| enterprise | tomcat   | NULL     | 7.0.42   |
+------------+----------+----------+----------+

It is important for my purposes to be able to see the 'NULL' cells above - I need to know when software has been added or removed from the system. TO BE CLEAR, the table above is not the result of a query - it's me using a text editor to fix up what I was getting to describe what I'm looking for. I need your help to figure out the query that would make that table :)

My Kludges:

If I perform a LEFT JOIN and use the WHERE clause to discriminate between "old" and "new" tags, I get results for only those entries that exist under both tags:

mysql> select old.server, old.software, old.revision, new.software, new.revision
    -> from gocore as old left join gocore as new on old.software = new.software
    -> where old.tag = 'old' and new.tag = 'new';
+------------+----------+----------+----------+----------+
| server     | software | revision | software | revision |
+------------+----------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | apache   | 2.4.6    |
+------------+----------+----------+----------+----------+

My next try was to create two views so that I could perform the JOIN without throwing the tag filter into the mix:

mysql> create view gc_old as select uid,server,tag,software,revision
    -> from gocore where tag = 'old';
Query OK, 0 rows affected (0.00 sec)

mysql> create view gc_new as select uid,server,tag,software,revision
    -> from gocore where tag = 'new';
Query OK, 0 rows affected (0.00 sec)

mysql> select * from gc_old;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   1 | enterprise | old  | apache   | 2.2.25   |
|   4 | enterprise | old  | geronimo | 2.1.7    |
+-----+------------+------+----------+----------+
2 rows in set (0.00 sec)

mysql> select * from gc_new;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   2 | enterprise | new  | apache   | 2.4.6    |
|   3 | enterprise | new  | tomcat   | 7.0.42   |
+-----+------------+------+----------+----------+
2 rows in set (0.00 sec)

mysql> select old.server, old.software, old.revision, new.revision
    -> from gc_old as old left join gc_new as new
    -> on old.software = new.software;
+------------+----------+----------+----------+
| server     | software | revision | revision |
+------------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | 2.4.6    |
| enterprise | geronimo | 2.1.7    | NULL     |
+------------+----------+----------+----------+
2 rows in set (0.00 sec)

That works better - I now see absence, not just change, but I still don't see introduction. And I had to go create views for this, which strikes me as inflexible as the dataset is added to over time.

My questions:

  1. How can I make sure that all the holes show up, as represented by the NULL cells in the "My goal" section above?
  2. Can I do it within a single query, avoiding the creation of views as a crutch?

Any help you can provide is greatly appreciated. Thanks!
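A hedged sketch of one way to get both absences and introductions in a single query: MySQL has no FULL OUTER JOIN, but a UNION of two LEFT JOINs over inline subqueries emulates one, with no views needed:

select o.server, o.software, o.revision as old_revision, n.revision as new_revision
from (select * from gocore where tag = 'old') as o
left join (select * from gocore where tag = 'new') as n
       on n.server = o.server and n.software = o.software
union
select n.server, n.software, o.revision, n.revision
from (select * from gocore where tag = 'new') as n
left join (select * from gocore where tag = 'old') as o
       on o.server = n.server and o.software = n.software;

The first half contributes the rows with old values (geronimo/NULL), the second half the rows with new values (NULL/tomcat), and UNION's duplicate removal collapses the rows both halves share (apache).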

MySQL create all rows for (over) normalized schema at start or as data is available

Posted: 30 Sep 2013 01:16 PM PDT

I am developing an app using MySQL, and the db schema is provided. To me the db looks (over)normalized: data for a single user will be spread over a dozen tables, around 150 rows in total. Some data will be available on user signup (around 40 rows); the rest will be added over time, as all the data is needed to properly use the app.

I am creating a stored procedure for creating new users. So I was thinking: when creating a user, should I create ALL the rows, with either the available data or NULL values, or just create the rows whose data is available? Will there be any performance bonus when all the rows for a user are together? For example, one table will have exactly 11 rows for each user, but when a user joins, exactly 8 rows will be populated. The other 3 rows will be added after a while. So should I create those 3 rows with NULLs? In the app we will be displaying data for all the available rows together. So if all the rows a query has to return are together, will it provide noticeable performance?

EDIT Its basically a match-making app. Tables are stripped down to give you overview.
User Table

CREATE TABLE IF NOT EXISTS `tbl_user` (
  `id_user` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `email` char(60) NOT NULL,
  ....
  PRIMARY KEY (`id_user`),
  UNIQUE KEY `email` (`email`)
);

Then there is a basic info table which has a dozen questions:

CREATE TABLE IF NOT EXISTS `tbl_criteria` (
  `id_criteria` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `criteria_text` char(200) NOT NULL,
  ....
  PRIMARY KEY (`id_criteria`)
);

and then we have an intersect table:

CREATE TABLE IF NOT EXISTS `tbl_user_criteria` (
  `id_user` int(10) unsigned NOT NULL,
  `id_criteria` int(10) unsigned NOT NULL,
  `id_answer` tinyint(3) unsigned NOT NULL,
  PRIMARY KEY (`id_user`, `id_criteria`)
);

and there are like dozen more intersection tables.

MS Sql Server Management Studio gives error on start

Posted: 30 Sep 2013 01:41 PM PDT

When I try to connect with my sa account in SQL Server Management Studio, it gives me this error:

*ps: the error message is turkish but you will see english version in 3rd image

[screenshot 1: connection error dialog, in Turkish]

Then i clicked on "Get Help For" thing as its shown above. It showed me this:

[screenshot 2: "Get Help for" prompt]

So I clicked the "Yes" button. It then opened the web page below:

*ps: error message is the same in first image but its english.

[screenshot 3: the same error message, in English]

After that, as the 3rd image suggests, I tried to start the SQL Server process in the Services window. But it gave me this error message:

It means "Windows couldn't start SQL SERVER service on local computer. For more information see event logs. If this is not a Microsoft service, call provider and give this special error code: 3417"

I wanted to look at the system event logs in the C:\Windows\System32\winevt\Logs directory, but there were 84 different .evtx files and I didn't know what to do. How can I fix this?

*ps: I'm also a little bit confused about this problem's scope. I thought about posting this on Super User and/or Stack Overflow, but this place looked more convenient to me. So, if I'm mistaken, please warn me.

[screenshot 4: service start failure, error code 3417]

edit:

I opened the Event Log application in Control Panel\Administrative Tools according to Jon Siegel's suggestions and filtered all event logs from the SQL Server Reporting Services (MSSQLSERVER) source. There are 82 events from "SQL Server Reporting Services (MSSQLSERVER)", but they're all informational; their event id is 0. When I right-clicked on them and chose Event Properties, here is the details tab:

*ps: After EventData, It says "Service started successfully"

[screenshot 5: Event Properties details tab]

SQL Server full text indexing disabled

Posted: 30 Sep 2013 12:26 PM PDT

I have a SQL Server 2008 R2 server instance where I'm unable to use full text catalogs. It was upgraded from SQL Server 2008. When I go to the database properties of an existing database or try to create a new database the "Use full-text indexing" checkbox is disabled.

Some things that I have checked are:

  • Full text search service is installed and started
  • Named pipes protocol is enabled
  • Full text service is running under local system account
  • EXEC sp_fulltext_database 'enable' does run successfully

What else could go wrong, because I'm stuck at it?
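Two quick metadata checks that can help narrow this down (a sketch; both functions are standard, the database name is a placeholder):

SELECT FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') AS instance_fulltext_installed,
       DATABASEPROPERTYEX('YourDatabase', 'IsFullTextEnabled') AS db_fulltext_enabled;

If the first returns 0, the engine itself doesn't see the full-text components, which would explain the greyed-out checkbox regardless of the service state.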

How do I access the build configuration in SSDT, ie SSIS 2012? [on hold]

Posted: 30 Sep 2013 12:21 PM PDT

Oracle database on different disk

Posted: 30 Sep 2013 11:47 AM PDT

I have a VM with Oracle Server installed, now Oracle home is defined locally. The VM's disk is no longer expandable because of snapshots (VMWare Workstation claims expansions and snapshots don't play nice together). I had to create a new virtual disk to be able to host a dmp file given to me which is just over 70GB.

With that said, how can I import this dmp onto the new disk?

The current Oracle is C:\Oracle and the new drive is d:

I have minimal experience with Oracle so I would appreciate a detailed answer.

Thanks
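A rough sketch of the usual shape of this, assuming the dump was made with Data Pump (expdp); names and paths are placeholders. Create a directory object pointing at the new disk, then run impdp against it:

SQL> CREATE DIRECTORY dmp_dir AS 'D:\dumps';

C:\> impdp system/password DIRECTORY=dmp_dir DUMPFILE=export.dmp FULL=Y

If the dump came from the old exp utility instead, the equivalent is imp with FILE=; and if the imported data files themselves should live on D:, the target tablespaces need to be created with datafiles on D: before the import.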

Multiple versions of SQL Server on the same development machine

Posted: 30 Sep 2013 10:38 AM PDT

I am consolidating some SQL Servers to simplify administration. I am thinking about a new VM with SQL Server 2005/2008R2/2012 each as a separate named instance on the same machine. This machine will only be used in a 'sandbox' environment primarily to test restores of our backups. No applications will be hitting this.

Am I likely to run into any issues with multiple versions? I remember running into issues years ago with SQL 2000/2005 on the same box. Do you folks use separate machines for restore testing?

How to partition a table by timestamp, where data from every i'th day goes to partition i?

Posted: 30 Sep 2013 12:14 PM PDT

I'm looking into partitioning a table in my InnoDB database. I have a column corresponding to a UTC timestamp, and I want to partition around that.

The basic idea that I want to capture is a partitioning scheme where, with a fixed number of partitions, we cycle through them as follows:

Given 3 partitions (3 for simplicity)

  • Data with timestamp column from day 1 goes into partition 1
  • Data with timestamp column from day 2 goes into partition 2
  • Data with timestamp column from day 3 goes into partition 3
  • Data with timestamp column from day 4 goes into partition 1
  • ..... day 5 .... partition 2
  • ... rinse and repeat

Basically, extract the day out of the timestamp and put the row into partition DAY MOD N where DAY is the day that the timestamp corresponds to (filtering out hours/minutes/seconds) and N is the number of partitions.

This is probably easily done, but I can't find a similar example to emulate this with. What would the ALTER TABLE query be to partition in this fashion?

Update

By atxdba's suggestion, I tried partitioning by hash. I tried the following statement:

ALTER TABLE table_to_partition PARTITION BY HASH(DAYOFMONTH(FROM_UNIXTIME(my_timestamp))) partitions 8;  

This results in error code 1564: This partition function is not allowed. Looking at this list of limitations for partitioning, it doesn't appear that FROM_UNIXTIME is supported for partitioning a table, so a different transformation from timestamp to date is required.
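A sketch that stays within those limitations, assuming my_timestamp is an integer UNIX timestamp: integer arithmetic with DIV is allowed in partitioning expressions, and 86400 is one day of seconds, so

ALTER TABLE table_to_partition
PARTITION BY HASH(my_timestamp DIV 86400)
PARTITIONS 8;

HASH already applies the modulo by the partition count, which is exactly the DAY MOD N scheme described above.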

I have to create INDEX but which? [on hold]

Posted: 30 Sep 2013 11:41 AM PDT

I have to create an index, but which one for this query?

SELECT *
FROM TABLE
WHERE A = 1
  AND B = '5427dbfac0a8280d74bc672c72636a4f'
  AND C = 'BIL'
  AND D = 'B'
  AND E = '01'
GROUP BY F, G, E, H, I
ORDER BY B, C, E, G, H, I;
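A hedged starting point, keeping the poster's placeholder table name: all five predicates are equality tests, so a composite index covering them lets the filter be resolved by one seek:

CREATE INDEX ix_abcde ON TABLE (A, B, C, D, E);

Among equality columns the order mostly matters for reuse by other queries; the GROUP BY/ORDER BY columns could be appended if the sort shows up as a significant cost in the plan.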

How can I synchronize data between two RDS MySQL databases

Posted: 30 Sep 2013 05:48 PM PDT

I have 2 Amazon RDS MySQL instances, one is MySql v5.5 and the new one is 5.6. I want to upgrade the 5.5 to the 5.6 but you cannot do upgrades at this time, so we are trying to migrate the data. I did a dump of the 5.5 and have imported the dump into the 5.6 but since I need to keep our down time to a minimum, I decided to try to do a data synchronization between the two.

What are my options for this?

  • I have a few tables that are huge! (1-8gb)
  • Tables are all innodb
  • Dump download took 3 hours
  • Dump Upload took 4+ hours

I need to try to stay under 5 hours total!

Wrong error in SSIS - [Execute SQL Task] Error: There is an invalid number of result bindings returned for the ResultSetType: “ResultSetType_Rowset”

Posted: 30 Sep 2013 02:50 PM PDT

I have an Execute SQL task which returns only one row with one column - a number. I set my result set to single row. Despite that, the task fails. Why? How do I fix it?

[Execute SQL Task] Error: There is an invalid number of result bindings returned for the   ResultSetType: "ResultSetType_Rowset".  

storing arrays or images in a database table

Posted: 30 Sep 2013 06:06 PM PDT

I'm a beginner with both database design and use of forums such as this one so forgive me if I forget to provide some relevant information below.

Basically I'm trying to store a set of arrays, in this case raster images, of 2500*2500 in a database in order to extract timeseries for individual pixels. I currently have roughly 1500 images and every day an image is added.

The original raw files of bytetype are only small, but my database table grows very quickly in the following form:

pixelnr, date, rastervalue
PK clustered (pixelnr, date)

I understand that this is because I need a pixelnr and date for every pixelvalue. Unfortunately I don't know how else to store the data in a database.

I've tried to optimize the table for select statements of timeseries per pixel with a clustered PK on pixelnr,date. The drawback is that inserts of new images take an increasingly long time.

There are only two things I do with this table:

  • bulk insert from a csv file. I convert a new image to a CSV file with the same form as the table. It will have all pixels, but only for a single date.

  • run simple select queries to derive time series for a number of pixels, usually up to 25000 pixels:

SELECT Pix, Date, Val
FROM FullTable
INNER JOIN PixelSubset ....

I now have two questions:

1) Is there a more efficient way to store this kind of data? I've not found much useful on storing arrays in an RDBMS.

2) How to improve the performance of bulk inserts (all pixels for a single date) into the table while keeping the clustered index optimised as described above.

I use SQL server 2012 developer edition on a PC with 8GB RAM and a RAID system. There's not much room for hardware changes....

Any pointers, particularly to more efficient storage of such data in an RDBMS, would be greatly appreciated!
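On question 2, one knob worth trying as a sketch (file and table names are assumptions): if the CSV is written sorted by the clustered key, telling BULK INSERT so lets the engine skip its internal sort:

BULK INSERT dbo.FullTable
FROM 'D:\incoming\image_20130930.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n',
      ORDER (pixelnr ASC, date ASC), TABLOCK);

The underlying pain, though, is that a daily file touches every pixelnr, so with pixelnr leading the clustered key each load inserts into every region of the index; clustering on (date, pixelnr) instead would make loads append-only, at the cost of making per-pixel time series reads wider.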

How can I integrate a new system to a design scheme I previously created?

Posted: 30 Sep 2013 02:13 PM PDT

This is a follow-up to a question I wrote a while ago, for reference, you can look it up here.

I've developed a curriculum-based database schema that looks like this (thanks, Joel Brown): [diagram: the grade-based curriculum schema]

Now that a new educational system has been introduced (while the old one is still not "deprecated"), I have been looking for a way to integrate both curriculum systems without creating a whole new system; as the two systems are fundamentally different, this has been quite a challenge for me.

Here's a picture describing the new system . . .

[diagram: the new course-based system]

(English level 1, 2, and 3 are classified as courses)

See, the new system removes the whole grades system. A jr. high school student can enroll in the exact same course(English level 3, for example) as a sr. high school student.

Is there a way to make those two systems work with my current design scheme? Or should I create a whole new scheme for the new system?

Targeting MSX with job on Multi-Server Administration

Posted: 30 Sep 2013 02:24 PM PDT

I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:

Server 'XXXXX' is an MSX. Cannot enlist one MSX into another MSX. (Microsoft SQL Server, Error: 14299)

I would think that running these same jobs on the master would not require managing a local job as well as the multi-server jobs.

Any help is appreciated.

FileWatcher Troubleshooting Options

Posted: 30 Sep 2013 11:24 AM PDT

I have set up a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working. I am trying to figure out what I can check to troubleshoot why this isn't working. After scaling back the code to a bare minimum, here are some things I have already done.

  • Verify that files are being created in the folder monitored - They are.
  • Verify that the procedure can be run by itself - It can.
  • Verify that the File Watcher shows up in DBA_Scheduler_File_Watchers and is enabled. - It does and is.
  • Verify that DBA_Scheduler_Job_Run_Details shows a successful execution - It does NOT show any entries for this job.
  • Check the alert log - Nothing interesting.
  • Check for other trace files - I am getting trace files matching *_j000_*.trc, but they just have this: FILE_TRANSFER error is: with nothing after it.

Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:

--Create Credential.
BEGIN
    sys.dbms_scheduler.create_credential(
        username => 'oracle',
        password => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        database_role => NULL,
        windows_domain => NULL,
        comments => NULL,
        credential_name => 'TestCredential'
    );
END;
/

CREATE TABLE FilesFound(FoundDate Date);

CREATE OR REPLACE PROCEDURE TestProcedure(iFileInfo In sys.scheduler_filewatcher_result) AS
BEGIN
   INSERT INTO FilesFound VALUES (sysdate);
   COMMIT;
END;
/

BEGIN
   DBMS_SCHEDULER.create_program(
      program_name => 'TestProgram',
      program_type => 'stored_procedure',
      program_action => 'TestProcedure',
      number_of_arguments => 1,
      enabled => False);
   DBMS_SCHEDULER.define_metadata_argument(
      program_name => 'TestProgram',
      metadata_attribute => 'event_message',
      argument_position => 1);
END;
/

BEGIN
   dbms_scheduler.create_file_watcher(
      file_watcher_name => 'TestFileWatcher',
      directory_path => '/u01/test',
      file_name => '*.*',
      credential_name => 'TestCredential',
      destination => NULL,
      enabled => False);
END;
/

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TestJob',
      program_name => 'TestProgram',
      event_condition => NULL,
      queue_spec => 'TestFileWatcher',
      auto_drop => False,
      enabled => False);

   --Enable Everything.
   dbms_scheduler.enable('TestProgram, TestFileWatcher, TestJob');
end;
/

--Set a one minute check interval.
BEGIN
  DBMS_SCHEDULER.set_attribute(
    'file_watcher_schedule',
    'repeat_interval',
    'freq=minutely; interval=1');
END;
/

--Create a file.
DECLARE
  vFile utl_file.file_type;
BEGIN
  EXECUTE IMMEDIATE 'create or replace directory TESTDIRECTORY as ''/u01/test''';
  vFile := utl_file.fopen('TESTDIRECTORY', 'TestFileWatcher.txt', 'w', NULL);
  utl_file.put_line(vFile, 'File has arrived '||SYSTIMESTAMP, TRUE);
  utl_file.fclose(vFile);
END;
/

--Delay to give time for file to appear.
BEGIN
   DBMS_LOCK.SLEEP(120);
END;
/

--Check the table.
BEGIN
   FOR vx IN (select count(*) ct from sys.FilesFound) LOOP
      If (vx.ct = 0) Then
         DBMS_Output.Put_Line('Failed - No File Watcher Activity.');
      Else
         DBMS_Output.Put_Line('Success - File Watcher Activity Found.');
      End If;
   END Loop;
END;
/

--Cleanup.
EXECUTE dbms_scheduler.drop_job(job_name => 'TestJob');
EXECUTE dbms_scheduler.drop_program(program_name => 'TestProgram');
EXECUTE dbms_scheduler.drop_file_watcher(file_watcher_name => 'TestFileWatcher');
EXECUTE DBMS_SCHEDULER.drop_credential(credential_name => 'TestCredential');
drop table FilesFound;
drop procedure TestProcedure;
drop directory TestDirectory;

What if I want to keep using MyISAM

Posted: 30 Sep 2013 10:31 AM PDT

In my current use of MyISAM the databases are extremely small and kept for a few weeks, after which time they are deleted and completely rebuilt with a clean slate. The script uses MyISAM tables, and other than lack of support in the latest MySQL versions, I have never had a problem with the way they work. Changing to InnoDB is something that would be time consuming and provide zero benefit. However, I am concerned that eventually there may come a time when either the older MySQL version is not available or some other process forces the move.

My question is: does anyone know of a specific reason to change from MyISAM to InnoDB just to keep up with MySQL?

If I can keep using what works for my own specific use, I see no reason to worry about testing and possible errors when there are more pressing issues that do need attention and have problems now. MyISAM does not have a problem at this time.

restrict user host settings to socket connection only

Posted: 30 Sep 2013 01:24 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out skip-networking can be used to turn off TCP/IP connectivity altogether. But can it be done on a user account basis?
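One avenue worth sketching (an assumption to verify for your version: the auth_socket plugin ships with MySQL 5.5+ but is not loaded by default, and MariaDB's equivalent is unix_socket):

INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';
CREATE USER 'root'@'localhost' IDENTIFIED WITH auth_socket;

A user identified this way can only authenticate over the Unix socket, because the plugin matches the connecting OS user name against the MySQL user name, which effectively rules out TCP/IP for that account.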

Unable to create a new listener in oracle 11g

Posted: 30 Sep 2013 03:24 PM PDT

In Net Manager I get an error message "A valid logfile is required" when I click on a listener and choose the '+' button, and then it keeps giving me that error when I click on any other tab or button in Net Manager.

Grant access to a table to all users

Posted: 30 Sep 2013 10:24 AM PDT

Is it possible to assign a grant to a table for all users, or a default permission so that when new users are created they will have the specific grants for that table to SELECT, UPDATE, INSERT and DELETE?
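If this is Oracle (an assumption; most engines have a close equivalent), granting to PUBLIC covers current and future users in one statement; schema and table names are placeholders:

GRANT SELECT, INSERT, UPDATE, DELETE ON myschema.mytable TO PUBLIC;

The usual caveat applies: PUBLIC really does mean everyone, so a dedicated role granted to new users by default is often the safer pattern.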

Performance difference between MySQL and PostgreSQL for the same schema/queries

Posted: 30 Sep 2013 07:24 PM PDT

I'm a newbie DBA, and I have experience in Microsoft SQL Server but I want to jump to FLOSS.

I'm starting a company, and we develop an app (PHP) with a Postgres backend, and we did some tests comparing with MySQL too. We observe that MySQL is twice as fast as PostgreSQL.

I did a tangible performance test:

  • Same columns in table with equivalent column datatypes.
  • Same number of rows.
  • Same indexes in both (primary key included).
  • The CPU load is idle, and the Postgres machine is significantly better.
  • And the same query (obviously).

What am I doing wrong?

P.S: I read many "howtos" on performance tuning for database engines.
P.S(2): We're using InnoDB (one file per table) on the MySQL database.


Hi Mat!

I ran the three common (and hardest) SELECT queries.

About the disk: it's certainly not the same; in Postgres it's an SSD (almost three times faster).

MySQL cache data:

+------------------------------+----------------------+
| Variable_name                | Value                |
+------------------------------+----------------------+
| binlog_cache_size            | 32768                |
| have_query_cache             | YES                  |
| key_cache_age_threshold      | 300                  |
| key_cache_block_size         | 1024                 |
| key_cache_division_limit     | 100                  |
| max_binlog_cache_size        | 18446744073709547520 |
| query_cache_limit            | 1048576              |
| query_cache_min_res_unit     | 4096                 |
| query_cache_size             | 16777216             |
| query_cache_type             | ON                   |
| query_cache_wlock_invalidate | OFF                  |
| table_definition_cache       | 256                  |
| table_open_cache             | 64                   |
| thread_cache_size            | 8                    |
+------------------------------+----------------------+

I don't know how to view this in PostgreSQL.

Thanks in advance.
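For the PostgreSQL side, the closest equivalent is querying pg_settings (a sketch; these three are the memory/cache settings most relevant to the comparison):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem');

Note that PostgreSQL has no result cache analogous to MySQL's query_cache, so identical repeated queries can legitimately benchmark faster on MySQL while that cache is ON.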

NoSQL approach: design table "scheme"

Posted: 30 Sep 2013 01:49 PM PDT

We need to store simple log data in a database, but we also need to get and filter data in realtime, based on 'user', a simple 'tag' (i.e. redis, memcache, php, mysql, etc.) and timestamp. We need to scale horizontally, with really fast data access on billions of rows.

In a SQL approach, table can be like this:

ID | timestamp | tag | user       | log text
1  | 19543532  | 1   | root       | { text log }
2  | 19543532  | 3   | redis-user | { text log }

where tags 1 and 3 are different and related to another table (i.e. tag_id | tag_name). I think this is a relational approach, and we can create three indexes (timestamp, tag and user) in order to speed up data access.

What is a good practice to reproduce this in a NoSQL database like DynamoDB (AWS), where we can create only HASH or RANGE indexes? Does a SQL database fit better than DynamoDB?

My first attempt is:

First table: ID hash index

ID | log text
1  | { text log }
2  | { text log }

Second table: USER, TIMESTAMP range index

user  | timestamp | id
root  | 123145122 | 1
redis | 123491241 | 2

Third table: TAG index

tag        | id
debug      | 1
production | 2

Thank you in advance!

User has no password set

Posted: 30 Sep 2013 03:07 PM PDT

I see these when running MySQLTuner.pl:

-------- Security Recommendations  -------------------------------------------
[!!] User '@debian' has no password set.
[!!] User '@localhost' has no password set.
------------------------------------------------------------------------------
  • Is it a security hole?
  • How to fix it?

Thanks
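These entries denote anonymous accounts (the user name before the @ is empty), which are a genuine security concern since they allow password-less logins. A minimal fix, sketched on the assumption you don't rely on them:

DROP USER ''@'localhost';
DROP USER ''@'debian';

Alternatively, running mysql_secure_installation removes the anonymous users (and the test database) interactively.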

[MS SQL Server] SWITCH statement: required indexes in target table?

SWITCH statement: required indexes in target table?

Posted: 29 Sep 2013 05:43 PM PDT

hello,

Is there some information about which indexes have to be created on a table to which I want to switch a partition? Books Online says: [i]The corresponding indexes, or index partitions, must also reside in the same filegroup.[/i] But it does not tell me whether the target table [b]has[/b] to have any indexes at all.

After some testing I got the following:

Test 1: A partitioned source table with a clustered PK. The target table, not partitioned, needs the same PK definition.
Test 2: A partitioned source table with a nonclustered PK and an additional clustered index. In this case the target table, not partitioned, does not need any index or PK to perform the switch.

Now I'd like to have the official rules for what is necessary on the target table. I did not find anything useful on Google. For people who would like to evaluate my testing results, here is my testing script:

[code="sql"]
-- create a test database
create database Test01;
go
USE Test01
GO
-- create the partition function and scheme
CREATE PARTITION FUNCTION [PFN_EventPartitionKey](smallint)
AS RANGE RIGHT FOR VALUES (1309, 1310)
GO
CREATE PARTITION SCHEME [PS_EventPartitionKey]
AS PARTITION [PFN_EventPartitionKey] TO ([PRIMARY], [PRIMARY], [PRIMARY] )
GO
-- create 2 data tables with different kinds of PK and clustered index
create table dbo.PartitionTest1
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on PS_EventPartitionKey( PartitionKey )
go
alter table dbo.PartitionTest1
  add constraint PartitionTest1_PK
  primary key clustered( Id, PartitionKey )
  on PS_EventPartitionKey( PartitionKey )
go
create table dbo.PartitionTest2
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on PS_EventPartitionKey( PartitionKey )
go
alter table dbo.PartitionTest2
  add constraint PartitionTest2_PK
  primary key nonclustered( Id, PartitionKey )
  on PS_EventPartitionKey( PartitionKey )
go
create nonclustered index cix_PartitionTest2
  on dbo.PartitionTest2 ( Col1 )
  on PS_EventPartitionKey( PartitionKey )
go
-- now create the target tables, into which we want to switch
create table dbo.PartitionTest1Archive
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on [PRIMARY]
go
--alter table dbo.PartitionTest1Archive
--  add constraint PartitionTest1Archive_PK
--  primary key clustered( Id, PartitionKey )
--  on [PRIMARY]
--go
-- this raises an error. we have to create the PK first (the comment here above)
alter table dbo.PartitionTest1 switch partition 2 to dbo.PartitionTest1Archive;
go
create table dbo.PartitionTest2Archive
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on [PRIMARY]
go
-- this works
alter table dbo.PartitionTest2 switch partition 2 to dbo.PartitionTest2Archive;
go
[/code]

Management Data Warehouse Purge

Posted: 29 Sep 2013 09:51 PM PDT

Does anyone know where the table core.source_info_internal gets the days_until_expiration value from?

I have an MDW database which has grown to epic proportions. It looks like the MDW purge SQL Agent job, which executes core.sp_purge_data, gets the valid_through date (which the purge is based on) via the core.snapshots view, from the days_until_expiration column of core.source_info_internal. The days_until_expiration value is set to 730 for a number of collections. I cannot seem to find this value in the Data Collection retention-days settings in the SQL Agent jobs, which are set to a more manageable number of days. Could these values have been changed via the agent jobs but not yet persisted to the table until a restart of the agent service?

drop auto stats

Posted: 29 Sep 2013 07:51 AM PDT

Dear experts, how do I drop all auto-created system statistics in the whole database? Thanks a lot.
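A sketch that generates the DROP statements for auto-created statistics (sys.stats.auto_created flags the _WA_Sys_* statistics the optimizer creates on its own):

[code="sql"]
SELECT 'DROP STATISTICS ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.'
       + QUOTENAME(o.name) + '.' + QUOTENAME(s.name) + ';'
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE s.auto_created = 1
  AND o.is_ms_shipped = 0;
[/code]

Copy the output into a new window and execute it; note the optimizer will simply re-create these over time while AUTO_CREATE_STATISTICS stays ON.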

[Articles] Hybrid Databases

Hybrid Databases

Posted: 29 Sep 2013 11:00 PM PDT

We are getting more and more types of data that we have to manage and store in our databases. Steve Jones notes that SQL Server can handle almost all your needs.
