Wednesday, July 31, 2013

[T-SQL] how to get members of date dimension from current month


how to get members of date dimension from current month

Posted: 30 Jul 2013 07:25 PM PDT

Hi, my query is:

select distinct calendarmonthkey, calendarmonth from dim.date order by calendarmonthkey

Result:

205801 jan 2058
205802 feb 2058
........
198001 jan 1980

Expected result:

201307 jul 2013
201306 jun 2013
201305 may 2013
...........
198001 jan 1980

Or else it could start with the previous month (jun 2013). Please suggest. Regards, Niharika
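A minimal sketch of one way to do this, assuming calendarmonthkey is an integer in yyyymm form: filter out future months and sort descending.

SELECT DISTINCT calendarmonthkey, calendarmonth
FROM dim.date
WHERE calendarmonthkey <= YEAR(GETDATE()) * 100 + MONTH(GETDATE())
ORDER BY calendarmonthkey DESC;

To start from the previous month instead, build the cutoff from DATEADD(MONTH, -1, GETDATE()).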

SSIS: cannot change a column's datatype using data conversion

Posted: 30 Jul 2013 09:42 PM PDT

Hi,

Source: multiple text files.

Requirements:
1. Define the age column as smallint.
2. Convert the gross column from "$ 667 00" to 667.00 and change its datatype from string to money.
3. Convert the date format from 12122012 to 12/12/2012 and change its datatype from string to date.
4. Finally, load all the flat files into one SQL Server table using a single destination.

I converted the gross column using a Derived Column transformation and the date column using SUBSTRING. Now I cannot change the gross and date columns' datatypes from string to money and date. Please help me solve this issue. Thanks in advance.
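If the SSIS casts keep failing, one hedged workaround (a sketch, not the only way) is to land gross and the date as strings in a staging table and convert them in T-SQL afterwards. This assumes the date strings are ddmmyyyy and gross always looks like "$ 667 00"; the staging table and column names are hypothetical:

SELECT CAST(age AS smallint) AS age,
       CAST(REPLACE(LTRIM(REPLACE(gross, '$', '')), ' ', '.') AS money) AS gross,  -- '$ 667 00' -> '667.00'
       CONVERT(date, SUBSTRING(orderdate, 5, 4) + SUBSTRING(orderdate, 3, 2) + SUBSTRING(orderdate, 1, 2), 112) AS orderdate  -- ddmmyyyy -> yyyymmdd
FROM dbo.Staging;

In SSIS itself the equivalent is a Derived Column that reshapes the strings first and then applies (DT_CY) and (DT_DBDATE) casts.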

T-SQL subquery

Posted: 30 Jul 2013 05:37 PM PDT

Hello guys, I have a performance question about which approach is better for the server. I have this select:

SELECT TOP 5 CONVERT(DATETIME, WOT.ORDER_DATE) ORDER_DATE, WOT.COMMIT_NO, WOT.BODY_NO, WOT.CAR_CODE, WOT.PROD_FLAG
FROM (
    SELECT TOP 2 WO.ORDER_DATE, WO.COMMIT_NO, WO.BODY_NO, WO.CAR_CODE + ' ' + WO.DRIVE_TYPE CAR_CODE, 'T' PROD_FLAG
    FROM TB_WORK_ORDER_TRSB1 WO
    LEFT JOIN TB_TRACKING_TRSB1 TR ON WO.ORDER_DATE = TR.ORDER_DATE AND WO.COMMIT_NO = TR.COMMIT_NO
    WHERE WO.ORDER_DATE <= @vLAST_ORDER_DATE
      AND WO.ORDER_DATE + WO.COMMIT_NO <= @vLAST_ORDERCOMMIT
      AND DATA_TYPE <> 'SD'
    ORDER BY WO.ORDER_DATE DESC, WO.COMMIT_NO DESC
    UNION ALL
    SELECT WO.ORDER_DATE, WO.COMMIT_NO, WO.BODY_NO, WO.CAR_CODE + ' ' + WO.DRIVE_TYPE CAR_CODE,
           CASE ISNULL(TR.WORK_COMPLETE,'') WHEN '' THEN 'F' WHEN 'F' THEN 'Y' WHEN 'T' THEN 'G' END PROD_FLAG
    FROM TB_WORK_ORDER_TRSB1 WO
    LEFT JOIN TB_TRACKING_TRSB1 TR ON WO.ORDER_DATE = TR.ORDER_DATE AND WO.COMMIT_NO = TR.COMMIT_NO
    WHERE WO.ORDER_DATE >= @vLAST_ORDER_DATE
      AND WO.ORDER_DATE + WO.COMMIT_NO > @vLAST_ORDERCOMMIT
      AND DATA_TYPE <> 'SD'
) WOT

Which is better for the server: adding the condition below to each WHERE clause inside the union, or adding it once at the end, in a WHERE on the derived table WOT?

AND WO.ORDER_DATE + WO.COMMIT_NO IN (
    SELECT DISTINCT ORDER_DATE + COMMIT_NO
    FROM TB_MASTER_TRSB1
    WHERE ORDER_DATE >= @vLAST_ORDER_DATE AND STATION_ID = @rSTATION_ID )

Here are the results from statistics:

--- NEW with MASTER in each union
CPU time = 0 ms, elapsed time = 0 ms.
Table 'TB_TRACKING_TRSB1'. Scan count 2, logical reads 66, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 1, logical reads 1153, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_WORK_ORDER_TRSB1'. Scan count 2, logical reads 34, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_MASTER_TRSB1'. Scan count 2, logical reads 364, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 31 ms, elapsed time = 26 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 31 ms, elapsed time = 26 ms.

--- NEW with MASTER in TEMP WOT
Table 'Worktable'. Scan count 1, logical reads 1678, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_MASTER_TRSB1'. Scan count 2, logical reads 364, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_WORK_ORDER_TRSB1'. Scan count 2, logical reads 31, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_TRACKING_TRSB1'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 31 ms, elapsed time = 25 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 31 ms, elapsed time = 26 ms.

What do you think about it?

Need Help TSQL Code.

Posted: 30 Jul 2013 07:54 AM PDT

I have a source file with a Full Name column:

Jim k Smith
Robert K Lin
Chris Tim

My destination has FirstName and LastName, so I want to parse FullName. I know how to use SSIS expressions to parse it; here they are:

FirstName = LTRIM(RTRIM(TRIM(REVERSE(SUBSTRING(REVERSE([Full Name]),1,FINDSTRING(REVERSE([Full Name])," ",1))))))
LastName = LTRIM(RTRIM(TRIM(SUBSTRING([Full Name],1,FINDSTRING([Full Name]," ",1)))))

Can anyone help me translate this into T-SQL? Thank you in advance.
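A minimal T-SQL sketch that grabs the first and last space-separated tokens (the same two tokens the expressions above pull out); the source table name is an assumption:

SELECT FullName,
       LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1) AS FirstWord,
       LTRIM(RIGHT(FullName, CHARINDEX(' ', REVERSE(FullName) + ' ') - 1)) AS LastWord
FROM dbo.SourceNames;  -- hypothetical table; map the tokens to FirstName/LastName as needed

Appending a space before CHARINDEX keeps it from returning 0 on single-word names.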

Universal CSV-Importer

Posted: 30 Jul 2013 07:17 AM PDT

Is there a way to import CSV files with a varying number of fields (all nvarchar(max)) with one stored procedure into one fixed table (with an Import-ID), without having to make many adjustments? The files are flat staging CSV files. Maybe with the pivot/unpivot functions, etc.? The field and row delimiters are always the same.

Thank you, kind regards,
Nicole ;-)
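One hedged approach (a sketch under assumptions, not a finished importer): BULK INSERT each file into a one-column staging table using a field terminator that never occurs in the data, so the whole line lands in one nvarchar(max) column regardless of the field count, and then split on the real delimiter in T-SQL.

CREATE TABLE dbo.CsvRaw (RawLine nvarchar(max));  -- hypothetical staging table

BULK INSERT dbo.CsvRaw
FROM 'C:\stage\file1.csv'                          -- path is an assumption
WITH (ROWTERMINATOR = '\n', FIELDTERMINATOR = '\0');

INSERT INTO dbo.ImportTarget (ImportID, RawLine)   -- hypothetical fixed target table
SELECT @ImportID, RawLine FROM dbo.CsvRaw;         -- split RawLine with a string-splitter function afterwards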

[SQL Server 2008 issues] Blocking Query


Blocking Query

Posted: 30 Jul 2013 07:02 PM PDT

Hi Team, I usually run 'sp_who2' to check locks. I found that some SPIDs show up in the BlkBy column, and the Command column shows 'SELECT'. I want to find the full text of the query that is blocked.
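A sketch of one common way to see the blocked statement's text using the SQL 2005+ DMVs:

SELECT r.session_id, r.blocking_session_id, r.wait_type, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

For the blocker's last statement, DBCC INPUTBUFFER(<spid>) is the classic companion.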

Convert Varchar to Date??

Posted: 30 Jul 2013 02:00 AM PDT

Hi, I have a date stored in a varchar as "Jul 24 2013 8:05AM". I would like to convert it to a date field to do calculations, like "2013-07-24 00:00:00.000". The time is not important. Thanks in advance, Joe
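That string happens to be in a format SQL Server parses by default, so a plain CONVERT works; a sketch that also strips the time:

SELECT CONVERT(datetime, 'Jul 24 2013 8:05AM') AS with_time,
       DATEADD(DAY, DATEDIFF(DAY, 0, CONVERT(datetime, 'Jul 24 2013 8:05AM')), 0) AS midnight_only;
-- yields 2013-07-24 08:05:00.000 and 2013-07-24 00:00:00.000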

Tool for User Administration for Level 1

Posted: 30 Jul 2013 05:16 PM PDT

Can you suggest a tool for user administration of SQL Server, such as creating users and assigning roles and permissions? My goal is to assign these activities to my level 1 team, who know nothing about SQL Server. I do not want to provide SQL Server Management Studio.

How to add an analysis server into SQL server 2008 R2?

Posted: 30 Jul 2013 08:49 AM PDT

I am very new to data warehousing. I just installed SQL Server 2008 R2, but when I accessed it there was no Analysis Services server, which is what I need for the data warehouse. I also downloaded the AdventureWorks DW 2008 database sample, but I did not know where to attach it. Does anyone know about this? Thanks in advance.

Identifying SQL Agent Job Name based on the job id.

Posted: 30 Jul 2013 11:33 AM PDT

I'm trying to figure out which job is running, based on the following name. Where can I find this information?

SQLAgent - TSQL JobStep (Job 0xFB668E27919DA3489E3DD97061F25B31 : Step 1)
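The hex value is just the job_id GUID in binary form, so it can be converted and looked up in msdb (a quick sketch):

SELECT name
FROM msdb.dbo.sysjobs
WHERE job_id = CONVERT(uniqueidentifier, 0xFB668E27919DA3489E3DD97061F25B31);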

Need a help in MCSA SQL Server exam 2012 70-461, 462, 463

Posted: 30 Jul 2013 09:32 AM PDT

Hello friends, after gaining good experience at my current workplace, I am planning to get certified in MS SQL Server 2008/2012. I went through the webpage that provides the exam information, and I am looking for any preparation material or website that can help me prepare, because I cannot afford the books mentioned on the website for exam preparation. Please let me know where I can start reading or find practice materials. Thanks in advance, Pratik.

Getting Back Previous Database

Posted: 30 Jul 2013 02:55 PM PDT

Dear, I have restored a database over an existing database, but there was some important information in the previous database and I forgot to back it up. Now I want to get the previous database back. How can I do it? I am using SQL Server 2008 R2. Please help me. Regards, Akbar

SSIS Lookup Transform Bulk insert error - stumped!

Posted: 30 Jul 2013 01:36 PM PDT

Ok - I'm stumped. I have an SSIS package that reads our CDR data, and I need to do an incremental load nightly to add only new records. I can successfully run the package and insert directly into the table without the Lookup transform, but as soon as I try to check whether any of the records are already in the destination table, the Lookup transform fails with the following error:

[Error Rows [169]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".".
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.".
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "The Bulk Insert operation of SQL Server Destination has timed out. Please consider increasing the value of Timeout property on the SQL Server Destination in the dataflow.".

I have sysadmin rights on the server. I am running in admin mode with 32-bit enabled. Bulk insert is allowed on the server, and the package is run locally on the server using the bulk insert operation.

SSIS data flow task: flat file source -> derive some columns -> conditional split to find all non-null values for insert -> lookup transform to see if the record exists -> insert new records into OLE DB destination table.

I'm stumped on how to get the Lookup transform to work. Is there another option to insert only new records via SSIS efficiently? Thanks in advance for your help! Katie

Deploy multiple SSIS Packages in one batch file to SQL Server

Posted: 27 Mar 2011 07:08 PM PDT

Hi, I need to write a batch file which can deploy multiple SSIS packages kept at one location. The batch file should iterate over all the SSIS packages and deploy them one by one. The server to deploy to should be passed as a parameter.

Output query result to excel sheet

Posted: 30 Jul 2013 12:29 AM PDT

Hi, I want to put the results of a SQL script into Excel. The script returns three different result sets, like this:

Select * from table1
Select * from table2
Select * from table3

Find out if index rebuilding/reorganizing is happening on a certain table

Posted: 30 Jul 2013 11:11 AM PDT

Dear experts, is there a query to find out whether an index rebuild/reorganize is currently running on table X, so that I can make my program wait for it to complete before issuing certain commands against the table (for example SqlBulkCopy, which fails consistently if a re-index is happening at the same time)? I know dropping and recreating the indexes would be a workaround, but with the way my application works, I can't afford to drop the indexes or to stop the re-indexing process. Thanks for your help!
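One hedged way to poll for an in-flight rebuild/reorganize is to look for the statement text among the active requests; the LIKE filters and the table name are assumptions:

SELECT r.session_id, r.command, r.start_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%ALTER INDEX%' AND t.text LIKE '%TableX%';

The application could wait and retry the SqlBulkCopy while this returns rows.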

openquery single quote

Posted: 30 Jul 2013 09:44 AM PDT

Hi All, I have an application which automatically generates SQL. To connect my application to my SAAS database I have to use a linked server. The problem I have is with single quotes, because the SQL sent through the linked server has to be enclosed in single quotes. The string my application adds to the SQL is:

a.acctnum like 'ME4%' and e.entityid='100AAA'

I need a way of taking this WHERE clause and adding an additional set of single quotes.
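Inside an OPENQUERY string literal, each embedded single quote is escaped by doubling it; a sketch (the linked server and table names are hypothetical):

SELECT *
FROM OPENQUERY(SAASLINK,
    'select * from sometable a where a.acctnum like ''ME4%'' and a.entityid = ''100AAA''');

If the clause arrives in a variable, REPLACE(@clause, '''', '''''') performs the same doubling before the string is concatenated into the OPENQUERY text.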

Opening .dts package from SQL Server 2000 in Visual Studio 2008

Posted: 30 Jul 2013 07:39 AM PDT

I can't believe I'm having this much trouble with this, but I am stuck. I have a .dts package that was saved to a SQL Server 2000 database. Unlike what I'm used to, I cannot seem to access this package in Visual Studio 2008. I know there will be a compatibility issue; the main thing I am trying to do is work out how to do this, so that I have a solution ready when I need to make these changes in the near future. Can anyone offer any advice? Can you tell me if what I'm seeing sounds correct (i.e. I cannot track down an actual .dts file; I only see the package created on the server - I thought it was SQL Server 2005 that first allowed that)? Thank you, Jarid

CheckDB inconsistency error

Posted: 29 Jul 2013 11:36 PM PDT

Hello there, I have an issue with CheckDB giving the error below:

If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
checking xxxx [SQLSTATE 01000]
Msg 8914, Sev 16, State 1, Line 1: Incorrect PFS free space information for page (1:208) in object ID 60, index ID 1, partition ID 281474980642816

Is there anything I can do about it? I have read that this is a bug. Thank you in advance.

Catch text before symbol

Posted: 30 Jul 2013 03:15 AM PDT

Hi geniuses, consider:

create table #mytable (Value varchar(50))
INSERT INTO #mytable (Value) SELECT 'First.Second'

I need to isolate what's before the symbol '.'. I was able to catch what's after the symbol:

SELECT Value, (substring(Value, charindex('.', Value) + 1, LEN(Value))) AS 'afterdot'
FROM #mytable

Thanks in advance!
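For the part before the dot, LEFT with CHARINDEX works; appending a '.' guards against values that contain no dot (a minimal sketch):

SELECT Value,
       LEFT(Value, CHARINDEX('.', Value + '.') - 1) AS beforedot
FROM #mytable;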

SQL Native Client - How can you tell it's being utilised as a connection?

Posted: 29 Jul 2013 07:53 PM PDT

Hi All, apart from turning it off along with all of the associated protocols, how can you tell whether SQL Native Client is being used as a connection? Thanks in advance.

ID and Exclude Top & Bottom 2.5 by Contract

Posted: 30 Jul 2013 12:04 AM PDT

Hi All, I'm trying to filter out of my results the top and bottom 2.5 percent of records by meeting length, per contract. I currently have this query, which identifies the top and bottom 2.5% as a whole, not the top 2.5% for contract X, the top 2.5% for contract Y, etc.:

, x2 AS (
    SELECT caseref, [contract]
    FROM ( SELECT TOP 2.5 PERCENT caseref, [contract]
           FROM @avgep
           ORDER BY DATEDIFF(SECOND, casereceived, CaseEnd) ) AS lowest
    UNION ALL
    SELECT caseref, [contract]
    FROM ( SELECT TOP 2.5 PERCENT caseref, [contract]
           FROM @avgep
           ORDER BY DATEDIFF(SECOND, casereceived, CaseEnd) DESC ) AS highest
)

I imagine a CTE is involved, but I can't see the wood for the trees at the moment. Any help would be great.
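A sketch using NTILE (available since SQL Server 2005): 40 tiles of 2.5% each, partitioned per contract, so tiles 1 and 40 are the bottom and top 2.5% within each contract.

WITH ranked AS (
    SELECT caseref, [contract],
           NTILE(40) OVER (PARTITION BY [contract]
                           ORDER BY DATEDIFF(SECOND, casereceived, CaseEnd)) AS tile
    FROM @avgep
)
SELECT caseref, [contract]
FROM ranked
WHERE tile IN (1, 40);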

multiple insert from one table depending on the number of rows in another table

Posted: 30 Jul 2013 01:17 AM PDT

Hi, I have 2 tables, shown below, which I import into SQL Server from supplied text files. I would like to repeatedly select/insert all the rows from Table1 into a new table, once for each row in Table2, adding the ID from Table2 to each copy, as per the result table below. Both Table1 and Table2 can have a variable number of rows. I'm not sure how to accomplish this at the moment; maybe a foreach loop or a cursor? Any help would be appreciated.

Table1:
chr   start   end
chr1  14501   18250
chr1  102501  103750

Table2:
ID
100001
100007

Result:
chr   start   end     ID
chr1  14501   18250   100001
chr1  102501  103750  100001
chr1  14501   18250   100007
chr1  102501  103750  100007
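No loop or cursor is needed; a CROSS JOIN pairs every Table1 row with every Table2 ID (table and column names taken from the post, target table name hypothetical):

SELECT t1.chr, t1.start, t1.[end], t2.ID
INTO dbo.ResultTable          -- hypothetical target
FROM Table1 AS t1
CROSS JOIN Table2 AS t2;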

configure replication

Posted: 29 Jul 2013 09:13 PM PDT

How do I configure replication between servers in different domains?

Non-Pageable Memory

Posted: 29 Jul 2013 10:17 PM PDT

I am suddenly stuck on a point of confusion (I might be wrong, too); I will really appreciate any light shed on it. AWE memory is non-pageable, right? That means it cannot hard-fault. Then why do we need the Lock Pages in Memory privilege?

Migrating System Databases Between Clusters

Posted: 29 Jul 2013 10:26 PM PDT

Hi there, I need to migrate a 2008 R2 SQL cluster to a new site (unfortunately, physically moving the kit is not an option). We will prepare a new cluster to be identical to the existing one in terms of SQL version, file structure, etc. Once that is ready, I would ideally like to back up and restore the system databases to the new cluster to make the process easier. Does anyone know if there are any special considerations when transferring system databases from one cluster to another, or is it exactly the same process as for stand-alone servers? I'm aware moving system databases can sometimes be a bit of a minefield, but it would save a lot of work! Thanks, Matt

Find and update the Schema changes of a table?

Posted: 26 Jul 2013 12:02 AM PDT

Hi Friends, I have around 25 tables which I frequently use to pull data from the live server to my local machine. The schemas of the live database change frequently; we add one or more columns every time. When I use my SSIS package to pull the data from the live server to my local machine, it gets messed up: I have to drop the database, recreate it with the new schemas, and update the package as well. So what I am looking for is (just curious):

1. How do we create a script to update the schema? I know the Red Gate tool does this schema comparison, but is there a way to generate SQL statements to update the schema? (I can't buy the Red Gate software; the price is not affordable :-))
2. How can we avoid the SSIS package getting this error? If we have added some fields to a table, then I have to update the package as well before using it.

Any suggestions would be really appreciated. Sorry if my questions don't make sense...

Table Partition

Posted: 29 Jul 2013 09:52 PM PDT

Hi All, I have partitioned the VOICE_CALL table, size = 130 GB (split into 10 ndf files). Before partitioning, the mdf file was 350 GB; after partitioning, it is 351 GB. The 10 ndf files total 99 GB, so the whole database is now 449 GB. Why did the mdf file size not decrease, even though I moved the data to the ndf files? Thanks, Ramana
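Moving tables into ndf files leaves empty space inside the mdf; the file itself only shrinks if you explicitly ask. A hedged sketch (the database name, logical file name, and target size are assumptions, and note that shrinking fragments indexes, so rebuild them afterwards):

USE VOICE_DB;  -- hypothetical database name
DBCC SHRINKFILE (N'VOICE_DB_Data', 250000);  -- logical name of the mdf, target size in MB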

SAN disk size expansion

Posted: 25 Jul 2013 04:04 PM PDT

Hi, we were trying to expand a SAN disk beyond 2 TB as part of our requirements, but it failed due to a limitation: NTFS can supposedly support a maximum of 2 TB. Is it so? I was unaware of such Windows limitations! On the other hand, how can we work around this and expand a SAN disk beyond 2 TB? Any suggestion would be highly appreciated.

My environment: Windows Server 2008 R2 Enterprise Edition; SQL Server 2008 R2 Enterprise Edition; 6 SAN disks configured at >1 TB to <1.5 TB each. We want to expand those SAN disks to more than 3 TB each. Thanks!

Tuesday, July 30, 2013

[how to] PostgreSQL kill - Sighup pid


PostgreSQL kill - Sighup pid

Posted: 30 Jul 2013 08:41 PM PDT

" To reload the configuration files, we send the SIGHUP signal to the postmaster, which then passes that on to all connected backends. That's why some people call reloading the server "sigh-up-ing".

As reloading the configuration file is achieved by sending the SIGHUP signal, we can reload the configuration file just for a single backend using the kill command.

First, find out the pid of the backend using pg_stat_activity. Then, from the OS prompt, issue the following:

kill -SIGHUP pid "

I don't understand the words in bold. We have many pids for the backends, so if we kill one "pid", how does that backend get the changes from reloading the configuration file (postgresql.conf)?

Many thanks.
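For example (pg_stat_activity exposes the backend pids; the column is procpid before PostgreSQL 9.2 and pid from 9.2 on):

SELECT pid, usename, application_name FROM pg_stat_activity;
-- then, from the OS shell, reload just that one backend:
--   kill -SIGHUP 12345
-- or reload every backend at once from SQL:
SELECT pg_reload_conf();

Sending SIGHUP to a single backend pid makes just that backend re-read postgresql.conf; sending it to the postmaster (or calling pg_reload_conf()) propagates it to all backends.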

Need advice for table design for multi-user access

Posted: 30 Jul 2013 03:33 PM PDT

I have an inventory application that needs to support multi-user. As of right now, only one user can access and manage their items. I've gotten a lot of requests to support multiple users so they can all manage the same inventory.

I have a table called user_items that stores item information. To keep it simple, I'll include just the relevant column names.

mysql> select primary_item_id, user_id, item_name from users_item limit 1;
+-----------------+---------+-----------------+
| primary_item_id | user_id | item_name       |
+-----------------+---------+-----------------+
|             100 |       4 | Stereo Receiver |
|             101 |       5 | Couch           |
+-----------------+---------+-----------------+

I've created a mapping table to map the items to users.

+-------------+-------------+----------------+------------+-----------+
| map_user_id | map_item_id | unique_item_id | item_owner | privilege |
+-------------+-------------+----------------+------------+-----------+
|           4 |       100   |              1 |          1 |      NULL |
|          13 |       100   |              1 |          1 |      NULL |
|           5 |       101   |              1 |          5 |      NULL |
+-------------+-------------+----------------+------------+-----------+

The unique_item_id column is the item_id that's displayed to the users. So item #1 for user #4 is "Stereo Receiver." Item #1 for user #5 would be a couch. The item_owner field doesn't mean much for the time being. I'm not sure if I need it but it's there for now as I play with the schema and code.

Anyways, this works fine, except I need multiple users to track the same item(s). Instead of offering the opportunity to share items AND track their own items, my version of "multi-user" means they have to track the same exact number of items. If user #13 adds a new item, user #4 also has access to said item.

Any suggestions? I think I shot myself in the foot by offering unique ids for each item. It is what it is so now I have to work with what I have.

ORDER BY items must appear in the select list [...]

Posted: 30 Jul 2013 02:28 PM PDT

Using Microsoft SQL Server 2008, I get the following error.

Msg 104, Level 16, State 1, Line 43
ORDER BY items must appear in the select list if the statement contains a UNION, INTERSECT or EXCEPT operator.

The query I am using is kind of complex, but the CASE statement inside the ORDER BY clause cannot see the aliased column name; here is a brief example.

SELECT 1 AS foo, 2 AS bar
UNION ALL
SELECT 10 AS foo, 20 AS bar
ORDER BY CASE WHEN foo = 2 THEN 1 END;

In my production query the left-query needs to be ordered by the column [360_set] found in the table, and the right-query needs to be ordered as if [360_set] was null.

How do I fix this error, and why does this syntax generate an error?
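The usual fix is to give the whole UNION a derived-table (or CTE) name, so the ORDER BY runs over a plain select list where the aliases are visible; a sketch against the brief example above:

SELECT foo, bar
FROM ( SELECT 1 AS foo, 2 AS bar
       UNION ALL
       SELECT 10 AS foo, 20 AS bar ) AS u
ORDER BY CASE WHEN foo = 2 THEN 1 END;

The error occurs because with set operators the ORDER BY is applied to the combined result, whose only addressable columns are the ones named in the select lists.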

Here is the version info,

Microsoft SQL Server Management Studio     10.0.5512.0
Microsoft Analysis Services Client Tools   10.0.5500.0
Microsoft Data Access Components (MDAC)    6.1.7601.17514
Microsoft MSXML                            3.0 6.0
Microsoft Internet Explorer                9.10.9200.16635
Microsoft .NET Framework                   2.0.50727.5472
Operating System                           6.1.7601

is it possible to pass the message from sp_add_alert to the job?

Posted: 30 Jul 2013 02:41 PM PDT

The sp_add_alert stored procedure can react to different system messages and execute a job in response. It can also notify a person with the message text by email, pager, or net send.

But how do I pass the sysmessage message (corresponding to the event that caused the alert) not to a person but to the job that is executed in response to the alert?

Let's consider this message:

select [description] from sysmessages where msglangid = 1033 and error = 829  

This will yield:

Database ID %d, Page %S_PGID is marked RestorePending, which may indicate disk corruption. To recover from this state, perform a restore.

I'd like to receive this message in the job, so the job knows what %d and %S_PGID caused a problem.
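One hedged option is SQL Server Agent's token replacement inside the job step that the alert fires (assuming SQL 2005 SP1+ token syntax, and that "Replace tokens for all job responses to alerts" is enabled in the Agent properties; the logging table is hypothetical):

-- T-SQL job step body, a sketch:
INSERT INTO dbo.AlertLog (error_number, severity, message)
VALUES ($(ESCAPE_SQUOTE(A-ERR)), $(ESCAPE_SQUOTE(A-SEV)), N'$(ESCAPE_SQUOTE(A-MSG))');

A-MSG carries the message text of the error that raised the alert, including the substituted %d and %S_PGID values.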

Looking for one value in multiple tables and still return rows if finds it in any of those tables

Posted: 30 Jul 2013 01:15 PM PDT

SELECT a.id, a.Nome, d.email, e.ddd, e.telefone
FROM cadClientes as a
join cadCliente_ParamsPF as b on a.id = b.idCliente
join Enderecos as c on a.id = c.idCliente
join Emails_Clientes as d on a.id = d.idCliente
join Telefones_Clientes as e on a.id = e.idCliente
join Contatos_Clientes as f on a.id = f.idCliente
WHERE idTipoCliente = 1
ORDER BY a.id

Based on that query, I am looking for a way to search for a name in cadClientes.name and Contatos_Clientes.name, and return rows if the name is found in either cadClientes or Contatos_Clientes.

The problem is that if I don't have any row in Contatos_Clientes that refers to a client in cadClientes, SQL Server returns 0 rows.

Is there a way to implement that in one query, or would I have to use multiple queries?
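A sketch of one way: make the Contatos_Clientes join a LEFT JOIN and test the name on either side in the WHERE clause (the f.Nome column name is an assumption):

SELECT DISTINCT a.id, a.Nome
FROM cadClientes AS a
LEFT JOIN Contatos_Clientes AS f ON a.id = f.idCliente
WHERE a.idTipoCliente = 1
  AND (a.Nome LIKE '%' + @nome + '%' OR f.Nome LIKE '%' + @nome + '%');

With a LEFT JOIN the client row survives even when no contact rows exist, so the OR can still match on a.Nome.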

SHRINKFILE best practices and experience

Posted: 30 Jul 2013 01:27 PM PDT

Preamble: In general it's a big no-no, but believe me, there are rare cases when the space is really needed. For example, Express Edition is limited to 10GB. Imagine that you discovered that with a data type conversion (a blob column) you can free up a significant amount of space. But after that the DB file still has the same size, as we know, and the 10GB limit also didn't change magically. So some kind of SHRINK is needed. That was an example.

In my test environment I performed:

DBCC SHRINKFILE (Envision, NOTRUNCATE)
DBCC SHRINKFILE (Envision, TRUNCATEONLY)

That did the trick (I know that minimizes the free space; in the real world I would leave free space), but it took many, many hours to finish. As we know, it's a single-threaded process (see "Strange behaviour DBCC Shrinkfile": "it works as a series of very small system transactions so there is nothing to rollback." - Paul Randal, http://www.sqlservercentral.com/Forums/Topic241295-5-1.aspx). We also know that it messes up index fragmentation big time (http://www.mssqltips.com/sqlservertip/2055/issues-with-running-dbcc-shrinkfile-on-your-sql-server-data-files/), and I can confirm that. I didn't experience the log file growth described in http://www.karaszi.com/SQLServer/info_dont_shrink.asp, though.

I issued some INDEX REBUILD and REORGANIZE and those finished within seconds.

My questions:

  1. What's the big deal about shrink if I can just fix the index fragmentation after the shrink within seconds? I don't understand. Here on DBA stackexchange the "shrink" topic (http://dba.stackexchange.com/tags/shrink/info) says "Pretty much the worst thing you could do to a SQL Server database. In short: It sacrifices performance to gain space." plus refers to another popular article about it. But index fragmentation can be fixed.
  2. Why I didn't experience any log file growth?
  3. What if I REBUILD the indexes first, after the space free-up operation? Can that substitute for the first DBCC SHRINKFILE (Envision, NOTRUNCATE), so that I just need DBCC SHRINKFILE (Envision, TRUNCATEONLY)? I have a feeling the two work on different logical levels, but I have to ask.

Bottom line: I promise I won't do shrink regularly or anything. But this is a situation where a cap is hit and shrink is needed.

How do I let SQL Server 2005 know that I changed the host name on an Amazon AWS server?

Posted: 30 Jul 2013 12:46 PM PDT

I am using an Amazon AWS SQL Server 2008 R2 server for dev purposes. In order to follow a new naming convention, we changed the name of the host. I then tried to let SQL Server know about this server name change, with the usual:

EXEC sp_dropserver '<oldname>'
GO
EXEC sp_addserver '<newname>', 'local'
GO

But then SQL Server complains that server oldname does not exist.

When I do this: select @@SERVERNAME

I get back a host name that starts with 'IP-' followed by some hex. Apparently Amazon does some funky DNS aliasing behind the scenes and comes up with its own internal name, even though I am using oldname and SQL Server itself thinks it's oldname.

How do I let SQL Server know that the name of the server is now newname?
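A sketch: read back whatever name the instance currently holds and drop that, rather than guessing (the 'IP-...' literal below is hypothetical):

SELECT name FROM sys.servers WHERE server_id = 0;  -- the name SQL Server thinks it has
EXEC sp_dropserver 'IP-0A1B2C3D';                  -- use the value returned above
EXEC sp_addserver 'NEWNAME', 'local';
-- restart the SQL Server service, then verify:
SELECT @@SERVERNAME;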

thanks aj

ORA-03113: end-of-file on communication channel Can not connect

Posted: 30 Jul 2013 12:24 PM PDT

This is the log file:

Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_arc0_5024.trc:
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_3_8M981VNW_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) Das System kann den angegebenen Pfad nicht finden.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_lgwr_4064.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) Das System kann den angegebenen Pfad nicht finden.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_lgwr_4064.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) Das System kann den angegebenen Pfad nicht finden.
Errors in file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_ora_664.trc:
ORA-00313: open failed for members of log group 1 of thread
ORA-00312: online log 1 thread 1: 'D:\ORACLEBACKUP\XE\ONLINELOG\O1_MF_1_8M981W22_.LOG'
USER (ospid: 664): terminating the instance due to error 313
System state dump requested by (instance=1, osid=664), summary=[abnormal instance termination].
System State dumped to trace file E:\ORACLEXE\APP\ORACLE\diag\rdbms\xe\xe\trace\xe_diag_3556.trc
Dumping diagnostic data in directory=[cdmp_20130730170815], requested by (instance=1, osid=664), summary=[abnormal instance termination].
Instance terminated by USER, pid = 664

Does somebody have an idea how to solve the problem? (The O/S error is German for "The system cannot find the path specified.")

I am working on Windows Server. Do you need any more information?

Which all system parameters to be considered for standard Vacuum process

Posted: 30 Jul 2013 05:25 PM PDT

We want to run the standard vacuum process on our production database, which is over 100 GB and has millions of dead tuples.

Can anyone suggest which system parameters we need to keep in mind when setting the cost-based vacuum settings: CPU, IO, memory, disk?

We cannot run VACUUM FULL, as the database should be up and running continuously, so we just want to find the most appropriate values without affecting the system much.
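For reference, these are the cost-based vacuum delay settings in postgresql.conf. The values shown are the shipped defaults except vacuum_cost_delay, which defaults to 0 (disabled); treat the 20ms as an illustrative assumption:

vacuum_cost_delay = 20ms      # pause after each cost budget is spent (0 = no throttling)
vacuum_cost_limit = 200       # cost budget accumulated before pausing
vacuum_cost_page_hit = 1      # cost of a page found in shared buffers
vacuum_cost_page_miss = 10    # cost of a page read from disk
vacuum_cost_page_dirty = 20   # cost of dirtying a page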

Calculated Measure to get only most current from one dimension on snapshot fact but keep other filters

Posted: 30 Jul 2013 03:11 PM PDT

I'm working on a tabular cube in SSAS 2012 SP1 CU4. I have 3 dimensions (Requisition, Requisition Status, Date) and 1 fact (Requisition Counts). My fact table is at the grain of requisitionKEY, RequisitionStatusKEY, SnapshotDateKey.

I have calculated measures that essentially get the lastnonempty value (like a semi-additive measure) for the given period whether it is Year, Month, Or Date:

Openings:=CALCULATE(Sum('Requisition Counts'[NumberOfOpeningsQT]),   Filter('Date','Date'[DateKey] = Max('Requisition Counts'[SnapshotDateKEY])))    

This works well until you throw Requisition Status into the mix. I have rows for each requisition for every day in 2013. For one of the requisitions, the Requisition Status was Pending for the first 6 months and then it changed to Approved on all data from July 1 to date. When I summarize the number of openings for the requisition at the month level for July, users see two rows: the sum of the openings for the last populated day in July that it was pending, and the sum of the openings for the last populated day in July that it was approved (see the pivot table screenshot).

Although the total of 2 is correct, I need to change this calculation so that I only get the most current requisition status for the date period selected (Approved) and either show 0 or null or blank for the Pending approval line in the pivot table.

The Requisition Status table is shown in the attached screenshot.

Update: Here is a link to a PowerPivot model I made that has some scrubbed data in it to help answer this question. This should better explain the table schemas. The NumberOfOpeningsQT field is basically the number of open positions they have for that job. Sometimes it is 1, sometimes is is more than 1. It doesn't usually change over time, but I guess it could. I'm trying to make the Openings calculation give me the desired answer. I threw some other calculations in there so show some things I had tried that had promise but that I couldn't get to work.

Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop

Posted: 30 Jul 2013 06:45 PM PDT

I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get these; but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform.

Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it?

Thanks...

Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour!

Posted: 30 Jul 2013 05:45 PM PDT

I have a server right now receiving more raw data files in 1 hour than I can upsert (insert -> merge) in an hour.

I have a table with 100M (rounded up) rows. Table is currently MyISAM. The table has 1000 columns mostly boolean and a few varchar.

Currently the fastest way I've found to get the information into my DB was:

  1. Process raw data into CSV files.
  2. LOAD DATA INFILE into the rawData table.
  3. Insert the rawData table into Table1 (on duplicate key, run my function).
  4. Truncate rawData.
  5. Repeat.

This worked fine until I was merging 6M+ rows into 100M rows and expecting it to take under an hour.

I have 16 GB of RAM, so I set my key buffer to 6 GB, my query cache pool to 16 MB, and my query cache limit to 10 MB. I would just replace the information, but it has to be an upsert: update the fields that are true if the row exists, and insert if it does not.

Things I'm looking into at the moment: possibly switching the table to InnoDB? I'm not sure about the performance; the insert into an empty table is fine, it's the merge that's slow.

Maybe allowing more table cache? Or even query cache?

Merge Code:

b.3_InMarket = (b.3_InMarket OR r.3_InMarket),

To compare my 2 bool columns.

Update

  • Ok I set Raid0
  • Changed my query to Lock Write on tables when inserting
  • When importing csv im disabling keys then re-enabling them before upsert.
  • Changed concurrent_insert to 2
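For reference, a sketch of the upsert shape described above in MySQL (the column names are examples from the post; backticks are needed because they start with digits):

INSERT INTO Table1 (id, `3_InMarket`)
SELECT id, `3_InMarket`
FROM rawData
ON DUPLICATE KEY UPDATE `3_InMarket` = (`3_InMarket` OR VALUES(`3_InMarket`));

In the ON DUPLICATE KEY UPDATE clause, the bare column refers to the existing row's value and VALUES() to the incoming one, which matches the OR-merge rule quoted above.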

How un-clustered is a CLUSTER USING table

Posted: 30 Jul 2013 12:51 PM PDT

I have some tables which benefit greatly from CLUSTER ON/CLUSTER USING in PostgreSQL:

# CLUSTER table USING index_name;  # ANALYZE VERBOSE table;  # CLUSTER VERBOSE;  

A maintenance task periodically runs CLUSTER VERBOSE to keep things fresh. But is there a test I can run to see how fragmented the table is, prior to running CLUSTER VERBOSE? Maybe something like:

# CLUSTER ANALYZE  table 40000 records. 4000 observed clusters, 5000 potential clusters (20% fragmentation)  

Note that I use CLUSTER so data accessed at the same time is "defragmented" into a small number of disk blocks. For example I have thousands of attributes that go with each page. a CLUSTER page_attribute USING page_id; puts all the attributes next to each other, greatly reducing disk load.
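There is no CLUSTER ANALYZE, but pg_stats exposes a per-column correlation statistic (populated by ANALYZE) that serves as a rough clustering measure: values near ±1 mean the physical row order tracks the column order, values near 0 mean it doesn't.

SELECT attname, correlation
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'page_attribute';

Checking the correlation of the CLUSTER key column before re-running CLUSTER is a hedged approximation of the fragmentation test asked for above.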

Bitmask Flags with Lookup Tables Clarification

Posted: 30 Jul 2013 07:45 PM PDT

I've received a dataset from an outside source which contains several bitmask fields stored as varchars, ranging in length from 3 bits to 21 bits. I need to be able to run SELECT queries based on these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags, HasHouse,HasCar,HasCat,HasDog, 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is ok, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data is why I'm trying to find alternative solutions.

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 30 Jul 2013 01:45 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. Trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

SQLite writing a query where you select only rows nearest to the hour

Posted: 30 Jul 2013 04:45 PM PDT

I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; 00:59:55 could be followed by 01:01:01), and days may be missing.

What I need is the row nearest to the hour, with the timestep rounding to the hour, as long as the nearest value is not more than 30 minutes away from the hour.

Where a matching hour could not be found it would be helpful if the query could include a time but no value.

I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James

BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out:

SELECT strftime('%S',time, 'unixepoch'),strftime('%M',time, 'unixepoch'),strftime('%H',time, 'unixepoch'), strftime('%d',time, 'unixepoch'), strftime('%m',time, 'unixepoch'), strftime('%Y',time, 'unixepoch'), value from Timestream where idSensor=359;  
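A sketch of the nearest-to-the-hour pick, leaning on SQLite's documented behavior (3.7.11+) that a bare column in a MIN() aggregate query comes from the row that achieved the minimum; rounding to the nearest hour already guarantees the distance is at most 30 minutes:

SELECT (time + 1800) / 3600 * 3600                   AS hour_ts,  -- unix time rounded to the hour
       value,
       MIN(ABS(time - (time + 1800) / 3600 * 3600))  AS dist_sec  -- seconds from the hour
FROM Timestream
WHERE idSensor = 359
GROUP BY (time + 1800) / 3600 * 3600;

Hours with no readings simply produce no row; emitting a placeholder row with a time but no value would need a generated series of hours to LEFT JOIN against.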

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 30 Jul 2013 08:46 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database on the master server, and the same changes are being applied on the slave database server.

But sometimes, for unknown reasons, replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'  'Rpl_semi_sync_master_no_tx', '0'  'Rpl_semi_sync_master_status', 'ON'     <<-------------  'Rpl_semi_sync_master_timefunc_failures', '0'  'Rpl_semi_sync_master_tx_avg_wait_time', '338846'  'Rpl_semi_sync_master_tx_wait_time', '29479685'  'Rpl_semi_sync_master_tx_waits', '87'  'Rpl_semi_sync_master_wait_pos_backtraverse', '0'  'Rpl_semi_sync_master_wait_sessions', '0'  'Rpl_semi_sync_master_yes_tx', '3106'  

Ideally, in semi-synchronous replication, when the sync breaks the status should show OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.

Thought about this SQL Server backup plan?

Posted: 30 Jul 2013 01:01 PM PDT

I just started a new job, and I'm reviewing the database maintenance plan. I've got quite a bit of experience writing SQL, but not much experience with DB administration. My last job was at a large company, and they didn't let regular people touch that sort of stuff.

We are locked into SQL Server 2000 (it's embedded in some quite old software and we can't upgrade yet). The current maintenance plan (Full Recovery model) does the following:

Every hour from 6am - 11pm:

  1. backup log Accounting to Accounting_Logs with noinit

Every night at 1am, this happens:

  1. backup Log Accounting WITH TRUNCATE_ONLY
  2. DBCC SHRINKDATABASE (Accounting, TRUNCATEONLY)
  3. backup database Accounting_ReadOnly to Accounting with init

Then at 3am:

  1. all the indexes are rebuilt

Is this a decent plan? Will this give us good backups that are easy to recover? I know I'm asking for a lot, but any thought/comments/suggestions would be appreciated.

Please let me know if you need more information. Thanks!

Can I use a foreign key index as a shortcut to getting a row count in an INNODB table?

Posted: 30 Jul 2013 12:33 PM PDT

I have a table that has a large number of rows in it.

The primary key (an auto-incrementing integer) is, by default, indexed.

While waiting for a row count to be returned I did an EXPLAIN in another window and the the results were as follows:

mysql> SELECT COUNT(1) FROM `gauge_data`;  +----------+  | COUNT(1) |  +----------+  | 25453476 |  +----------+  1 row in set (2 min 36.20 sec)      mysql> EXPLAIN SELECT COUNT(1) FROM `gauge_data`;  +----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+  | id | select_type | table      | type  | possible_keys | key             | key_len | ref  | rows     | Extra       |  +----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+  |  1 | SIMPLE      | gauge_data | index | NULL          | gauge_data_FI_1 | 5       | NULL | 24596487 | Using index |  +----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+  1 row in set (0.13 sec)  

Since the primary key is guaranteed to be unique, can I just take the number of rows from the EXPLAIN and use that as the row count of the table?

BTW, I believe the difference in numbers is due to the fact that more data is continually being added to this table.
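The same estimate is available without EXPLAIN from the data dictionary, with the same caveat that for InnoDB it is an approximation, not an exact count:

SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'gauge_data';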

Download SQL Server profiler for SQL Server Management Studio

Posted: 30 Jul 2013 01:02 PM PDT

How can I profile a SQL Server 2008 database to see code that's being executed on a particular database? I remember using the SQL Server profiler, but I don't see it in SQL Server Management Studio after downloading SQL Server 2008 R2 Express. Where can I download that tool and install it? Do I need the full version of SQL Server 2008 in order to see this option?

Slow queries on SQL Server [closed]

Posted: 30 Jul 2013 12:40 PM PDT

We have SQL Server 2005. Our main table is the archive table, which has nearly 200 million rows in it. There are 2000 clients that connect to a service, and the service writes their information to the archive. We also have another service which reads the clients' information from the archive in batches, calculates additional information for each row, and writes the rows back in batches.

On the web side we have 100-200 users online at a time, and most of the queries depend on the archive table. I have built all possible indexes on the archive, and I'm using .NET Framework 3.5, connecting to the database with a standard connection string.

The problem is that when a user requests a one-day report, it returns in 10-15 seconds for 50 rows. The one-month reports take more time, like 2-3 minutes for 5k-6k rows. I am not a DBA, but we don't have one, so I am expected to tackle this problem. Can you make any suggestions?

Thanks.

What is the easiest way to get started using databases with real data?

Posted: 30 Jul 2013 01:22 PM PDT

I have a project that could benefit from using a database, but I have no experience with databases, don't have access to a server, and have relatively little experience working with things living server-side.

If I'm going to have to tackle a learning curve, I'd prefer to learn something with broad applicability (such as SQL) but would settle for learning something like Access if it is sufficiently powerful for the task I'm currently trying to tackle. Of course, I'd also rather not drop $150 on Access if it can be helped since I'm just tinkering.

I've downloaded LibreOffice Base as well as something called SQLiteBrowser, but I wanted to check first before I invest time learning those particular applications and their flavors of SQL whether those tools will be sufficient for what I want to do.

I want to be able to:

  • import data from a CSV or from Excel
  • run queries that equate to "select x where this is that and this contains that and any of these contain that"
  • write(?) a new field which indicates those results which match a given query

Again, I'm willing to learn, but it would be nice not to have to learn a bunch of intermediate stuff about IT before I can focus on learning databases and, if necessary, the particulars of a given application.

How to retrieve the definition behind statistics added to tables

Posted: 30 Jul 2013 12:38 PM PDT

Is there a way to programmatically retrieve the definition of each statistics object added to table columns and indexes, for both user-added and system-created statistics? There are many statistics like '__WA_Sys_*' that are added by SQL Server.

I need to re-write some of them and add more, but there are too many to do them manually with Management Studio.
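The column lists behind every statistics object (auto-created __WA_Sys_* ones included) can be pulled from the catalog views; a sketch:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       s.auto_created,
       c.name                   AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns AS c ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY table_name, stats_name, sc.stats_column_id;

DBCC SHOW_STATISTICS then gives the histogram details for any one of them.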

Get failed SQL Server agent job?

Posted: 30 Jul 2013 01:38 PM PDT

How do I get a list of jobs that failed last night? I have only found the following PowerShell script. What is the SQL equivalent?

dir $psPath\Jobs | % { $_.EnumHistory() } | ? { $_.RunStatus -ne 1 }  
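A rough T-SQL equivalent reads msdb's job history; run_status = 0 means 'failed' and step_id = 0 is the job outcome row (run_date is an int in yyyymmdd form):

SELECT j.name, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON h.job_id = j.job_id
WHERE h.run_status = 0
  AND h.step_id = 0
  AND h.run_date >= CAST(CONVERT(char(8), DATEADD(DAY, -1, GETDATE()), 112) AS int);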

Where to start learning to understand SQL Server architecture and internals?

Posted: 30 Jul 2013 01:01 PM PDT

I have a basic knowledge of T-SQL and SQL Server components. My goal is to master my skills and learn everything about SQL Server, to eventually become a DBA. I would like to understand deep SQL Server internals: how exactly everything works, when, and why. Could you please suggest a good place to start? IMHO it's not something you can pick up from programming work alone.

Upgrading Instances with Mirroring

Posted: 30 Jul 2013 12:24 PM PDT

If you want to upgrade or install a patch on two separate instances that house both the principal and mirrored database, how can you go about that? If the database that is being mirrored needs to be available 24/7 and you don't have a window to go offline, what is the best means to do this?

EDIT: this is with SQL Server.

Are heaps considered an index structure or are they strictly a table structure without index?

Posted: 30 Jul 2013 12:32 PM PDT

Inspired by this post: https://twitter.com/#!/SQLChicken/status/102930436795285505

Heaps: Are they considered an index structure or are they strictly a table structure without index?

The smallest backup possible ... with SQL Server

Posted: 30 Jul 2013 12:35 PM PDT

Daily we ship our SQL Server backups across the WAN. We need to minimize the size of these backups so it does not take forever.

We don't mind if our backup process takes a bit longer; as it stands we need to move 30 GB of compressed backup across the WAN, which takes over 10 hours.

There are 2 options we have to get smaller daily backups.

  1. Log shipping, which would mean we would have to restructure DR process.
  2. Strip information out of the db and rebuild on the other side (drop non clustered indexes, pack clustered indexes at 100% - rebuild on the other side)

Both would involve a fair amount of work from our part. We are using SQL Server 2008 pro, all backups are compressed.

Are there any commercial products that can give us similar backup size to option (2)?

Is there a comprehensive script out there that will allow us to accomplish (2)? (handling indexed views, filtered indexes, foreign keys and so on)

Location of Maintenance Plan's Back Up Database Tasks information (SQL Server 2005)

Posted: 30 Jul 2013 05:52 PM PDT

I would like to know where, in the database or on the file system, the information about the Back Up Database Task in a Maintenance Plan is stored.

I can find the job in msdb.dbo.sysjobs, and I can find the subplan in msdb.dbo.sysmaintplan_subplans.

But I need to find where and how the Task is being stored.

Any help will be greatly appreciated.
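In SQL Server 2005 a maintenance plan is stored as an SSIS package inside msdb, so the Back Up Database Task definition lives in the package XML rather than in a relational row; a hedged place to look:

SELECT name, description, createdate
FROM msdb.dbo.sysdtspackages90;  -- the packagedata column holds the package itself

The task's settings sit inside that package data, which is why they don't appear in sysmaintplan_subplans.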

[SQL Server] Need assistance doing a PIVOT-like transformation on VARCHAR data.


Need assistance doing a PIVOT-like transformation on VARCHAR data.

Posted: 30 Jul 2013 02:40 PM PDT

I am trying to write a query using the table below. This table comes from a vendor-supplied system, so I can't modify it:

Item_ID, Tag
ITEM1, Blue
ITEM1, Warm
ITEM2, Green
ITEM3, Coarse
ITEM2, Fine

There is a maximum of four Tag records for one Item. I want to get the result set below:

Item_ID, TAG1, TAG2, TAG3, TAG4
ITEM1, Blue, Warm, NULL, NULL
ITEM2, Green, Fine, NULL, NULL
ITEM3, Coarse, NULL, NULL, NULL

I have done this previously by creating a temp table with an ID column and the structure of the second table, inserting the distinct Item_IDs, then using a WHILE loop to iterate through the first table, updating the rows in the second where the second table's Item_ID matches the first's and there isn't already a Tag field holding the value for that Item_ID. The problem with this solution is that it loops through the first table and, inside that loop, loops through the second table and updates where needed, which is very resource intensive. I've looked at the PIVOT command, but I can't find any samples with varchar values (the samples I've seen all have some sort of aggregation/count which I can't see how to adapt). Does anyone know a more efficient way of doing this transformation?
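One set-based sketch using conditional aggregation (ROW_NUMBER needs SQL 2005+; the source table name is an assumption, and the ORDER BY Tag inside ROW_NUMBER is arbitrary since the source has no ordering column):

WITH numbered AS (
    SELECT Item_ID, Tag,
           ROW_NUMBER() OVER (PARTITION BY Item_ID ORDER BY Tag) AS rn
    FROM dbo.ItemTags
)
SELECT Item_ID,
       MAX(CASE WHEN rn = 1 THEN Tag END) AS TAG1,
       MAX(CASE WHEN rn = 2 THEN Tag END) AS TAG2,
       MAX(CASE WHEN rn = 3 THEN Tag END) AS TAG3,
       MAX(CASE WHEN rn = 4 THEN Tag END) AS TAG4
FROM numbered
GROUP BY Item_ID;

PIVOT itself also works here with MAX(Tag) as the aggregate; the aggregate doesn't have to be a COUNT or SUM.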

SQL Server 2005 and SQL Server 2012

Posted: 30 Jul 2013 09:56 AM PDT

We are about to start a project to upgrade to IPv6. I need advice on how SQL Server 2005 and SQL Server 2012 are affected, and whether there is a difference between them. What needs to be tested and/or changed? I know the question is vague, but any help will be appreciated. Thanks in advance.

SQL Server 2005 system databases

Posted: 30 Jul 2013 09:50 AM PDT

SQL Server 2005: I was wondering if I can point the system databases to another set of system database files. For example, at start-up they point to a set on D:\ and I want them to point to a set on F:\. Is this possible, and what steps should I take? Once they point to F:\ I can delete the files on D:\. If any other info is needed, let me know.
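For msdb, model, and tempdb the documented route is ALTER DATABASE ... MODIFY FILE (run it, move the files while the service is stopped, then restart); master is different, since its path is changed via the -d and -l startup parameters in Configuration Manager. A sketch for one database (the F:\ paths are assumptions; verify the logical names with sys.master_files):

ALTER DATABASE msdb
MODIFY FILE (NAME = MSDBData, FILENAME = N'F:\SQLData\MSDBData.mdf');
ALTER DATABASE msdb
MODIFY FILE (NAME = MSDBLog, FILENAME = N'F:\SQLData\MSDBLog.ldf');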

need help with applying function within Query

Posted: 30 Jul 2013 12:48 AM PDT

Hi all, it's my first post here! Glad to be here. Hopefully someone can help me with the following. If you look at the attached screenshot, you will see the 3 SQL tables I am working with, along with the query I need to run on these tables. The problem is that in the CURRENTS table, the first column (Timestamp_ID) is in ticks format instead of DATETIME, so in my query I need to convert the values from that column to DATETIME. Looking at the database, I saw 2 functions (see attached notepads) that may have been created to do exactly that: converting that column to DATETIME. The problem is that I don't know how I would use them in my query. Can anyone help? Thanks in advance.
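Without seeing those two functions, one generic fallback: if Timestamp_ID holds .NET-style ticks (100-nanosecond units since 0001-01-01), the conversion can be done inline; 621355968000000000 is the tick count at the Unix epoch, and the tick format itself is an assumption here:

SELECT DATEADD(second,
               (Timestamp_ID - 621355968000000000) / 10000000,  -- stays within int range until 2038
               '1970-01-01') AS reading_time
FROM CURRENTS;

If the existing functions already do this, calling one is just SELECT dbo.FunctionName(Timestamp_ID) in place of the raw column (function name hypothetical).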
