Tuesday, June 18, 2013

[SQL Server 2008 issues] String or binary data truncation.



String or binary data truncation.

Posted: 17 Jun 2013 05:27 PM PDT

Can someone help me determine why my @TestTable data does not roll over into my @ResultTable data? I have reviewed all of the data types but cannot figure out why I am not getting any data in the ResultTable. I also had trouble putting together sample input data for #mytable for this posting. Anyhow, here is what I have:

[code="sql"]--===== If the test table already exists, drop it
IF OBJECT_ID('TempDB..#mytable','U') IS NOT NULL
    DROP TABLE #mytable

--===== Create the test table
CREATE TABLE #mytable
(
    Year varchar(4), Month varchar(2), Day varchar(2),
    Hour varchar(2), Minute varchar(2), Second varchar(2),
    Milisecond varchar(3),
    Ask float, Bid float, AskVolume float, BidVolume float,
    Item char(2)
)

INSERT INTO #mytable
    (Year, Month, Day, Hour, Minute, Second, Milisecond, Ask, Bid, AskVolume, BidVolume, Item)
SELECT '2013', '05', '04', '10', '12', '45', '200', '1.23054', '1.23052', '1', '3', 'N' UNION ALL
SELECT '2013', '05', '04', '10', '12', '46', '400', '1.23055', '1.23052', '1', '3', 'N' UNION ALL
SELECT '2013', '05', '04', '10', '12', '46', '500', '1.23055', '1.23052', '1', '3', 'N' UNION ALL
SELECT '2013', '05', '04', '10', '12', '47', '400', '1.23055', '1.23052', '1', '3', 'N' UNION ALL
SELECT '2013', '05', '04', '10', '12', '49', '400', '1.23055', '1.23052', '1', '3', 'N'

DECLARE @ResultTable TABLE
    (curRN bigint, prevRN bigint, curBid decimal(6,5), prevBid decimal(6,5),
     Item char, Concat varchar(20));

DECLARE @TestTable TABLE
    (Year varchar(4), Month varchar(2), Day varchar(2), Hour varchar(2),
     Minute varchar(2), Second varchar(2), Milisecond varchar(3),
     Bid decimal(6,5), Item char(2), Concat varchar(20));

INSERT INTO @TestTable
    (Year, Month, Day, Hour, Minute, Second, Milisecond, Bid, Item, Concat)
SELECT Year, Month, Day, Hour, Minute, Second, Milisecond, Bid, Item,
       (Year+Month+Day+Hour+Minute+Second+Milisecond) AS Concat
FROM #mytable;

WITH cte AS
(
    SELECT Year, Month, Day, Hour, Minute, Second, Milisecond, Bid, Item, Concat,
           RN = ROW_NUMBER() OVER (ORDER BY Year, Month, Day, Hour, Minute,
                                   Second, Milisecond, Bid, Item, Concat)
    FROM @TestTable
)
INSERT INTO @ResultTable (curRN, prevRN, curBid, prevBid, Item, Concat)
SELECT cur.RN AS 'Cur-RN', prev.RN AS 'Prev-RN',
       cur.Bid AS 'Cur-Bid', prev.Bid AS 'Prev-Bid',
       cur.Concat,
       CASE WHEN (cur.Bid - prev.Bid) > 0 THEN 'U'
            WHEN (cur.Bid - prev.Bid) = 0 THEN 'F'
            WHEN (cur.Bid - prev.Bid) < 0 THEN 'D'
            ELSE 'N'
       END
FROM cte cur
JOIN cte prev ON cur.RN = prev.RN + 1

SELECT *
FROM @ResultTable[/code]

(249616 row(s) affected)
Msg 8152, Level 16, State 14, Line 7
String or binary data would be truncated.
The statement has been terminated.
(0 row(s) affected)
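
A likely cause, offered as a hedged reading of the code above: in @ResultTable the Item column is declared as plain char, which defaults to char(1), and the INSERT's select list maps cur.Concat (a varchar(20)) into that Item column, while the single-character CASE result lands in Concat. A sketch of a fix, widening Item and mapping the columns in declared order:

[code="sql"]-- Item was declared as plain char (= char(1)); widen it:
DECLARE @ResultTable TABLE
    (curRN bigint, prevRN bigint, curBid decimal(6,5), prevBid decimal(6,5),
     Item char(2), Concat varchar(20));

-- ...and feed the CASE result to Item, the timestamp string to Concat:
INSERT INTO @ResultTable (curRN, prevRN, curBid, prevBid, Item, Concat)
SELECT cur.RN, prev.RN, cur.Bid, prev.Bid,
       CASE WHEN cur.Bid - prev.Bid > 0 THEN 'U'
            WHEN cur.Bid - prev.Bid = 0 THEN 'F'
            WHEN cur.Bid - prev.Bid < 0 THEN 'D'
            ELSE 'N' END,
       cur.Concat
FROM cte cur
JOIN cte prev ON cur.RN = prev.RN + 1;[/code]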

SQL Command

Posted: 17 Jun 2013 07:12 PM PDT

What is the command to find out how many users are using table emp at this moment?
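
A hedged sketch using sys.dm_tran_locks, which counts the sessions currently holding or requesting locks on the table (assuming dbo.emp in the current database):

[code="sql"]-- sessions currently holding or waiting on locks against dbo.emp
SELECT COUNT(DISTINCT request_session_id) AS active_sessions
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_database_id = DB_ID()
  AND resource_associated_entity_id = OBJECT_ID('dbo.emp');[/code]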

Need Sub-Query ?

Posted: 17 Jun 2013 07:11 PM PDT

[code="sql"]CREATE TABLE Table1 (Sno int, Sname varchar(20));

INSERT INTO Table1 VALUES (1,'a'), (2,'b'), (1,'c'), (2,'d');

SELECT * FROM Table1;[/code]

[code="plain"]Sno Sname
--- -----
1   a
2   b
1   c
2   d[/code]

I want the output as the following by using sub-queries:

[code="plain"]Sno Name1 Name2
--- ----- -----
1   a     c
2   b     d[/code]
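
A hedged sketch using ROW_NUMBER and conditional aggregation instead of correlated sub-queries (assumes at most two names per Sno):

[code="sql"]WITH numbered AS (
    SELECT Sno, Sname,
           ROW_NUMBER() OVER (PARTITION BY Sno ORDER BY Sname) AS rn
    FROM Table1
)
SELECT Sno,
       MAX(CASE WHEN rn = 1 THEN Sname END) AS Name1,
       MAX(CASE WHEN rn = 2 THEN Sname END) AS Name2
FROM numbered
GROUP BY Sno;[/code]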

SSRS - Login failed for user error

Posted: 26 Aug 2009 06:03 AM PDT

Hello! I hope that someone may be able to help, as I've been unsuccessful in getting this to work.

I have several reports (using SSRS 2008) that use the same shared data source. The data source is set up to use Stored Credentials with a Windows account/password. When I updated the data source through Report Manager and entered my username/password, all is OK until I click the 'Use as Windows credentials when connecting to the database' box. At that point, I get an error 'The password is not valid. Please retype the password.' I know that my password is correct, as I've been able to use it successfully in other places.

When I run any reports using this data source, I get the following error (probably due to the fact that the report is trying to run as a SQL account):

An error occurred during client rendering.
An error has occurred during report processing. (rsProcessingAborted)
Cannot create a connection to data source 'XXXX'. (rsErrorOpeningConnection)
Login failed for user 'XXXX\XXXX'.

where the X's are my data source and domain\account.

I have even tried deleting and redeploying, to no avail. The strange thing is that I have another data source set up in the same manner that is working OK... unless of course I untick the above-mentioned box, in which case I get the same error about the password being invalid.

Thanks!
Danielle

Table Partition

Posted: 17 Jun 2013 06:41 PM PDT

Hi All,

I have partitioned five tables with the same date range for all tables (with a single partition function and a single partition scheme). Now I want to remove the partitioning from one table among the five (the remaining four tables should stay partitioned). How can I remove the partitioning from that single table without losing any data? I have 20 million rows in that table.

Note: While creating a partition, is it possible to create a nonclustered index on the table's partitioning column? If it is possible, are there any impacts on the partition?

Partitioning in SQL Server 2008

Posted: 17 Jun 2013 01:27 AM PDT

Hi All,

I have got a few queries on partitioning in SQL Server:

- Why should we go for partitioning in SQL Server?
- How do we decide on which column the table should get partitioned?
- Can I place a few tables in the db on partitions, and the rest on Primary? Or should I place the rest of the tables on a non-partitioned filegroup?
- Once I have partitioned my db, what should I do on my db server, i.e., do I need to run any DBCC commands etc.?
- If not partitioning, do we have any other options through which we can improve the performance of a db?

Regards,
Sai Viswanath

CTE Problem: by Shubham Saxena

Posted: 17 Jun 2013 06:01 PM PDT

Dear friends,

I need to concatenate a char within a CTE, for example:

[code="sql"]WITH cteexp (q, col) AS
(
    SELECT CAST(1 AS int) p, 'X' AS col
    UNION ALL
    SELECT CAST(q AS int) + 1,
           CAST(col AS nvarchar(50)) + CAST('p' AS nvarchar(50)) p
    FROM cteexp
    WHERE CAST(q AS int) < 100
)
SELECT * FROM cteexp[/code]

but I am getting the error:

Msg 240, Level 16, State 1, Line 1
Types don't match between the anchor and the recursive part in column "col" of recursive query "cteexp".

Kindly suggest where I am wrong and what the solution is.

Regards,
Shubham Saxena
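
The usual fix, as a hedged sketch: a recursive CTE requires the anchor and recursive members to return exactly the same type, so cast the anchor's column to the type the recursive part produces (nvarchar(100) here, wide enough for 100 iterations):

[code="sql"]WITH cteexp (q, col) AS
(
    SELECT CAST(1 AS int), CAST('X' AS nvarchar(100))
    UNION ALL
    SELECT q + 1, CAST(col + N'p' AS nvarchar(100))
    FROM cteexp
    WHERE q < 100
)
SELECT * FROM cteexp;[/code]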

Group by Rollup Output Format

Posted: 17 Jun 2013 04:08 PM PDT

Dear,

I execute the following query and get the result:

[code="sql"]SELECT ISNULL([State],'Whole State') AS [State],
       ISNULL(City,'All City') AS City,
       SUM([Population]) AS [Population]
FROM tblPopulation
GROUP BY ROLLUP([State], City)[/code]

[code="plain"]State       City        Population
----------- ----------- ----------
Delhi       East Delhi     9
Delhi       North Delhi    5.5
Delhi       South Delhi    8
Delhi       West Delhi     7.5
Delhi       All City      30
Karnatak    Bangalore      9.5
Karnatak    Belur          2.5
Karnatak    Manipal        1.5
Karnatak    All City      13.5
Maharastra  Mumbai        30
Maharastra  Nagpur        11
Maharastra  Nashik         6.5
Maharastra  Pune          20
Maharastra  All City      67.5
Whole State All City     111[/code]

You see in the [State] column, [b]Delhi[/b] is repeated 5 times, [b]Karnatak[/b] is repeated 4 times and [b]Maharastra[/b] is repeated 5 times. But I require that Delhi, Karnatak and Maharastra each appear once, on the top row of their group. My desired output would look like the following:

[code="plain"]State       City        Population
----------- ----------- ----------
Delhi       East Delhi     9
            North Delhi    5.5
            South Delhi    8
            West Delhi     7.5
            All City      30
Karnatak    Bangalore      9.5
            Belur          2.5
            Manipal        1.5
            All City      13.5
Maharastra  Mumbai        30
            Nagpur        11
            Nashik         6.5
            Pune          20
            All City      67.5
Whole State All City     111[/code]

Please help me to do it.

Rgds,
Akbar
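
This kind of formatting is usually left to the presentation layer, but as a hedged T-SQL sketch you can blank the repeats with ROW_NUMBER over the rollup result:

[code="sql"]WITH r AS (
    SELECT ISNULL([State], 'Whole State') AS [State],
           ISNULL(City, 'All City') AS City,
           SUM([Population]) AS [Population],
           GROUPING([State]) AS gs, GROUPING(City) AS gc
    FROM tblPopulation
    GROUP BY ROLLUP([State], City)
)
SELECT CASE WHEN ROW_NUMBER() OVER (PARTITION BY [State]
                                    ORDER BY gc, City) = 1
            THEN [State] ELSE '' END AS [State],
       City, [Population]
FROM r
ORDER BY gs, [State], gc, City;[/code]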

Need Create View Script for all tables

Posted: 21 Aug 2012 06:37 PM PDT

I manage a SQL 2008 R2 production environment for an application. We have a new requirement to create views (simple: SELECT * FROM tablename) for all existing tables (182 tables) in the database. We will then grant access to these views for Reporting Services users to create their reports, so as not to affect database performance for app users.

[b]Can someone provide me a script which generates 'CREATE VIEW' scripts for all existing user tables in the database?[/b]

Thanks in advance.
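
A hedged sketch that generates one CREATE VIEW statement per user table (the vw_ prefix and Reporting schema are assumptions); copy the result set into a query window and run it:

[code="sql"]SELECT 'CREATE VIEW Reporting.vw_' + t.name +
       ' AS SELECT * FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
ORDER BY s.name, t.name;[/code]

Note that SELECT * in a view is bound at creation time; columns added to a table later won't appear until the view is refreshed or recreated.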

Trying to understand this SQL Query

Posted: 17 Jun 2013 09:46 AM PDT

Hi. I am fairly new to SQL Server and have come across this query at work, and I am trying to understand the logic here. Is there a tidier way to rewrite this query? I am not even sure whether it is returning the desired unique results or a Cartesian product. Anyway, professionals, here is the code I am trying to work with:

[code]SELECT DISTINCT
    a1.Netbios_Name0, c1.SerialNumber0, a1.Operating_System_Name_and0,
    b1.Publisher0, b1.DisplayName0, b1.Version0, b1.InstallDate0,
    c1.TimeStamp
FROM v_R_System a1
INNER JOIN v_add_remove_programs b1 ON a1.ResourceID = b1.ResourceID
INNER JOIN v_GS_PC_BIOS c1 ON a1.ResourceID = c1.ResourceID
LEFT OUTER JOIN v_GS_CCM_RECENTLY_USED_APPS d1
    ON b1.ResourceID = d1.ResourceID
    AND b1.GroupID = d1.GroupID
GROUP BY
    a1.Netbios_Name0, c1.SerialNumber0, a1.Operating_System_Name_and0,
    b1.Publisher0, b1.DisplayName0, b1.Version0, b1.InstallDate0,
    c1.TimeStamp
ORDER BY 1;[/code]
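
For what it's worth, every selected column is already in the GROUP BY, so the DISTINCT and the GROUP BY are de-duplicating twice, and the left join to v_GS_CCM_RECENTLY_USED_APPS contributes no selected columns, so the de-duplicated result is the same without it. A hedged rewrite:

[code]SELECT DISTINCT
    a1.Netbios_Name0, c1.SerialNumber0, a1.Operating_System_Name_and0,
    b1.Publisher0, b1.DisplayName0, b1.Version0, b1.InstallDate0,
    c1.TimeStamp
FROM v_R_System a1
INNER JOIN v_add_remove_programs b1 ON a1.ResourceID = b1.ResourceID
INNER JOIN v_GS_PC_BIOS c1 ON a1.ResourceID = c1.ResourceID
ORDER BY a1.Netbios_Name0;[/code]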

Regarding how many times a USP is executed.

Posted: 17 Jun 2013 08:16 AM PDT

Is there any way to find out how many times a USP was executed and how much time it took to execute? Please advise.Thanks,SueTons.
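
A hedged sketch using sys.dm_exec_procedure_stats (SQL Server 2008 and later; the counters only cover plans still in cache and reset on instance restart):

[code="sql"]SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.execution_count,
       ps.total_elapsed_time / 1000 AS total_elapsed_ms,
       ps.total_elapsed_time / ps.execution_count / 1000 AS avg_elapsed_ms,
       ps.cached_time, ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC;[/code]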

SS2008R2 - Are Compressed DB Backups Restored as Regular DBs or Compressed DBs?

Posted: 17 Jun 2013 08:01 AM PDT

Under SQL Server 2008 R2, if when backing up a DB you select the option to compress the backup, then when that DB is restored, will it be restored such that it is (roughly) the same size as the original DB the backup was made from, or will it be restored as a compressed copy of the DB, meaning it will be as small as the compressed backup?

I have to send a 240GB DB over the wire, so I wanted to compress the backup and ideally reduce the total size of the backup that I have to send via secure FTP. Now that I've done this and started sending the DB, I'm seeing related posts/comments on the web that lead me to believe that the compress option is not for the compression of just the backup but of the DB. In other words, if I have a 240GB DB that I back up (with the compress option) and compress down to 1/10th that size, then when that backup is restored the resulting DB is not about 240GB but is instead about 1/10th that size, and that's because the restored DB is itself compressed.

Thoughts?

Thanks

dynamic sql question

Posted: 17 Jun 2013 06:47 AM PDT

I am new to dynamic SQL and I have the following from my SQL Profiler trace. My understanding was that the syntax was:

exec Stored_Procedure Parameters

I don't understand what the bold text below is. They do not look like parameters being sent into the stored procedure, but rather results coming back from the stored procedure. The rest of the items in the list look like parameters being passed into the stored procedure. Is that what the word OUTPUT is for with each of these? Is there a syntax document online somewhere? How far off am I?

declare @p1 int
set @p1=1
declare @p2 nvarchar(255)
set @p2=N'ng_add_sig_events: (Success), Sig Event Added.'
exec ng_add_sig_events [b]@po_result_code=@p1 output, @po_result_message=@p2 output,[/b]
@pi_practice_id=N'0011', @pi_enterprise_id=N'00021',
@pi_sig_event_id='A9D57824-638', @pi_source1_id='557D78F4C',
@pi_source2_id=NULL, @pi_source3_id=NULL, @pi_source4_id=NULL,
@pi_event_source_type=N'4', @pi_sig_id=N'38',
@pi_sig_msg=N'Employer Added', @pi_pre_mod=N'<none>', @pi_post_mod=N'Foo',
@pi_user_id=154, @pi_group_id=NULL, @pi_check_sig_admin_ind=N'N',
@pi_create_timestamp_tz=0
select @p1, @p2
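
For reference, a minimal hedged example of OUTPUT parameters: the caller declares variables, passes them with the OUTPUT keyword, and reads whatever the procedure assigned after the call, which is exactly what the trailing select @p1, @p2 does in the trace above:

[code="sql"]CREATE PROCEDURE dbo.usp_demo_output
    @pi_input       int,
    @po_result_code int OUTPUT
AS
BEGIN
    SET @po_result_code = @pi_input * 2;  -- the procedure writes the value
END
GO
DECLARE @rc int;
EXEC dbo.usp_demo_output @pi_input = 21, @po_result_code = @rc OUTPUT;
SELECT @rc;  -- 42: the value came back from the procedure[/code]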

Database Mail - Could not connect (no such host)!

Posted: 17 Jun 2013 03:16 AM PDT

Hi all,

I tried to find a fix on here for my issue, but everything I've found regarding the above problem my team and I have already eliminated, so I was wondering if someone else could suggest something.

I've just installed SQL Server 2008 R2 on a new virtual server (VMware) running Windows Server 2008 R2 Enterprise, which was installed from an image of another virtual server on which I also installed SQL Server and which performs perfectly. The SQL Server install has gone fine; however, I've set up Database Mail using exactly the same settings and configuration as the previous server, but I keep getting the following error/exception message when I try to send a test email through SSMS:

"The mail could not be sent to the recipients because of the mail server failure....... Exception message: Could not connect to mail server (No such host is known)"

Things we've looked into:
- Correct spelling of everything
- Tested port 25 is open
- No antivirus on the virtual server to block connections
- Antivirus on the Exchange server allows connections via port 25
- Other servers with the same Database Mail setup can send and receive test emails
- Can telnet to the Exchange server from the virtual server
- Can ping the Exchange server from the virtual server

We're at a bit of a loss; can anyone suggest anything please? :ermm:

How to identify which user is running a query and how long it has been running?

Posted: 17 Jun 2013 06:44 AM PDT

How do I identify which user is running a query and how long it has been running?
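
A hedged sketch using sys.dm_exec_requests and sys.dm_exec_sessions (SQL Server 2005 and later):

[code="sql"]SELECT s.login_name, s.host_name, r.session_id,
       r.start_time,
       DATEDIFF(SECOND, r.start_time, GETDATE()) AS running_seconds,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;[/code]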

When compressing a DB, will it cause growth first?

Posted: 17 Jun 2013 04:16 AM PDT

I know the subject isn't the clearest, but I think it'd be too long to put the entire question...

I'm looking at enabling compression on a table in one DB. The table is ~313 million records, and has one clustered index and 3 non-clustered. I'm planning to initially enable page-level compression on the CI, as I think that's going to give me the most savings (also backed up some by the estimated space savings SP).

What I'm concerned about is *how* the compression is accomplished. Not the mechanics of the compression (I know page level is actually row-level first, then the pages), but the how of it being done. Basically, if I enable this feature, will SQL need to hit the transaction log hard, possibly causing it to grow?

The DB in question is in Simple recovery, but the disks the DB and log live on are somewhat slim on space (because of this DB, actually). In a previous posting [url=http://www.sqlservercentral.com/Forums/Topic1452717-391-1.aspx](here)[/url] I had laid out what I was planning to do to recover some disk space from this table. The table in question had some "issues" that the dev and I had to work out (no CI, no PK... and he was trying to delete a couple million rows and it was taking days...), and in resolving those, we didn't notice the table was originally page-compressed...

Yeah, long story...

Thanks,
Jason
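
For reference, a hedged sketch of the two statements involved (the table name is an assumption); the rebuild does generate log, so the log can grow while it runs even in Simple recovery:

[code="sql"]-- estimate the savings first
EXEC sys.sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'BigTable',
     @index_id = 1, @partition_number = NULL,
     @data_compression = 'PAGE';

-- then compress the clustered index
ALTER TABLE dbo.BigTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);[/code]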

Change Tracking and Log Shipping

Posted: 17 Jun 2013 05:37 AM PDT

I am trying to test this, and it is not working, so I think I have the answer. Here is what I am trying to do: on the primary database, set up Change Tracking. Log ship this to a secondary server in read-only mode. Roll transactions forward to the secondary once a day, and use Change Tracking from the secondary read-only database for our ETL process. Thoughts?

Back up failed

Posted: 17 Jun 2013 12:30 AM PDT

Hi All,

I have one database backup scheduled every night. It went well every day, but today it failed. I checked the backup location and there is a backup file, but Job History says it failed.

[code="plain"]Message
Executed as user: XXXXXXXXX. ...00.4035.00 for 32-bit  Started: 3:00:04 AM
Progress: 2013-06-17 03:00:07.73  Source: {B73199AA-F4BC-4962-AFC1-B033F62D7BCD}
    Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp".: 100% complete  End Progress
Progress: 2013-06-17 03:00:07.93  Source: Maintenance Cleanup Task
    Executing query "EXECUTE master.dbo.xp_delete_file 0,N'\\ServerName".: 100% complete  End Progress
Progress: 2013-06-17 05:07:28.18  Source: Back Up Database Task
    Executing query "BACKUP DATABASE [AAAAA] TO DISK = N'\\ServerName-".: 100% complete  End Progress
Error: 2013-06-17 05:08:07.41  Code: 0xC0024104
    Source: Reporting Task for subplan-{7F0A750F-B2C0-42DD-966F-7A70E4D2FC93}
    Description: The Execute method on the task returned error code 0x80131904
    (Timeout expired. The timeout period elapsed prior to completion of the ...
    The package execution fa...  The step failed.[/code]

Please help me to understand.

How to find dependencies in dynamic sql queries

Posted: 17 Jun 2013 12:25 AM PDT

I think it is far-fetched, because after searching the internet for days I found nothing. I want to find all the objects referring to the columns in a table. I have a fantastic working query for this, but it does not list objects that use dynamic SQL. Is there any way to find dependencies in dynamic SQL queries?
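
A pragmatic fallback, as a hedged sketch: search the module text itself, since a column name inside a dynamic SQL string is just a string literal that the catalog dependency views cannot see:

[code="sql"]SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%MyColumnName%';  -- hypothetical column name[/code]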


generate scripts in sql server

Posted: 16 Jun 2013 09:39 PM PDT

I have some doubts related to the SQL Server 2008 R2 Express 'Generate Scripts' option.

1) If I select 'script entire database and all database objects', are all objects such as the database, permissions, table structures and indexes created?

2) If I choose 'select specific database objects' and then select only tables (all tables), are the indexes created or only the tables? Are permissions for the tables set as well? (When I tested, clustered indexes were created with the primary key, but non-clustered ones were not, so I am not sure what this option does.)

3) If I select both tables and users under 'select specific database objects', are the tables and their permissions created along with the users?

What is the use of PowerShell in SQL Server 2008?

Posted: 16 Jun 2013 11:14 PM PDT

What is the use of PowerShell in SQL Server 2008?

PhysicalDisk perfmon counters versus LogicalDisk counters

Posted: 16 Jun 2013 10:52 PM PDT

Hi,

We have a physical disk which is partitioned into 4 drives: D, E, F, J. I am monitoring the individual logical disk idle times and the physical disk idle time. These are a day's average values:

[code="plain"]Counter                                  Value
LogicalDisk(D:)\% Idle Time              84.92676725
LogicalDisk(E:)\% Idle Time              87.60788474
LogicalDisk(F:)\% Idle Time              94.81640539
LogicalDisk(J:)\% Idle Time              98.49689629
PhysicalDisk(1 D: E: F: J:)\% Idle Time  74.4477885[/code]

I noticed the PhysicalDisk value is much lower than the average of the LogicalDisk values, even though the LogicalDisk values look good. The PhysicalDisk value is supposed to be the average of the LogicalDisk values, right? What causes this variation?

Thank you

script in sql server 2008 for generating .ndf files

Posted: 16 Jun 2013 10:18 PM PDT

Can anyone give me a script for generating .ndf files in SQL Server 2008?
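
A hedged sketch (database name, filegroup name, path and sizes are all assumptions; an .ndf is just a secondary data file added with ALTER DATABASE):

[code="sql"]ALTER DATABASE MyDb ADD FILEGROUP SecondaryData;

ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Data2,
    FILENAME = 'D:\SQLData\MyDb_Data2.ndf',
    SIZE = 512MB, FILEGROWTH = 256MB
) TO FILEGROUP SecondaryData;[/code]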

How to remove comma and convert in to INT

Posted: 16 Jun 2013 09:40 PM PDT

Hi,

How do I remove the comma and convert this column into numeric(17, 2) format? I need to convert the column values to numeric(17, 2) NULL; the column currently has an nvarchar datatype in my source.

Please suggest.

[code="sql"]DROP TABLE dataconvert
GO
CREATE TABLE dataconvert (data nvarchar(15))

INSERT INTO dataconvert VALUES ('$0')
INSERT INTO dataconvert VALUES ('$40,000')
INSERT INTO dataconvert VALUES ('$2,350')[/code]

Regards,
KRaj
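
A hedged sketch: strip the $ and the commas with REPLACE, then convert:

[code="sql"]SELECT data,
       CONVERT(numeric(17, 2),
               REPLACE(REPLACE(data, '$', ''), ',', '')) AS data_numeric
FROM dataconvert;[/code]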

How to connect with SQL Server using http based API

Posted: 10 Jun 2013 11:01 PM PDT

Hi,

I am new to this. I want to connect SQL Server 2008 with a CRM called Salesforce (without using any existing tools). An expert suggested to me that it can be done using an [b]HTTP-based API of SQL Server 2008[/b]. I want to know whether an HTTP-based API of SQL Server is available and how it can be used. Or is there any other way by which I can connect with SQL Server (by coding)?

[b]Please help.[/b]

Thanks

Monday, June 17, 2013

[how to] Advice on NoSQL (Google App Engine NDB) Schema?



Advice on NoSQL (Google App Engine NDB) Schema?

Posted: 17 Jun 2013 08:52 PM PDT

Some background: I am very much a novice at anything database related, my knowledge caps at creating a very basic blog with MySQL. I am currently switching from PHP/MySQL to Python and as of now I am working in Google App Engine using their NDB Datastore, but eventually will move to a hosted service running MongoDB.

I am trying to setup a simple discography cataloging data structure, with Artist, Album, and Track, so far this is what I have been trying and it doesn't work, at all.

    class Track(ndb.Model):
        name = ndb.StringProperty(required=True)
        album = ndb.StringProperty(required=True)
        artists = ndb.JsonProperty(required=True)
        # ...other keys...

    class Album(ndb.Model):
        name = ndb.StringProperty(required=True)
        artists = ndb.JsonProperty(required=True)
        tracks = ndb.KeyProperty(kind=Track, repeated=True)
        # ...other keys...

    class Artist(ndb.Model):
        name = ndb.StringProperty(required=True)
        albums = ndb.KeyProperty(kind=Album, repeated=True)
        # ...other keys...

Now... What I originally had tried with Track, but got an error was:

    class Track(ndb.Model):
        name = ndb.StringProperty(required=True)
        album = ndb.KeyProperty(kind=Album)
        artists = ndb.KeyProperty(kind=Artist, repeated=True)
        # ...other keys...

However, I get errors like NameError: name 'Album' is not defined. Going with album = StringProperty() and artists = JsonProperty() makes it extremely hard to search the database.
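
One detail worth knowing, hedged since it comes from the NDB docs rather than from testing your code: ndb.KeyProperty also accepts the kind as a string, which sidesteps the forward-reference NameError entirely:

    class Track(ndb.Model):
        name = ndb.StringProperty(required=True)
        album = ndb.KeyProperty(kind='Album')             # string kind: no NameError
        artists = ndb.KeyProperty(kind='Artist', repeated=True)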

So, my question is... Looking at what I have now, I got the idea of reversing it, and I am wondering whether that's a good idea or whether I should find another way.

    class Artist(ndb.Model):
        name = ndb.StringProperty(required=True)
        # ...other keys...

    class Album(ndb.Model):
        name = ndb.StringProperty(required=True)
        artists = ndb.KeyProperty(kind=Artist, repeated=True)
        # ...other keys...

    class Track(ndb.Model):
        name = ndb.StringProperty(required=True)
        album = ndb.KeyProperty(kind=Album)
        artists = ndb.KeyProperty(kind=Artist, repeated=True)
        # ...other keys...

Where Artist doesn't care about what Album it has, Album just provides the Artist.key for the artists it has. Album doesn't care about what Track it has, Track provides the Album.key for what album it is on.

So when searching I can do...

    def get_album(album, artist):
        artist_key = ndb.Key(Artist, artist.lower())
        with_artist = Album.query(Album.artists == artist_key)
        return with_artist.filter(Album.name_lower == album.lower())

    def get_tracks(album, artist):
        album_key = ndb.Key(Album, ':'.join([artist.lower(), album.lower()]))
        return Track.query(Track.album == album_key)

Where Album.name_lower is a ComputedProperty(lambda self: self.name.lower()), which Artist and Track also have.

Need some good tutorials regarding RMAN, Data Guard, RAC and performance tuning

Posted: 17 Jun 2013 08:29 PM PDT

As I am very new to Oracle databases, I need some good tutorials, configuration files and books for ASM, OMF (Oracle Managed Files), Data Guard, RAC, troubleshooting & performance tuning. Thanks.

To select from all tables inside the schema in PostgreSQL

Posted: 17 Jun 2013 07:19 PM PDT

Is it possible to select from all tables inside the schema? I got all the table names from

select table_name from information_schema.tables

but I am unable to use it to do my query.
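
There is no single statement that selects from every table, but you can generate one SELECT per table and then run the generated statements, as in this hedged sketch (format() exists from PostgreSQL 9.1):

    SELECT format('SELECT * FROM %I.%I;', table_schema, table_name)
    FROM information_schema.tables
    WHERE table_schema = 'public'
      AND table_type = 'BASE TABLE';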

Error while opening Oracle 11g Database

Posted: 17 Jun 2013 05:41 PM PDT

I have a problem with oracle 11g database,

I shut down my computer this morning normally, but in the evening when I tried to connect to my database I found a problem. I cannot connect as a normal user, so I tried to connect as sysdba to shut down the database and remount it (this works normally). Now when I try to open the database I get this error:

ERROR in line 1 : ORA-00600: Internal error code, arguments : [kcratr_nab_less_than_odr], [1], [90], [13770], [13771], [], [], [], [], [], [], []

Can anyone help me, please?

What are the basic job functions required of a remote DBA for Oracle and SQL Server?

Posted: 17 Jun 2013 05:38 PM PDT

When looking for a remote DBA to manage and support your database infrastructure what are the basic job functions that should be expected?

On the flip side, what would be considered outside the core job functions for a remote DBA?

I am looking for general high-level functions that would be performed in a mixed-server database environment (mostly SQL Server and Oracle) containing custom and COTS (3rd party) applications/databases.

I am also looking for what would be an appropriate Service Level Agreement for performing those various job functions. Of course this also depends on the type of system being managed; for example, our core operations database running 24x7 is an HA system, so response time is key.

Full disclosure: I am on a team creating a Statement of Work for remote DBA services, and I am trying to stop technical managers from simply googling "DBA Job Functions" and throwing those into the SOW.

Hazards of Upgrading SQL Server Express 2008 to R2

Posted: 17 Jun 2013 06:44 PM PDT

A client of mine reached the 4GB database limit on a SQL Server 2008 Express. As a test I installed the R2 version on another machine and restored a backup of the database.

I'd like to upgrade the actual server itself, though I'm worried about the pitfalls and consequences, up to and including compromising the system's stability.

This is because additional critical applications (including an ERP I don't fully understand) are running on this machine, which would require calling another company to re-install/set up.

I've given the client 3 options:

  1. Get new hardware in order to host the R2 server there (independent of current server).
  2. Use current setup that was a test.
  3. Take the risk and install the update to R2 on the actual server, accepting the 5% (or however much it is) chance of anything in the system going unstable.

I've also thought about cloning the whole computer using Clonezilla (Ghost) etc., but as I'm supporting remotely this would require me to travel ~500km.

Also, this is a serious problem as they cannot add new records to this database.

Error while restoring a Database from an SQL dump

Posted: 17 Jun 2013 07:22 PM PDT

I am extremely new to MySQL and am running it on Windows. I am trying to restore a Database from a dumpfile in MySQL, but I get the following error:

    $ mysql -u root -p -h localhost -D database -o < dump.sql
    ERROR: ASCII '\0' appeared in the statement, but this is not allowed unless
    option --binary-mode is enabled and mysql is run in non-interactive mode.
    Set --binary-mode to 1 if ASCII '\0' is expected. Query: 'SQLite format 3'.

I tried $ mysql -u root -p -h localhost -D database --binary-mode -o < dump.sql but this gave me ERROR at line 1: Unknown command '\☻'. It is a 500 MB dump file, and when I view its contents using gVim, all I can see is expressions and data which are not comprehensible. Also, when I try to copy contents from the file to post here, all I can copy is: SQLite format 3. This seems strange.
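
The Query: 'SQLite format 3' part of the first error is the giveaway: every SQLite database file begins with the 16-byte header string SQLite format 3, so this looks like a SQLite database file rather than a MySQL dump. A hedged sketch of the usual route:

    # dump the SQLite file to SQL text, then feed that to MySQL
    # (the result may still need edits; the two SQL dialects differ)
    sqlite3 dump.sql .dump > dump_as_sql.txt
    mysql -u root -p -h localhost -D database < dump_as_sql.txt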

SSIS Rollback Package

Posted: 17 Jun 2013 04:44 PM PDT

I have several data flow tasks in my package that depend on other data flow tasks. For testing reasons I want to run the entire package and then roll it back after it completes. How do I do it?

Does Database Detach or Offline Clear the Cache Quickest?

Posted: 17 Jun 2013 06:03 PM PDT

A buddy of mine told me today that instead of bouncing SQL Server, I could simply detach and then re-attach a database, and this action would clear the given database's pages and plans from cache. I disagreed and provide my evidence below. If you disagree with me or have a better rebuttal, then by all means supply it.

I am using AdventureWorks2012 on this version of SQL Server:

    SELECT @@VERSION;

    Microsoft SQL Server 2012 - 11.0.2100.60 (X64)
    Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)

Having loaded the database, I run the following query:

Firstly, run Jonathan K's AW fattening script found here:

AW Get Fat

    ---------------------------
    -- Step 1: Bpool Stuff?
    ---------------------------
    USE [AdventureWorks2012];
    GO

    SELECT
        OBJECT_NAME(p.object_id) AS [ObjectName]
      , p.object_id
      , p.index_id
      , COUNT(*) / 128 AS [buffer size(MB)]
      , COUNT(*) AS [buffer_count]
    FROM
        sys.allocation_units AS a
        INNER JOIN sys.dm_os_buffer_descriptors AS b
            ON a.allocation_unit_id = b.allocation_unit_id
        INNER JOIN sys.partitions AS p
            ON a.container_id = p.hobt_id
    WHERE
        b.database_id = DB_ID()
        AND p.object_id > 100
    GROUP BY
        p.object_id
      , p.index_id
    ORDER BY
        buffer_count DESC;

The result is shown here (screenshot of the buffer counts).

Detach and re-attach the database and then re-run the query.

    ---------------------------
    -- Step 2: Detach/Attach
    ---------------------------
    -- Detach
    USE [master]
    GO
    EXEC master.dbo.sp_detach_db @dbname = N'AdventureWorks2012'
    GO

    -- Attach
    USE [master];
    GO

    CREATE DATABASE [AdventureWorks2012] ON
    ( FILENAME = N'C:\sql server\files\AdventureWorks2012_Data.mdf' ),
    ( FILENAME = N'C:\sql server\files\AdventureWorks2012_Log.ldf' )
    FOR ATTACH;
    GO

What is in the bpool now?

    ---------------------------
    -- Step 3: Bpool Stuff?
    ---------------------------
    USE [AdventureWorks2012];
    GO

    SELECT
        OBJECT_NAME(p.object_id) AS [ObjectName]
      , p.object_id
      , p.index_id
      , COUNT(*) / 128 AS [buffer size(MB)]
      , COUNT(*) AS [buffer_count]
    FROM
        sys.allocation_units AS a
        INNER JOIN sys.dm_os_buffer_descriptors AS b
            ON a.allocation_unit_id = b.allocation_unit_id
        INNER JOIN sys.partitions AS p
            ON a.container_id = p.hobt_id
    WHERE
        b.database_id = DB_ID()
        AND p.object_id > 100
    GROUP BY
        p.object_id
      , p.index_id
    ORDER BY
        buffer_count DESC;

And the result (screenshot).

Are all the reads logical at this point?

    --------------------------------
    -- Step 4: Logical Reads Only?
    --------------------------------
    USE [AdventureWorks2012];
    GO

    SET STATISTICS IO ON;
        SELECT * FROM DatabaseLog;
    GO
    SET STATISTICS IO OFF;

    /*
    (1597 row(s) affected)
    Table 'DatabaseLog'. Scan count 1, logical reads 782, physical reads 0,
    read-ahead reads 768, lob logical reads 94, lob physical reads 4,
    lob read-ahead reads 24.
    */

And we can see that the buffer pool was not totally blown away by the detach/attach. Seems like my buddy was wrong. Does anyone disagree or have a better argument?

Another option is to offline and then online the database. Let us try that.

    --------------------------------
    -- Step 5: Offline/Online?
    --------------------------------
    ALTER DATABASE [AdventureWorks2012] SET OFFLINE;
    GO
    ALTER DATABASE [AdventureWorks2012] SET ONLINE;
    GO

    ---------------------------
    -- Step 6: Bpool Stuff?
    ---------------------------
    USE [AdventureWorks2012];
    GO

    SELECT
        OBJECT_NAME(p.object_id) AS [ObjectName]
      , p.object_id
      , p.index_id
      , COUNT(*) / 128 AS [buffer size(MB)]
      , COUNT(*) AS [buffer_count]
    FROM
        sys.allocation_units AS a
        INNER JOIN sys.dm_os_buffer_descriptors AS b
            ON a.allocation_unit_id = b.allocation_unit_id
        INNER JOIN sys.partitions AS p
            ON a.container_id = p.hobt_id
    WHERE
        b.database_id = DB_ID()
        AND p.object_id > 100
    GROUP BY
        p.object_id
      , p.index_id
    ORDER BY
        buffer_count DESC;

It appears that the offline/online operation worked a lot better.


MySQL, optimize and replication

Posted: 17 Jun 2013 05:54 PM PDT

I have two MySQL (MariaDB actually) servers, a master and a slave with standard replication. Two times already in less than a month I've had my slave become unusable.

This is what I experienced:

  • Notice a drop in query per second on the slave with my monitoring tool
  • Connect to the server with the mysql client, run SHOW SLAVE STATUS \G
  • Notice everything looks normal (Waiting for master to send event, seconds behind master = 0, etc)
  • Try and stop the replication with STOP SLAVE
  • STOP SLAVE hangs
  • Impossible to connect to the server with a client
  • Execute SHOW SLAVE STATUS \G on another already connected client, and it hangs too.

After that, I saw the MySQL process using CPU, and I saw a lot of disk writes with iotop, so I let it do its stuff for a while.

After 15 minutes I killed -9 the MySQL process because I got frustrated. I then restarted MySQL, and I could connect and run queries, good.

I execute SHOW SLAVE STATUS \G which now works, and it looks normal: seconds behind master = 0, waiting for events. Then I go on the master and execute CREATE DATABASE foobar to see if the replication works, and it doesn't.

Again, at that point I see a lot of disk writes so I assume the replication is catching up. Two hours later, I finally decide to check SHOW PROCESSLIST and all I see being executed is an OPTIMIZE TABLE table1 ... and then I remember that I had that exact same problem last time.

I kill the process with that query, then I re-execute SHOW PROCESSLIST and sure enough it's catching up with the master. SHOW SLAVE STATUS \G reported 15k seconds behind master, which was coherent.

Anyway, after that wall of text, my questions:

  • why does an OPTIMIZE query block the replication from catching up?
  • can I prevent those OPTIMIZE queries? I didn't put any cron to do that, and as far as I can see, nothing in /etc/cron.{d,daily,hourly,weekly} does this.

Thanks.

MyISAM key buffer

Posted: 17 Jun 2013 02:09 PM PDT

Good evening,

I want to know how I can use information about MyISAM to calculate:

  • size of buffer,
  • size of buffer used,
  • Write ratio,
  • Read ratio,
  • and MyISAM key cache hit rate

and thank you.
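
For reference, a hedged sketch of the usual arithmetic, built from the server counters:

    SHOW VARIABLES LIKE 'key_buffer_size';
    SHOW VARIABLES LIKE 'key_cache_block_size';
    SHOW GLOBAL STATUS LIKE 'Key%';

    -- buffer used    ~= key_buffer_size - (Key_blocks_unused * key_cache_block_size)
    -- read hit rate  ~= 1 - (Key_reads / Key_read_requests)
    -- write ratio    ~= Key_writes / Key_write_requests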

Optimizing disk space for MyISAM tables: what is the benefit of ALTER and ANALYZE if you already do OPTIMIZE?

Posted: 17 Jun 2013 12:45 PM PDT

Per this post, in order to optimize the storage of a MyISAM table, you need to:

    OPTIMIZE TABLE ;
    ALTER TABLE ENGINE=MyISAM ;
    ANALYZE TABLE ;

It's obvious what OPTIMIZE does for disk space, but what do ALTER and ANALYZE do when used after OPTIMIZE? I am asking specifically about disk storage; so, for example, I don't care that ANALYZE improves join performance, etc.

Configure SQL Server Database for High Availability & Warehousing

Posted: 17 Jun 2013 11:09 AM PDT

I have 3 DB servers (DB1, DB2 & DB3 - all on Win 2008 R2 with SQL Standard 2008 R2 in the same datacenter). Let's assume AdventureWorks as an example database.

My goal:

  • Set up DB mirroring with automatic failover, i.e. DB2 becomes the mirror server
  • Set up hourly transaction log backups for point-in-time recovery
  • Set up a reporting/warehouse environment, i.e. DB3 becomes the warehouse/reporting server

Setting up DB mirroring and hourly txn log backups is easy but my question is - What are my options for replicating the principal database to the DB3 server for reporting/warehousing needs? Can I use log shipping considering I do txn log backups?

The AdventureWorks database on DB3 needs to be in read-only mode.

EDIT: The reporting database can be up to a day behind.

Select query using a map [closed]

Posted: 17 Jun 2013 01:06 PM PDT

    CREATE TABLE grid_rows(
        [row_id] [int] NOT NULL,
        [column_id] [smallint] NOT NULL,
        [column_val] [decimal](18, 6) NULL
    )
    -- (row_id, column_id) is a composite primary key

For a given Map<column_id, column_value> what would be the select query to get the row_id?

Table values:

    1 k1 v1
    1 k2 v2
    1 k3 v3
    2 k1 v1
    2 k5 v5
    2 k6 v6

How do I get the row_id (i.e. 1) which matches all the key-value pairs of {k1:v1, k2:v2, k3:v3}?
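
A hedged sketch of the classic relational-division approach, using the post's k/v placeholders for the map entries: keep the rows matching any pair, then demand that a row_id matched exactly as many pairs as the map holds:

    SELECT row_id
    FROM grid_rows
    WHERE (column_id = 'k1' AND column_val = 'v1')
       OR (column_id = 'k2' AND column_val = 'v2')
       OR (column_id = 'k3' AND column_val = 'v3')
    GROUP BY row_id
    HAVING COUNT(*) = 3;   -- number of entries in the map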

How to set password for PostgreSQL database with a script?

Posted: 17 Jun 2013 11:30 AM PDT

In a bash script I'm writing, I need to create a database and set a password that is generated automatically by my script.

The problem is I can't find a way to set up the password without user interaction.

I need something like this:

$ createdb mydb -p $(cat password_file)  

Any ideas?
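
A hedged sketch (role and database names are assumptions): set the password with psql, which accepts the SQL on the command line, then create the database:

    PASSWORD=$(cat password_file)
    psql -d postgres -c "CREATE ROLE myuser LOGIN PASSWORD '$PASSWORD';"
    createdb -O myuser mydb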

How to create a table in MySQL with automatic numeration of columns?

Posted: 17 Jun 2013 04:44 PM PDT

Example: my data is from a table named BIKE.

    Bike_No   Repair_Date     Repair_Cost
    --------  -----------     -----------
    ABC1234   2013-01-05      50.00
    BMX5678   2013-02-04      75.00
    ABC1234   2013-01-25      20.00
    BON3333   2013-03-06      80.00
    DEB1111   2013-08-04      40.00
    ABC1234   2013-09-06      50.00

I want to see the repair cost for each bike based on a date range (example: from 01/01/2013 to 31/03/2013), and the result must display the repair cost with Bike_No and the affected months as columns. How do we generate the affected months as columns based on the queried date range?

Simplify subqueries with join

Posted: 17 Jun 2013 11:30 AM PDT

I'm trying to create a report in SQL Server Reporting Services that shows the availability of multiple sites. The sites are monitored by a tool that uses a SQL Server 2012 database to store the data.
I've created a query that shows me the results and I'm wondering if I can rewrite this query in a single select without the subqueries.

My actual query looks like

    SELECT
        t1.CHECKID
      , t1.CHECKNAME
      , t2.Startdate
      , t2.Enddate
      , t2.StatusID
      , t2.Month
      , t2.Year
    FROM
        (SELECT CHECKID, CHECKNAME FROM CHECKS) AS t1
    LEFT JOIN
        (SELECT
              STATUSCHANGES.CHECKID
            , STATUSCHANGES.DATETIME AS Startdate
            , MIN(STATUSCHANGES_1.DATETIME) AS Enddate
            , STATUSCHANGES.STATID AS StatusID
            , CHECKS.CHECKNAME
            , STATUS.STATUSNAME
            , DATEPART(MONTH, STATUSCHANGES.DATETIME) AS Month
            , DATEPART(YEAR, STATUSCHANGES.DATETIME) AS Year
        FROM
            STATUSCHANGES
            INNER JOIN STATUSCHANGES AS STATUSCHANGES_1
                ON STATUSCHANGES.CHECKID = STATUSCHANGES_1.CHECKID
                AND STATUSCHANGES.DATETIME < STATUSCHANGES_1.DATETIME
            INNER JOIN CHECKS ON STATUSCHANGES_1.CHECKID = CHECKS.CHECKID
            INNER JOIN STATUS ON STATUSCHANGES.STATID = STATUS.STATUSID
        GROUP BY
            STATUSCHANGES.CHECKID,
            STATUSCHANGES.DATETIME,
            STATUSCHANGES.STATID,
            CHECKS.CHECKNAME,
            STATUS.STATUSNAME
        HAVING
            (STATUSCHANGES.STATID IN (3))
            AND DATEPART(Q, STATUSCHANGES.DATETIME) = 2
        ) AS t2
        ON t1.CHECKID = t2.CHECKID
    ORDER BY t2.Startdate

Is it possible to integrate the first subquery t1 in the from clause of subquery t2?
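
Yes: t1 is just CHECKS projected to two columns, so the outer shell can reference CHECKS directly, and, assuming every CHECKID in STATUSCHANGES exists in CHECKS, the inner joins to CHECKS and STATUS only supply names the outer query never uses, so they can be dropped as well. A hedged sketch:

    SELECT
        c.CHECKID
      , c.CHECKNAME
      , t2.Startdate
      , t2.Enddate
      , t2.StatusID
      , t2.Month
      , t2.Year
    FROM CHECKS AS c
    LEFT JOIN
        (SELECT
              STATUSCHANGES.CHECKID
            , STATUSCHANGES.DATETIME AS Startdate
            , MIN(STATUSCHANGES_1.DATETIME) AS Enddate
            , STATUSCHANGES.STATID AS StatusID
            , DATEPART(MONTH, STATUSCHANGES.DATETIME) AS Month
            , DATEPART(YEAR, STATUSCHANGES.DATETIME) AS Year
        FROM STATUSCHANGES
        INNER JOIN STATUSCHANGES AS STATUSCHANGES_1
            ON STATUSCHANGES.CHECKID = STATUSCHANGES_1.CHECKID
            AND STATUSCHANGES.DATETIME < STATUSCHANGES_1.DATETIME
        GROUP BY
            STATUSCHANGES.CHECKID,
            STATUSCHANGES.DATETIME,
            STATUSCHANGES.STATID
        HAVING
            STATUSCHANGES.STATID IN (3)
            AND DATEPART(Q, STATUSCHANGES.DATETIME) = 2
        ) AS t2
        ON c.CHECKID = t2.CHECKID
    ORDER BY t2.Startdate;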

how important are mysql's innodb logs?

Posted: 17 Jun 2013 11:07 AM PDT

I'm basically concerned about the following two settings; these are my defaults:

    innodb_log_buffer_size  1048576
    innodb_log_file_size    5242880

I haven't changed them for two reasons: one, I don't know what they are, and two, changing them is risky, according to articles I've read.

My application's data structure is write-intensive and I use only InnoDB tables. Currently I've set innodb_buffer_pool_size to 5GB, and my database size is around 7GB and increases at the rate of 400MB per day. Also note, this 400MB of inserts is done within a period of only about 30 minutes, and some might refer to this as a "bulk insert".

As far as I understand, log files keep a record of all the insert/update operations; why do I need this?

What are these log files and why should I increase their size?

And lastly, if I do not want to keep a log, how should I fine-tune my database?

Given two known statements, how can I replicate deadlock?

Posted: 17 Jun 2013 10:43 AM PDT

I have two statements (an update against every row in one table (call it table A) and a delete on another table that looks up rows in table A) that I know are causing occasional deadlocks. It seems that there is an X lock and a U lock on the same primary key index of table A.

I have been trying, and failing, to replicate the deadlock in SQL Server Management Studio. Should I be able to?

Separately, the delete statement is very inefficient, and I think I can fix the issue by creating a covering index so that the primary key index mentioned above is no longer included in the actual execution plan of the delete statement. Given that ultimately the same rows are required by both statements, will this guarantee no deadlocks, or simply reduce the chance of one happening by giving SQL Server a different path to the data?
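
Yes, you should be able to, but it takes two sessions whose lock acquisition order crosses; a single query window can't deadlock itself. A hedged sketch with a hypothetical table (run the two transaction blocks in separate SSMS windows):

    -- setup (once)
    CREATE TABLE dbo.DeadlockDemo (id int PRIMARY KEY, val int);
    INSERT INTO dbo.DeadlockDemo VALUES (1, 0), (2, 0);

    -- session 1
    BEGIN TRAN;
    UPDATE dbo.DeadlockDemo SET val = 1 WHERE id = 1;
    WAITFOR DELAY '00:00:10';
    UPDATE dbo.DeadlockDemo SET val = 1 WHERE id = 2;  -- crosses session 2's order
    COMMIT;

    -- session 2 (start within the 10 seconds)
    BEGIN TRAN;
    UPDATE dbo.DeadlockDemo SET val = 2 WHERE id = 2;
    UPDATE dbo.DeadlockDemo SET val = 2 WHERE id = 1;  -- one side dies with error 1205
    COMMIT;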

SQL Server giving intermittent connection failures

Posted: 17 Jun 2013 02:22 PM PDT

I have a SQL Server 2000 running on Windows 2003 Server that is giving me intermittent connection failures.

The user connections are getting closed, but only connections that have been established for a while and are sitting idle. Meaning when users are connected to an application that is running on this server, the programs start up fine. The problem usually shows up after the connection has been sitting for a bit. I'm wondering if there is some odd setting that is killing older, inactive connections?

What could be causing this? What can I check to further troubleshoot this.

EDIT:

Is it possible that my problems are caused by running SQL Server 2000 personal edition, where the old server that died and was replaced was on Standard? I have no way to verify the old version, but is this possible?

Custom sp_who/sp_whoUsers

Posted: 17 Jun 2013 03:32 PM PDT

I need to allow a client in a dev DW SQL 2K8R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row as if the user ran the sp themselves with their current permissions.

    USE [master]
    GO

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO

    CREATE PROCEDURE [dbo].[usp_who] WITH EXECUTE AS OWNER
    AS
    BEGIN
        SET NOCOUNT ON;
        EXEC master.dbo.sp_who;
    END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the below instead of calling sp_who, and it only returns one row.

select * from sysprocesses  

It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions for this seemingly unique problem?

Need to suppress rowcount headers when using \G

Posted: 17 Jun 2013 11:31 AM PDT

Is there a command to suppress the rowcount headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the -s and --skip-column-names options, but these don't suppress the rowcounts.

How to search whole MySQL database for a particular string

Posted: 17 Jun 2013 01:32 PM PDT

Is it possible to search all of a database's tables (rows and columns) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello" and I don't know in which table this string is saved. Is it possible?

Using MySQL.

I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
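
Since the string could be in any of the 35 tables, one simple approach is to dump the database to text and search that, as in this hedged sketch run from the Linux shell (each INSERT line carries its table name, so grep shows where the match lives):

    # one INSERT per row so the matching line names its table
    mysqldump -u root -p --skip-extended-insert A | grep -i -n 'hello'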

multivalued weak key in ER database modeling

Posted: 17 Jun 2013 12:31 PM PDT

I was wondering, since I didn't find any clarification on this. I want to store movies that exist in different formats (DVD, Blu-ray etc.), where the price for each format differs, as well as the stock quantity of each format, so I came up with this:

(first ER diagram)

Is this correct from a design perspective? Does this imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

(alternative ER diagram)

Thanks in advance.

EDIT: I'll add some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie that exists in the company I need to store format, price and stock quantity. I will also need to store customer information with a unique id, name, surname, address, the movies that he/she has already bought, and his/her credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose that other items exist apart from movies) that the customer wants to buy.

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 17 Jun 2013 04:32 PM PDT

First a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past in Access I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access db and link to this same DSN. I get as far as seeing the tables, but after making my selection, I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (System and User) with no luck. The environment is Win XP, Access 2007, Advantage DB 8.1.

Database Mail to Distribution List

Posted: 17 Jun 2013 11:13 AM PDT

I have a SQL Server 2005 instance on which I have configured Database Mail to set up monitoring alerts and notifications. I also created an email distribution list containing a couple of addresses. The thing is that when I send the email from the database to the distribution list, it never gets to the inboxes, but if I send to the addresses individually it does.

I tried to send an email from Outlook to the distribution list and it worked fine. I do not understand what is happening. Please can you help me?

[SQL Server] Table 1 New Record Autocounter field value copied to Table 2 and creates new record



Table 1 New Record Autocounter field value copied to Table 2 and creates new record

Posted: 17 Jun 2013 09:49 AM PDT

When the user selects Add New Record, Table1 uses the autocounter field to generate a primary key and creates the new record for data entry. Table2 is part of a rules engine; it needs to have a one-to-one relationship of the primary keys. The trigger must fire after the primary key (autocounter) is assigned to Table1, then add (or append) a record to Table2 that includes the same primary key as Table1.

Note: Table1 never has a record deleted. It has an Active/Inactive field to determine what is live or historical. Table2 will constantly have dozens of new columns added as the rules engine is developed. Table2 holds metadata and will not use an autocounter. Since the PK from the Table1 autocounter is unique, and in sequence, it will just be a 1:1 relationship between Table1 <--> Table2. At least that is the goal. The front-end interface is Access 2010 using the SQL Server Native Client 11.0 and DSN-less linked tables.
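
A hedged sketch of such a trigger (table and column names are assumptions; selecting from the inserted pseudo-table keeps it correct for multi-row inserts):

[code="sql"]CREATE TRIGGER dbo.trg_Table1_AfterInsert
ON dbo.Table1
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- copy each new Table1 primary key into Table2 (1:1)
    INSERT INTO dbo.Table2 (PrimaryKeyID)
    SELECT i.PrimaryKeyID
    FROM inserted AS i;
END;[/code]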

Trigger to set a value for a column

Posted: 17 Jun 2013 07:21 AM PDT

I have a column that has to be set to a certain value (A, B, C or D) when a certain column has a value, i.e. is not null. The columns are in various tables in the database, and the select is shown below:

[quote]select A.decription, B.Decsription, C.Description, D.Decription
from Audit A
left join ... on A.Id = B.ID
left join ... on B.Id = C.ID
left join ... on C.ID = D.ID[/quote]

The column X in the Audit table has to be updated with A, B, C or D: when there is a value in A.description, the value for X will be B; when B.Description has a value, the value of X will be C; when C.description is there, the value for X will be D. How do I form a trigger that does this?

Thanks

SQL Server scheduled jobs

Posted: 17 Jun 2013 07:01 AM PDT

I have a SQL Server job that checks for a file in an FTP server folder. If the file is there, the file is processed and the job completes. If there is nothing there (the vast majority of the time), the job fails and an error is reported. I am checking every minute from 8 am to 4 pm, Monday through Friday. What I would like to do is check whether the file is in the FTP server folder; if it is there, process the remainder of the 9 steps. If it is not, simply return nothing and wait until the next minute comes up. I have looked for something like a count to determine whether a file exists or not. I am checking for the validity of the file in the first step of the processing, but I think that there should be an easier and less error-prone way to check. I am using SQL Server 2008 R2.
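
A hedged sketch of a first job step using the undocumented xp_fileexist, so the step can exit quietly when nothing is there (the path is an assumption):

[code="sql"]DECLARE @exists int;
EXEC master.dbo.xp_fileexist '\\ftpserver\folder\file.csv', @exists OUTPUT;

IF @exists = 0
BEGIN
    -- nothing to do this minute; succeed without processing
    RAISERROR('No file found; exiting.', 0, 1);
    RETURN;
END[/code]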

Replace the values from the lookup tables

Posted: 17 Jun 2013 07:16 AM PDT

I have 2 lookup tables. I need to get and replace the values from the lookup tables for the data below.

[code="plain"]Lookup Table 1
LT_ID      Desc
---------  --------
Cat_Code   Category
Prod_Code  Product

Lookup Table 2
LT_ID         Desc
------------  -------
Drama__Q__M   Drama
Comedy__Q__M  Comedy
Blue__Q_Ray   BlueRay[/code]

I have another table with the below information:

[code="plain"]1 Changed Cat_code to Drama__Q__M From Null
2 Changed Product_code to Blue__Q__Ray From Null
3 Changed Cat_code to Comedy__Q__M From Drama__Q__M[/code]

From the above I need an output like below:

[code="plain"]1 Changed Category to Drama From Null
2 Changed Product to BlueRay From Null
3 Changed Category to Comedy From Drama[/code]

Can anyone please provide the solution for this?

Thank you in advance

Extracting Excel files from multiple subfolders and loading them into a SQL Server table based on city name

Posted: 17 Jun 2013 03:23 AM PDT

Hi, I have a folder which contains subfolders (city names), and in those subfolders I have Excel files. I have to take all the Excel files from the subfolders and load them into a SQL Server table based on the city name. The files are structured as below. The Dallas folder contains these files:

DALLAS_Cars_2011.xls
DALLAS_Trucks_2011.xls
DALLAS_Bikes_2011.xls

My requirement is that I have to load the cars data into a Cars table and the trucks data into a Trucks table, based on their names, and every city's files should go into the same tables (Cars, Trucks...). And I have to create an ID column based on the city name; it can be based on alphabetical order. Can any of you guys help me with this?

files and filegroups

Posted: 16 Jun 2013 09:55 PM PDT

Hi,

I have a database with 3 filegroups: primary, secondary and third, with secondary marked as default. Now I have created a table on the third filegroup and inserted some data into the table. My question is: where will the data get stored, secondary or third, while secondary is still marked as default?

Thanks
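
For what it's worth, the ON clause at CREATE TABLE time decides placement, and the default filegroup only applies when no filegroup is specified. A hedged sketch to verify where the table actually landed:

[code="sql"]SELECT t.name AS table_name, ds.name AS filegroup_name
FROM sys.tables AS t
JOIN sys.indexes AS i
    ON t.object_id = i.object_id AND i.index_id IN (0, 1)  -- heap or CI
JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
WHERE t.name = 'YourTable';  -- assumed table name[/code]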

How to get this trigger on insert/update working

Posted: 16 Jun 2013 10:52 PM PDT

Hi all!

I need to do something upon update/insert of a new record, and as it has to be automatic, I think a trigger is the answer. As a start, I wrote this:

[code="sql"]CREATE TRIGGER [dbo].[SetOtherWorkAccount]
ON [dbo].[timeaccountmovement]
AFTER INSERT, UPDATE
AS
IF UPDATE (starttime)
BEGIN
    UPDATE timeaccountmovement
    SET timeaccountid = 22
    WHERE ID IN (SELECT ID FROM inserted)
END[/code]

The syntax is OK; it's created on table timeaccountmovement. The first problem here is that I do not know which field will be updated - it could be any field in the table, and the trigger should fire anyway. OK, I tried to change the value of starttime on a single record - it should then have a timeaccountid of 22 after that happened. But nothing happened. What am I doing wrong?

[MS SQL Server] Centralised monitoring of disk space



Centralised monitoring of disk space

Posted: 16 Jun 2013 11:58 PM PDT

Hi Experts,

I want to monitor the disk space of all servers, and I have created a centralised server from which I can check all the servers' disk space at once using xp_fixeddrives. How can I make this a scheduled job, and is there any script to show the servers having the least disk space first?

Thanks in advance.
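
A hedged sketch of the T-SQL for such a job step (xp_fixeddrives returns the drive letter and free MB; run it per server, e.g. via a multi-server job or linked servers, and sort ascending so the tightest drives come first):

[code="sql"]CREATE TABLE #drives (drive char(1), free_mb int);

INSERT INTO #drives (drive, free_mb)
EXEC master.dbo.xp_fixeddrives;

SELECT @@SERVERNAME AS server_name, drive, free_mb
FROM #drives
ORDER BY free_mb ASC;   -- least free space first

DROP TABLE #drives;[/code]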

[Articles] Losing Data



Losing Data

Posted: 16 Jun 2013 11:00 PM PDT

As data professionals, we want to avoid losing data. We won't always be successful, but we can avoid making the easy mistakes.
