Saturday, April 20, 2013

[SQL Server 2008 issues] Normalization

Normalization

Posted: 19 Apr 2013 01:44 PM PDT

Is it possible to identify, just by looking at a table, whether it is normalized or denormalized?

Master Database

Posted: 19 Apr 2013 01:06 AM PDT

Hi, I found that some developers recently created user tables in the master database. Can we create user tables in the master database? If yes, in what situations would we need to? As far as I know, we should not create tables in the master database. This is the script I used to find the tables:

SELECT * FROM sys.tables
GO
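For what it's worth, when run in the context of master that script also lists Microsoft-shipped objects; filtering on is_ms_shipped narrows it to genuine user tables:

USE master;
GO
-- is_ms_shipped = 0 excludes the system objects that ship with SQL Server
SELECT name FROM sys.tables WHERE is_ms_shipped = 0;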

Database Virtual Log File count

Posted: 19 Apr 2013 05:32 PM PDT

Hi, I am using SQL Server 2008 R2 Express. I heard the virtual log file (VLF) count may be a problem for server performance. I searched for some time but didn't come up with an optimum VLF count; some say it depends. Can anyone please help me find an optimum VLF count for my databases? Thank you.
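For reference, a minimal way to measure it, run in each database: DBCC LOGINFO returns one row per virtual log file, so the row count is the VLF count.

USE MyDatabase;   -- hypothetical database name
GO
DBCC LOGINFO;     -- one row per VLF; a few dozen is typically fine,
                  -- hundreds or thousands usually indicate runaway autogrowth

There is indeed no single magic number, but very high counts (hundreds or more) are what tend to slow recovery and log operations.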

Update using a function

Posted: 19 Apr 2013 12:43 AM PDT

Hi, I want to perform an update inside a function but I am not able to do it; it throws the following error:

Msg 443, Level 16, State 15, Procedure GetSTNValue, Line 21
Invalid use of a side-effecting operator 'UPDATE' within a function.

CREATE FUNCTION dbo.GetSTNValue (@FromStore int, @ToStore int)
RETURNS varchar(10)
AS
BEGIN
    DECLARE @STN varchar(10) = 0, @count int = 0, @AWBNo varchar(10) = 0

    SET @AWBNo = (SELECT ISNULL(gs.AWBNo, 0) FROM GV_STNDetails gs
                  WHERE gs.From_StoreCode = @FromStore AND gs.TO_StoreCode = @ToStore)
    SET @count = (SELECT gsv.CountValue FROM GV_STNCountValues gsv
                  WHERE gsv.StoreCode = @FromStore)

    IF @AWBNo = 0
    BEGIN
        IF NOT EXISTS (SELECT ISNULL(gs.STNNO, 0) FROM GV_STNDetails gs
                       WHERE gs.From_StoreCode = @FromStore AND gs.TO_StoreCode = @ToStore)
        BEGIN
            SET @STN = CONVERT(varchar, REPLACE(STR(@FromStore, 4), ' ', 0))
                     + CONVERT(varchar, REPLACE(STR(@count + 1, 5), ' ', '0'))
            --SELECT @STN
            --RETURN (@STN)
            UPDATE GV_STNCountValues
            SET CountValue = @count + 1
            WHERE StoreCode = @FromStore
        END
        ELSE
        BEGIN
            SET @STN = (SELECT gs.STNNO FROM GV_STNDetails gs
                        WHERE gs.From_StoreCode = @FromStore AND gs.TO_StoreCode = @ToStore)
            --RETURN (@STN)
        END
    END
    ELSE
    BEGIN
        DECLARE @oldSTN varchar(10)
        SET @oldSTN = (SELECT gs.STNNO FROM GV_STNDetails gs
                       WHERE gs.From_StoreCode = @FromStore AND gs.TO_StoreCode = @ToStore)
        SET @STN = CONVERT(varchar, REPLACE(STR(@FromStore, 4), ' ', 0))
                 + CONVERT(varchar, RIGHT(@STN, 6) + 1)
        --RETURN (@STN)
    END

    RETURN (@STN)
END
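For what it's worth, SQL Server never permits UPDATE against a permanent table inside a function (only table variables can be modified there). A common workaround, sketched below with the poster's object names and the detail logic elided, is a stored procedure with an OUTPUT parameter:

CREATE PROCEDURE dbo.GetSTNValueProc
    @FromStore int,
    @ToStore   int,
    @STN       varchar(10) OUTPUT
AS
BEGIN
    -- ...same lookups and branching as the function above...
    UPDATE GV_STNCountValues          -- side effects are legal in a procedure
    SET CountValue = CountValue + 1
    WHERE StoreCode = @FromStore;

    SET @STN = '';                    -- computed exactly as in the function
END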

Help with Idea on Round Robin TSQL solution

Posted: 19 Apr 2013 01:58 PM PDT

Hi SQLServerCentral, here is what I am trying to accomplish and the approach I am trying to come up with. I have a table called "RoundRobin" that holds the StaffID and StudentCount. What I would like to do is, using the current student count per staff member, distribute additional students to each staff member based on load. For example, I would like to redistribute 50 students to the staff members based on load. What I am hoping to get is the number of students that should be redistributed to each staff member, based on the input number (50 in the example):

RoundRobinID  StudentCount  Student New Count  Number of new students
1             1             20                 19
2             4             20                 16
3             5             20                 15
4             25            25                 0
(totals)      35            85                 50

Any help would be appreciated.

IF OBJECT_ID('dbo.RoundRobin', 'u') IS NOT NULL DROP TABLE dbo.RoundRobin

create table RoundRobin
(
    StaffID INT IDENTITY(1,1) PRIMARY KEY,
    StudentCount INT
)

INSERT INTO dbo.RoundRobin
SELECT 5 UNION
SELECT 25 UNION
SELECT 1 UNION
SELECT 4

select * from dbo.RoundRobin

SQL 2008 R2 Backup Question - Log File

Posted: 19 Apr 2013 10:27 AM PDT

I'm helping a friend troubleshoot an issue with a SQL 2008 R2 database that is sitting behind SharePoint. Currently, SQL is loaded on a server that is using RAID 1 for the OS (13 GB free) and RAID 5 for data (0 GB free). There are 8 log files totaling approximately 100 GB (6-11 GB per file). The automated backups stopped running over a month ago because the log files kept growing and maxed out the space on the RAID 5 array. We tried to shrink the log file, but that option is grayed out in SSMS (possibly because the log file has not committed?). The drive bays in the server and motherboard will not support additional drives. Can I plug in a USB drive and back up the log file to the USB drive? I'm looking for options to back up and shrink the log file so that we can get SharePoint and SQL back online, and then adjust the backup strategy longer term to prevent this from happening again. Appreciate your feedback! Michael
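In case it helps: yes, SQL Server will happily back up to any path the service account can write to, including a mounted USB drive. A minimal sketch, with hypothetical paths and names (repeat the SHRINKFILE per log file, using each logical file name):

BACKUP LOG SharePointDB                         -- hypothetical database name
TO DISK = 'E:\Backups\SharePointDB_log.trn'     -- assumed USB drive letter
WITH INIT;
GO
DBCC SHRINKFILE (SharePointDB_log, 1024);       -- logical log file name assumed; target size in MB

The log backup marks the inactive portion of the log as reusable, which is what allows the shrink to reclaim space.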

Recommend Specs for Reporting Server

Posted: 19 Apr 2013 10:49 AM PDT

I've been tasked to figure out the specs for the reporting server. I was thinking of loosely basing them on our production server, which has the following specs:

Processor: Intel Xeon X7350 @ 2.93 GHz (2 processors)
Memory: 32 GB
Operating System: Windows Server 2008 R2 Standard (64-bit)
Storage: 1.3 TB

Any suggestions would be great. The reporting database is going to be pretty small, but I want a copy of the production database in case I ever have to do ad hoc reporting.

help searching multiple words

Posted: 19 Apr 2013 01:08 AM PDT

Hello, I am trying to make a procedure that returns values only if the query matches all words. Example:

declare @Names TABLE (name varchar(100))
DECLARE @Searchs TABLE (word varchar(100))

insert into @Searchs (word)
select items from dbo.Split('John Andrews', ' ')

insert into @Names values ('John Andrews')
insert into @Names values ('John Adams')
insert into @Names values ('John Matthews')
insert into @Names values ('Mark Snow')

select name, word
from @Names
inner join @Searchs on name like '%' + word + '%'

This produces the following result:

name           word
John Andrews   John
John Adams     John
John Matthews  John
John Andrews   Andrews

My goal is to return only one result: John Andrews. What do I need to do to achieve this? PS: Is this "procedure" advisable for searching multiple words? Thank you in advance.
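One standard way to require a match on every word (a sketch built on the tables above) is relational division: group by name and compare the number of matched words with the total number of search words.

select n.name
from @Names n
inner join @Searchs s on n.name like '%' + s.word + '%'
group by n.name
having count(*) = (select count(*) from @Searchs)

With the sample data this returns only John Andrews, since he is the only name matching both words.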

What causes timestamp/rowversion to update?

Posted: 19 Apr 2013 06:23 AM PDT

We've got transactional replication set up on a number of tables in a database from one SQL 2008 server to another. Many of the tables have a timestamp/rowversion column defined, and in our table replication options we have "convert timestamp to binary" selected to make sure the values stay consistent between the two databases. Periodically, I've noticed that there will be a large backlog of replicated transactions to deliver, and looking at what the replication account is doing, the vast majority of the operations are just updating the timestamp column to a new value. We're trying to figure out what might cause just the timestamp/rowversion column to update on the source db, and sadly I'm not finding much info. It appears that standard maintenance operations (index reorgs/rebuilds, etc.) don't update it; does anyone have any other suggestions?
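For reference, a rowversion column advances whenever its row is touched by any INSERT or UPDATE, even an update that writes a column back to the value it already had, which a quick sketch demonstrates:

CREATE TABLE dbo.RvDemo (id int PRIMARY KEY, val int, rv rowversion);  -- demo table
INSERT dbo.RvDemo (id, val) VALUES (1, 10);

SELECT rv FROM dbo.RvDemo WHERE id = 1;        -- note the value
UPDATE dbo.RvDemo SET val = val WHERE id = 1;  -- a "no-op" update
SELECT rv FROM dbo.RvDemo WHERE id = 1;        -- rowversion has still advanced

So any process that rewrites rows with identical values (application saves, blanket UPDATE statements, triggers) would still generate replicated commands.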

Transactional replication

Posted: 19 Apr 2013 06:01 AM PDT

Today we ran out of disk space for the distribution database primary file, and we added space. But while space was exhausted, replication failed and the agents threw errors. After a while the Log Reader and Distribution Agents restarted on their own and are running fine. My question is: will the agents pick up where they left off?

Script Table As -> ALTER To - grayed out??? Why?

Posted: 14 Jul 2010 06:52 AM PDT

I am using SSMS 2008 and went to add some new columns to a table. I can do this in the design view, but why can't I script an ALTER statement? Just adding new columns shouldn't require the table to be dropped or suffer any "massive changes". I Googled a bit and it said that if I use the designer it will copy all the data to a temp location, drop, then recreate the table. So I assume it will preserve all PK IDs, but what if I had a 16 TB table? That would be a MASSIVE disk I/O operation... Am I missing something here?
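For the specific case of adding columns, writing the statement by hand sidesteps the designer entirely; adding a NULLable column is a metadata-only change (hypothetical names):

ALTER TABLE dbo.MyTable
ADD NewColumn int NULL;

It may also be worth checking Tools > Options > Designers > "Prevent saving changes that require table re-creation" in SSMS 2008, which controls whether the designer is allowed to take the copy-drop-recreate route at all.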

Revoke Grant permission to DB_OWNER

Posted: 19 Apr 2013 03:37 AM PDT

Hi, how can I REVOKE the GRANT permission from db_owner role members on SQL Server 2008 R2?

You will not be able to switch out or switch in data in this table since the table has non storage aligned indexes on it.

Posted: 19 Apr 2013 04:19 AM PDT

Hi, I partitioned a table on a date column as:

FILEGROUP = APPLE.ndf
FILES = APPLE_2010, APPLE_2011, APPLE_2012, APPLE_2013

Now I am trying to implement sliding window partitioning with the partitioning wizard. When I open Manage Partition, I am not able to select the sliding window option:

"You will not be able to switch out or switch in data in this table since the table has non storage aligned indexes on it."

Can you please help me with setting up sliding window partitioning? Thanks all.
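That message means at least one index on the table is not partitioned on the same partition scheme as the table itself; partition switching requires every index to be aligned. A hedged sketch of the usual fix, with hypothetical names:

-- Rebuild each non-aligned index onto the table's partition scheme
CREATE NONCLUSTERED INDEX IX_MyTable_SomeColumn
ON dbo.MyTable (SomeColumn)
WITH (DROP_EXISTING = ON)
ON ps_MyPartitionScheme (DateColumn);   -- partition scheme and partitioning column assumed

Once every index is rebuilt on the partition scheme, the sliding window option should become selectable.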

reason for this???

Posted: 19 Apr 2013 03:39 AM PDT

I've recently run across this structure in a database that I'm migrating:

create table fktest
(
    dept int not null,
    dps int not null,
    constraint [pkfktest] primary key clustered (dept, dps),
    constraint [fktester] foreign key (dept, dps) references fktest (dept, dps)
) on [Primary]

Is there a logical reason to have a foreign key that references itself when it's also the primary key in a table? Is this used for some reason? It just doesn't make sense to me. Thoughts?

Questions about extended events

Posted: 01 Oct 2012 11:30 AM PDT

Hey all, I am trying to learn more about extended events. I have been using SQL Profiler to help me troubleshoot SQL performance issues and errors in my Visual Studio projects. I learned about extended events recently and would like to use them instead of SQL Profiler. Is this an appropriate use of extended events? So far, I've been having some trouble using them. My understanding is that you can only view results while the extended events session is running, and that once the session is stopped, the results are discarded. How can I prevent this from happening? I would like to be able to view the events after the session is stopped. Is there a way to do this? Otherwise, while the session is running, is there a way to export the results? Seeing as how the results will be in XML, I was thinking of using the Excel XML Source Pane to parse them. What do other people use? I would greatly appreciate any advice that anyone can provide. Thanks.
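Results do survive the session if it writes to a file target; a sketch using the SQL Server 2012 object names (on 2008 the target is package0.asynchronous_file_target and the read function requires the matching .xem metadata file):

CREATE EVENT SESSION MySession ON SERVER                 -- hypothetical session
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'C:\Temp\MySession.xel');
GO
ALTER EVENT SESSION MySession ON SERVER STATE = START;
GO
-- Read captured events at any time, even after the session is stopped:
SELECT CAST(event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(N'C:\Temp\MySession*.xel', NULL, NULL, NULL);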

Deadlock on update command

Posted: 18 Mar 2013 10:33 PM PDT

I have the following deadlock happening quite frequently, 3-4 times a day. The deadlock seems to happen on two things, one being an index on the status column:

keylock hobtid=72057605790367744 dbid=9 objectname=dbname.dbo.orderha indexname=IX_status id=lock2189e7200 mode=S associatedObjectId=72057605790367744

the other on the primary clustered key:

keylock hobtid=72057602492792832 dbid=9 objectname=dbname.dbo.orderha indexname=PK_orderha id=lock1399f2200 mode=X associatedObjectId=72057602492792832

DEADLOCK INFO:

Node:1 KEY: 9:72057602492792832 (7900b77d3449) CleanCnt:2 Mode:X Flags: 0x1
Grant List 1:
Owner:0x00000001A34042C0 Mode: X Flg:0x40 Ref:1 Life:02000000 SPID:67 ECID:0 XactLockInfo: 0x00000002643C19B0
SPID: 67 ECID: 0 Statement Type: UPDATE Line #: 1
Input Buf: Language Event: update orderha set status=2 where sessionid='7560129' and orderha=1
Requested by:
ResType:LockOwner Stype:'OR'Xdes:0x00000001A183B780 Mode: S SPID:64 BatchID:0 ECID:0 TaskProxy:(0x00000000D7EAC538) Value:0x118db7c0 Cost:(0/0)

Node:2 KEY: 9:72057605790367744 (7a00de2866cc) CleanCnt:2 Mode:S Flags: 0x1
Grant List 0:
Owner:0x00000002E14CBCC0 Mode: S Flg:0x40 Ref:0 Life:00000001 SPID:64 ECID:0 XactLockInfo: 0x00000001A183B7C0
SPID: 64 ECID: 0 Statement Type: SELECT Line #: 1
Input Buf: Language Event: select rsn from orderha where sessionid='7558101' and status < 3
Requested by:
ResType:LockOwner Stype:'OR'Xdes:0x00000002643C1970 Mode: X SPID:67 BatchID:0 ECID:0 TaskProxy:(0x0000000281984538) Value:0x42de2bc0 Cost:(0/456)

Victim Resource Owner:
ResType:LockOwner Stype:'OR'Xdes:0x00000001A183B780 Mode: S SPID:64 BatchID:0 ECID:0 TaskProxy:(0x00000000D7EAC538) Value:0x118db7c0 Cost:(0/0)

Should I use a lock hint to force a block rather than encounter a deadlock, such as UPDLOCK or HOLDLOCK?
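Before reaching for UPDLOCK/HOLDLOCK, one common mitigation for this reader/writer pattern is a covering index, so the SELECT can be satisfied without touching the keys the UPDATE holds; a hedged sketch using the columns from the deadlock:

CREATE NONCLUSTERED INDEX IX_orderha_sessionid_status
ON dbo.orderha (sessionid, status)
INCLUDE (rsn);   -- lets the SELECT read from this index alone

Hints that serialize access will trade the deadlock for blocking, which may be acceptable, but removing the cross-index lookup often removes the cycle entirely.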

Parent-Child But Not All Have Children...?

Posted: 19 Apr 2013 12:46 AM PDT

Here's a tricky one: I have to show values for each of our customers. Data comes from a table something like this:

Customer_id  Value
1            100
2            150
3            1000
4            300
...
10           120
11           150
12           1000
13           200

etc. Most customers just have one account, but some have multiple "sibling" (or "child") accounts. Those with "children" are mapped in a table like:

Customer_id (Parent)  Child_id
1                     10
1                     11
1                     12
2                     13
3                     14
3                     15
5                     16

etc. What I need to do is display a list showing all accounts and their own values, but if an account is a Parent it needs to show the value of itself and all its children added together:

Customer_id  Value
1            1370  (100 + 120 + 150 + 1000)
2            350   (150 + 200)
3            1000
...
10           120
11           150
12           1000
13           200

If a Parent doesn't exist in the Parent-Child mapping table, all we show is its own value:

Customer_id  Value
4            300

This seems like it requires a convoluted query! Any ideas? Thanx!
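Not too convoluted, as it happens; a sketch with hypothetical table names (Accounts and ParentChild), joining each account to its optional children and summing:

SELECT a.Customer_id,
       a.Value + COALESCE(SUM(c.Value), 0) AS Value
FROM Accounts a
LEFT JOIN ParentChild pc ON pc.Customer_id = a.Customer_id
LEFT JOIN Accounts c     ON c.Customer_id  = pc.Child_id
GROUP BY a.Customer_id, a.Value;

Parents pick up their children's values through the joins; accounts with no mapping rows get a NULL sum, which COALESCE turns into zero, so they show only their own value.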

Advice on problems with "Client Processing Time"

Posted: 19 Apr 2013 12:34 AM PDT

Hello all, I've encountered a strange problem on one of my test SQL Servers. It is a VM box on a SAN, with Windows Server 2008, SQL Server 2008 R2, 16 GB RAM and 4 CPU cores allocated. The hardware allocation is more than sufficient for the database that is running on it, and is similar to the current production server.

When I run a test query (which essentially returns the entire results of a view), it takes 9-10 seconds to return the results. However, when I run the same query on the production server, it takes 1-2 seconds. I've monitored the hardware while the query was running and also enabled the execution plan and client statistics. I/O is less than the production box, and the memory and CPU utilisation is next to nothing. The only difference I can see between the queries run on each is that the CLIENT PROCESSING TIME is 3 times as long on the test box.

Interestingly, to make matters even more murky, I can run the query from my own PC (within Management Studio) and it returns the results in 4-5 seconds (less than half the time it takes to run directly on the server, but still twice the time in production). Can anyone recommend some good places to start looking, to uncover what the underlying problem could be? Many thanks in advance for any suggestions. :)

Comma separated lists

Posted: 18 Apr 2013 11:37 PM PDT

Hi all, I've got a couple of queries that use quite large comma separated lists in the WHERE clauses. I've seen (somewhere) a way of putting these into some sort of variable and using the variable instead of the entire list in the WHERE clause. Would someone be kind enough to point me in the right direction?
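One common pattern is to load the values into a table variable (or a table-valued parameter when they come from a client) and use IN or a join against it; a minimal sketch with hypothetical names, using SQL Server 2008 row constructors:

DECLARE @Ids TABLE (Id int PRIMARY KEY);
INSERT INTO @Ids (Id) VALUES (1), (2), (3);   -- the former comma separated list

SELECT t.*
FROM dbo.MyTable t
WHERE t.Id IN (SELECT Id FROM @Ids);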

Sandbox environment for customers

Posted: 19 Apr 2013 01:12 AM PDT

I am looking for ideas on how to solve a few problems we have when our customers create sandbox environments. We are running an ASP.NET web application along with a Windows server and a SQL Agent job configured to run against a production SQL Server database. Customers can create a parallel setup and restore the production database to a new instance. Once this is complete, everything works great and they get full functionality as planned.

Our problems start when the sandbox application starts performing actions that should only happen in the production DB. We may have scheduled tasks that export to the same folder location and file, send emails, or transfer data to other production sites. All of these need to remain functional in the sandbox application, but it should only run newly created tasks or configuration set up after the restore took place.

I can come up with a unique solution for each of the problem areas. I can write a script that could be run to disable or change all of these, but it is a manual step that depends on the customer running it when the environment is restored. I am hoping for a more global solution for preventing or disabling things in a sandbox environment, even something like a restore trigger that could automatically run a script. How do others handle such issues?

Moving data from one mdf to multiple ndf files

Posted: 19 Apr 2013 12:35 AM PDT

I have a database with one 110 GB mdf file. I would like to stripe the I/O across five 25 GB files and purge the data out of the 110 GB file so they are all the same size. What is the easiest way to accomplish this task, if possible?
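A sketch of one common approach, with hypothetical names: add the new files in a new filegroup, rebuild the large tables' clustered indexes onto it (which moves their data), then shrink the original file.

ALTER DATABASE MyDb ADD FILEGROUP DataFG;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_Data2, FILENAME = 'D:\Data\MyDb_Data2.ndf', SIZE = 25GB)
TO FILEGROUP DataFG;
-- ...repeat ADD FILE for the remaining four files...
GO
-- Rebuilding a clustered index on the new filegroup relocates the table's data:
CREATE UNIQUE CLUSTERED INDEX PK_BigTable ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON) ON DataFG;
GO
DBCC SHRINKFILE (MyDb_Data1, 25600);   -- finally shrink the original mdf (target in MB)

Note that the system tables always stay in the primary file, so the mdf can never be emptied completely.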

SQL disk config

Posted: 19 Apr 2013 12:04 AM PDT

Hello - I need to approve a SQL disk configuration. I know it is best to put data, logs, and tempdb on separate drives, but I'm not sure how to determine size. This is what was suggested by the project manager:

E:\ Data - 10 TB
F:\ Logs - 5 TB
G:\ TempDB - 5 TB

The software is Websense and tracks web activity, so it will be writing a lot of records. What is the best way to determine how to size the drives for logs and tempdb? Thanks in advance, Dave

Strange datetime conversion issue

Posted: 16 Apr 2013 04:36 PM PDT

I have been using a query that joins several raw tables (nvarchar datatypes) and has several conversions, nested queries, and CASE WHEN statements. In other words it is messy, but I didn't create it, nor do I have time to recreate it properly. It has been generating a very strange conversion error (the cause, not the error itself, is strange): some kind of datetime conversion error when converting a string datatype. I have debugged every single field from every table in the query that is converted either explicitly or implicitly (these have all been updated to explicit conversions), with no problems present.

The error is very strange because if I change the select fields (none of which are converting to a datetime, implicitly or otherwise) to * the error message goes away; in fact, if I simply add ,* after the last field in the select list it also goes away. However, I can't use this because I don't want every single field from every table in the query. Further, if I change the SELECT statement to SELECT TOP XXX, where XXX is greater than the number of rows returned, the error still does not occur. But if I use SELECT TOP 100 PERCENT the error occurs again.

This made me think it might be an error in the query execution cache, which I cleared and tried again, with the same errors still occurring. The database is set up to use Korean_Wansung_CI_AS on SQL Server Enterprise Edition v10.50.1600.1.

This problem is not critical, as I can set it to use SELECT TOP XXX such that XXX is far larger than will ever actually be required. It was just very strange behavior that I had never seen before. Unfortunately, I can't provide the code or data; both are quite confidential. However, I'm quite sure this is not directly related to the code itself, as the code works in the two scenarios mentioned. Has anyone seen this behavior and/or know the cause?

How can I store Marathi text in SQL Server?

Posted: 18 Apr 2013 10:29 PM PDT

I am going to design an application with a VB 6.0 front end and a SQL Server back end, and I want the complete interface of that application in the Marathi language. When I enter Marathi characters in the database and execute it, it shows "????????". What do I do to enter and store Marathi characters in the database?
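The "????????" usually means the text passed through a non-Unicode type somewhere. Marathi (Devanagari script) needs Unicode storage: nvarchar/nchar columns and N'...' literals. A minimal demonstration:

CREATE TABLE dbo.Names (Name nvarchar(100));   -- nvarchar, not varchar
INSERT INTO dbo.Names VALUES (N'मराठी');        -- the N prefix keeps the literal Unicode
SELECT Name FROM dbo.Names;

The VB6 front end must also pass values as Unicode (e.g. adVarWChar parameters with ADO) for the characters to survive the round trip.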

Warning message for user having no db_owner permission but having full permission for the table

Posted: 18 Apr 2013 09:16 PM PDT

I am accessing my SQL Server 2008 R2 Express server using SSMS. I have about 5 logins. A particular login has all server roles other than sysadmin. In user roles it has only the "public" role for a database. For that database, in Properties -> Permissions, I set all values to 'grant'. Then for some tables I restricted access in the table properties.

Now when I log in with this user I can access the database and all tables other than those for which I completely restricted access, which is what it should be according to the permissions I gave. The only problem is that when I access other tables and open their designs I get a message like:

"You are not logged on as the database owner or system administrator. You might not be able to save changes to tables that you do not own."

But I am able to edit the design, and after editing, when I save, I again get a message like:

"Warnings were encountered during the pre-save validation process, and might result in a failure during save. Do you want to continue attempting to save?"

On clicking save, the design gets saved with the changes I made. It is okay that it got edited, because the user has permission for that table. But why do those messages come even though I have full permission to edit the database and design (even if I don't have db_owner)? Can I avoid those messages? Or is there any problem for the database if I edit designs as a user without db_owner permission?

logical and physical file names of all databases

Posted: 03 Apr 2011 01:00 PM PDT

Is there a query or sp that can pull all databases on the server along with their logical file names and physical file names? Thanks
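One instance-wide option is sys.master_files, which has one row per file for every database on the server:

SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name,
       type_desc            -- ROWS or LOG
FROM sys.master_files
ORDER BY database_name;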

Convert file from ANSI to UNICODE

Posted: 05 Apr 2013 01:10 AM PDT

Currently, we generate files in ANSI. Is there a setting we need to change to generate files in Unicode? Or how can we convert ANSI files to Unicode?

Friday, April 19, 2013

[SQL Server] Non-updateable query

Non-updateable query

Posted: 19 Apr 2013 05:45 AM PDT

Help much needed here! I've got an Access DB as front-end and a SQL back-end. When trying to delete/modify a record in Access (table just linked), I get the usual error 'The record has been changed by...' > Drop changes. I take it the error is on the table, not on the Access form. The table that populates the form is also non-updateable in Access, but I can't find WHY it is not updateable. If anyone sees some clue, please let me know, even if it's not about the updateability. Table script attached:

CREATE TABLE [dbo].[tblVolunteer](
    [VolunteerID] [varchar](20) NOT NULL,
    [PartID] [int] NOT NULL,
    [IDChecksum] [varbinary](20) NULL,
    [Barcode] [nvarchar](50) NULL,
    [SiteID] [varchar](10) NULL,
    [Class] [varchar](50) NULL,
    [Year] [varchar](10) NULL,
    [Title] [varchar](10) NULL,
    [Initials] [varchar](10) NULL,
    [ForeNames] [varchar](50) NULL,
    [Surname] [varchar](50) NULL,
    [DOB] [datetime] NULL,
    [Sex] [varchar](1) NULL,
    [Type] [varchar](10) NULL,
    [Addr1] [varchar](50) NULL,
    [Addr2] [varchar](50) NULL,
    [Town] [varchar](50) NULL,
    [County] [varchar](50) NULL,
    [PostCode] [varchar](50) NULL,
    [ConsentDate] [datetime] NULL,
    [Leader] [bit] NULL,
    [MobTel] [varchar](50) NULL,
    [Email] [varchar](50) NULL,
    [Status] [int] NULL CONSTRAINT [df_tblVolunteer_Status] DEFAULT ((0)),
    [Date] [datetime] NULL,
    [Reason] [int] NULL CONSTRAINT [df_tblVolunteer_Reason] DEFAULT ((0)),
    [Comment] [varchar](400) NULL,
    [UserName] [varchar](50) NULL,
    [Password] [varchar](20) NULL,
    [Flag] [bit] NOT NULL CONSTRAINT [df_tblVolunteer_Flag] DEFAULT ((0)),
    [Accept] [datetime] NULL,
    [PrintSelect] [bit] NOT NULL CONSTRAINT [DF_tblVolunteer_PrintSelect] DEFAULT ((0)),
    [Created] [datetime] NOT NULL CONSTRAINT [df_tblVolunteer_Created] DEFAULT (getdate()),
    [CreatedBy] [varchar](50) NOT NULL CONSTRAINT [df_tblVolunteer_CreatedBy] DEFAULT (suser_name()),
    [Modified] [datetime] NULL,
    [ModifiedBy] [varchar](50) NULL,
    CONSTRAINT [aaaaaVolunteer_PK] PRIMARY KEY NONCLUSTERED
    (
        [VolunteerID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[tblVolunteer] WITH NOCHECK ADD CONSTRAINT [ck_tblVolunteer_IDChecksum] CHECK ((hashbytes('SHA1',[VolunteerID]+CONVERT([varchar](7),[PartID],(0)))=[IDChecksum]))
GO
ALTER TABLE [dbo].[tblVolunteer] CHECK CONSTRAINT [ck_tblVolunteer_IDChecksum]
GO
ALTER TABLE [dbo].[tblVolunteer] WITH NOCHECK ADD CONSTRAINT [ck_tblVolunteer_Sex] CHECK (([Sex]='M' OR [Sex]='F' OR [Sex] IS NULL))
GO
ALTER TABLE [dbo].[tblVolunteer] CHECK CONSTRAINT [ck_tblVolunteer_Sex]

Thanks!
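For what it's worth, two Access-specific culprits stand out in this script: the nullable bit column (Leader) and the absence of a rowversion/timestamp column, both of which are known to produce the 'record has been changed by another user' behaviour with linked SQL Server tables. A hedged sketch of the usual fix:

-- A rowversion column lets Access detect changes reliably instead of
-- comparing every column value (which misbehaves with NULL bit columns):
ALTER TABLE [dbo].[tblVolunteer] ADD [RowVer] rowversion;
-- Relink the table in Access afterwards so it picks up the new column.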

SQL Server rename concern with IIS

Posted: 19 Apr 2013 06:46 AM PDT

We have a Windows server which will be renamed shortly. It has SQL Server 2008 R2 and IIS v7.5 installed. After the server is renamed, my next step would be to drop and add the server name via SSMS to match the new server name. I'm not that worried about this. My concern is IIS v7.5. Do I have any concerns when renaming the server name or SQL Server name? My level of expertise with IIS is poor. Will the SQL Server rename create problems? If so, what steps should be employed to resolve them? Any comments / URLs would be appreciated. Thank you.
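For reference, the standard SQL-side sequence after the Windows rename is:

EXEC sp_dropserver 'OLDSERVERNAME';         -- the old name, as reported by SELECT @@SERVERNAME
GO
EXEC sp_addserver 'NEWSERVERNAME', 'local';
GO
-- Restart the SQL Server service, then verify:
SELECT @@SERVERNAME;                        -- should now report the new name

On the IIS side the rename itself is usually harmless; the main thing to audit is any connection string in the hosted applications that references the old server name.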

Business Intelligence - Online course

Posted: 19 Apr 2013 03:40 AM PDT

https://www.udemy.com/business-intelligence-para-todos - $29

Cannot drop the table because it does not exist or you do not have permission. [SQLSTATE 42S02] (Error 3701). The step failed.

Posted: 21 Mar 2010 03:54 AM PDT

Good morning. I hope someone can help me. I am running into a problem executing the following query within a job in SQL Server Management Studio:

DROP TABLE xyz..xyz_BatchProcessRuntime

select * into xyz..xyz_BatchProcessRuntime
FROM [xx.xxx.xx.xxx].xyz.dbo.xyz_BatchProcessRuntime
go

It executes for about an hour and then comes back with this error:

Executed as user: xxx\svc_xxxxx. Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000] (Error 0) Cannot drop the table 'xyz..xyz_BatchProcessRuntime', because it does not exist or you do not have permission. [SQLSTATE 42S02] (Error 3701). The step failed.

Why does it take an hour or more to come back and tell me that it can't drop the table? I think the remote database is reachable, because other lines in the job that reference it work fine. When I try to connect to the remote machine to check permissions, I get no response from ping, RDC, or SSMS. Could there be a firewall rule in place that only allows one sort of traffic to connect to the remote database? Is there a way I can track down the problem with the above error?
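Whatever the connectivity issue turns out to be, guarding the DROP makes the step resilient to the table being absent (a small sketch on the same names):

IF OBJECT_ID('xyz..xyz_BatchProcessRuntime', 'U') IS NOT NULL
    DROP TABLE xyz..xyz_BatchProcessRuntime;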

[how to] Relational vs Non-Relational Database for Events Database

Relational vs Non-Relational Database for Events Database

Posted: 19 Apr 2013 04:43 PM PDT

I'm trying to find out whether a SQL or NoSQL solution would be better for creating an events database. I'm creating a ticketing system, similar to Ticketmaster. I know that for either database type, storage is the simple part. The deciding factor is the performance of the following queries:

  • Select events by location within a specific date range.
  • Select all events in a given location (city, town, etc.), sort by date.
  • Search for events by keyword, within a specific date range, within a specific location.

Events basically have ID, NAME, LOCATION, VENUE, START DATE, END DATE

In a relational schema I would have an EVENTS table, a DATES table for storing dates separately because events can occur on more than one date and they are repeatable, and a VENUES table from which the event location (country, city, etc) can be cross-referenced.

I have no experience with NoSQL databases, so if you vote for NoSQL, please suggest how you see the "schema" being organized and which particular DB.

I hope this question is specific enough. Query performance is the deciding factor.

Accidentally Deleted SQL Server Log Reader Service Account

Posted: 19 Apr 2013 05:07 PM PDT

Uh, I accidentally deleted the Security Login in my SQL server that allows for the reading of log files.

Now I can no longer see any of the log files in the SQL Server Logs node of SSMS. :(

What account is it so that I can add back the login?

I think it started with "BUILT IN" or something.

PostgreSQL and default Schemas

Posted: 19 Apr 2013 04:11 PM PDT

Whenever I create a brand new database in PostgreSQL Maestro it creates the following list of default schemas:

(screenshot: the list of default schemas created with the new database)

Now, from my understanding, schemas are like folders for organization, etc. So the question I have is: are all these schemas needed when I create a new DB? If so, what are they used for on the PG side, as I won't ever use them myself?

I can understand information_schema, as this is default for an install of MySQL on a server, though I don't get why each database would need its own as opposed to one for the entire server; but to each DB type its own, I guess.

sql web-application Connection String for member of a domain group

Posted: 19 Apr 2013 05:09 PM PDT

In my testing environment, I have a local sql express (2008R) server instance. I've created a domain group and given it a login in the database.

But now, how do I construct the connection string for my web application? Do I give it the username and password of one of the members of the group?

And, what do I put for a "user" in the Database security section? The group itself doesn't have a password.

Do I have to add the service account from the application pool to the AD group?

Here is what I have for a connection string; but this allows all connections, it seems:

<add name="development"
     connectionString="server=myserver\sqlexpress; database=mydatabase;
                       Persist Security Info=True; Integrated Security=SSPI;
                       Trusted_Connection=True; Pooling=True"
     providerName="System.Data.SqlClient" />

Accidentally turned down SQL memory too low to log in, how do I fix?

Posted: 19 Apr 2013 02:09 PM PDT

I turned down the Maximum Memory of my SQL Server instance to 15 MB... Now I can't log in to increase it. How can I increase the maximum memory without logging in?

Version is SQL 2008 R2
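A standard recovery path, sketched under the assumption that you can restart the service: start the instance with the -f startup parameter (minimal configuration), connect with sqlcmd, and raise the setting:

-- After starting the service with -f, connect and run:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;   -- pick an appropriate value
RECONFIGURE;
-- Then remove -f and restart the service normally.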

Unable to import exported Wordpress DB - #2006 - MySQL server has gone away

Posted: 19 Apr 2013 01:08 PM PDT

I'm having a nightmare trying to clone a live Wordpress site into a local dev environment using MAMP Pro and BackupBuddy.

I have no problem with setup. The problem is that there seems to be some issue within my database that prevents import.

Backupbuddy fails at the database import stage.

If I switch and try to import manually, via MAMP PRO's phpMyAdmin, I get the following:

2006 - MySQL server has gone away

The import always stops at the same place: with my wp_options table.

Experimenting, I was able to import the full database when the wp_options table was NOT included. Thereafter, trying to import only the wp_options table leads to the same error.

The wp_options table is only 3 MB. The whole database is 13 MB.

I followed suggestions online relating to increasing some basic configuration values. In MAMP PRO's startMysql.sh (MAMP.app/bin/startMysql.sh):

max_allowed_packet=100M

I also added the following to my php.ini (MAMP PRO.app --> File menu --> Edit Template --> PHP ..)

max_allowed_packet = 100M
wait_timeout = 6000

Memory limit (in php.ini) is 256MB, max input time is 600, and max execution time is 600.

These values should be more than enough to prevent the "#2006 - MySQL server has gone away" error, but I'm still getting it.

FURTHER, I have tried importing the database as xml, and csv. So far, I simply cannot get a working copy of this database going in my localhost environment.

I even wanted to simply export and cut and paste the actual database, but then I found that the database is stored as .frm, .MYD and .MYI files, while my export is a .sql file that phpMyAdmin would likely convert and segment out; but then the import fails.

Does anyone have any suggestions?

Reasons for incremental backup on Progress OpenEdge

Posted: 19 Apr 2013 02:00 PM PDT

In the case that storage is not a problem: are there actually any good reasons for doing incremental backups instead of just doing full backups?

Edit
This could actually refer to any database with full & incremental backups. In this case we use a Progress OpenEdge RDBMS with support for both backup plans as well as real time transactional log archiving. Apart from that I don't think the question must relate to a single vendor. The choice of full/incremental backup can apply to lots of different database engines.

Why use separate databases for high traffic/volume tables?

Posted: 19 Apr 2013 01:27 PM PDT

While looking at the database structure of an application I am using I recognized that it uses 3 different databases on the same SQL Server instance for different things.

The first one contains the main data that changes rarely.

The second one contains event logs with high traffic and volume and the last one is an archive database for old event logs.

I was wondering what the benefit of this structure might be, because the databases are running on the same instance and the database files are located on the same disk. Therefore I would not expect any performance improvements from this.

I thought, maybe I am overlooking something and somebody can point me to benefits that I did not think of.

Update:
Some good points were made regarding maintenance and security. But I am still wondering if it is possible to get a performance improvement.

A more general question would be: Can the performance of one table suffer from other large tables in the same database (due to fragmentation or for some other reason) or are those effects probably negligible.

Clustered vs Nonclustered Index

Posted: 19 Apr 2013 02:18 PM PDT

My database currently has a primary Key/Clustered index on the ID column for each table. However, the application that connects to the database is always looking at views defined as:

SELECT * FROM Table WHERE DeletedDate IS NULL  

Would it be best to add the DeletedDate to the clustered index or is it better practice to have a nonclustered index with the DeletedDate as an include?

IIS takes longer to run queries than SMSS

Posted: 19 Apr 2013 12:23 PM PDT

I'm completely stumped on this one. We're running a pretty usual intranet app here, classic ASP frontend and SQL Server 2005 backend. Occasionally, once or twice a week, a certain stored proc takes ~50x longer to run than when I run it in Management Studio. This slowness lasts the entire day and inexplicably goes away the next day.

So essentially what's happening is this: once/twice a week, when I fire up the intranet page that runs the stored procedure and displays the results, it takes roughly 115secs to run. I then fire up SQL Server Management Studio and run the same query, which this time around takes only 3-4secs to run.

The next day, the intranet page is taking as long as SSMS to run the very same stored procedure, with nothing having been changed in the interim (not to my knowledge anyway).

Other details:

  • the IIS connects to SQL Server 2005 using Provider=sqloledb as the driver
  • the slowdown lasts the entire day, regardless of the number of active users
  • I can replicate the slowness by running the same query from Excel, using the same driver/connection string
  • I've run a trace on the SQL server, and it turns out that on those particularly slow days the duration of the stored proc is actually longer when executed via IIS than as a direct query

As such, I was wondering whether any of you have encountered similar behaviour before, whether you've found a solution to it, and what your advice would be in tackling/solving it?

Thanks for your help,

Please find Trace results in .csv format below:

EventClass,TextData,ApplicationName,CPU,Reads,Writes,Duration (s),ClientProcessID,SPID,StartTime,EndTime
12,exec spStubbornProc,Internet Information Services,"106,016","7,059,999",1,115.80,5476,85,18/04/2013 08:17:15,18/04/2013 08:19:11
12,exec dbo.spStubbornProc,Microsoft SQL Server Management Studio - Query,"3,141","146,051",0,3.40,5876,84,18/04/2013 08:20:45,18/04/2013 08:20:48
12,exec dbo.spStubbornProc,Microsoft SQL Server Management Studio - Query,"2,563","147,387",0,2.93,5876,84,18/04/2013 08:21:04,18/04/2013 08:21:07
12,exec spStubbornProc,Internet Information Services,"103,156","7,083,365",7,118.73,5476,80,18/04/2013 09:39:41,18/04/2013 09:41:40
12,exec dbo.spStubbornProc,Microsoft SQL Server Management Studio - Query,"2,406","175,029",1,2.57,5876,84,18/04/2013 10:08:58,18/04/2013 10:09:01
12,exec spStubbornProc,Internet Information Services,"112,218","7,103,267",7,114.72,5476,75,18/04/2013 14:06:26,18/04/2013 14:08:21
12,exec spStubbornProc,Internet Information Services,"93,515","7,101,229",3,93.60,6092,60,18/04/2013 19:37:02,18/04/2013 19:38:35
12,exec spStubbornProc,Internet Information Services,"2,500","148,775",1,3.42,5160,75,19/04/2013 09:16:01,19/04/2013 09:16:04
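For what it's worth, the Reads column (around 7 million via IIS versus roughly 150,000 via SSMS for the same proc) is the classic signature of two different cached plans, and differing SET options are the usual cause: SSMS runs with SET ARITHABORT ON by default while most client drivers default it OFF, so each connection type gets its own plan. A sketch to confirm whether multiple plans exist with different set_options:

SELECT cp.plan_handle, pa.value AS set_options, cp.usecounts
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE st.text LIKE '%spStubbornProc%'
  AND pa.attribute = 'set_options';

Two rows with different set_options values would point to parameter sniffing on the IIS-side plan rather than anything wrong with IIS itself.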

Access 2007 - Splitting single record into component parts similar to Microsoft Money's split transaction

Posted: 19 Apr 2013 02:57 PM PDT

I currently import a bank statement into Access 2007 and would like to be able to categorise each record into its component parts similar to the way that Microsoft Money splits a transaction. I have a Microsoft Money screen grab showing a payment to Amazon that has been split into 3 component categories, however, I can't post it as I do not have a 10 reputation.

I hope therefore that someone may have used this product to know what I am referring to.

If so, is there a way to achieve this using tables/queries?

Or, can it be done programatically?

Or, is this just a step too far for someone with my limited Access knowledge?

Tool to export data with all relational data?

Posted: 19 Apr 2013 02:11 PM PDT

Is there a tool to export data from selected rows in a table together with all related data stored in other tables, linked by relational design?

The purpose is to ease migrations of bits of data between servers for adhoc migrations. I am looking specifically for a tool for MySQL InnoDB with defined foreign keys.

Limiting user access to tables based on a ROLE

Posted: 19 Apr 2013 05:20 PM PDT

I have a SQL Server 2005 database to which I would like to LIMIT access. I have done the following:

  • Created a database role called NO_HR
  • Added the tables to this role under Securables that should be blocked
  • Set all permissions to DENY for these tables
  • Assigned my users this role under Database Users -> Role Members

I would expect the user to have whatever public access is allowed to the database but be denied access to the tables as defined in the role.

Problem is, users can access all tables as if the rules in the role are not taken into account. My question: How do I block access to tables based on role membership?

A few facts:

  • SQL Server 2005 Standard
  • Windows 2003 R2 Enterprise
  • Users are DOMAIN logins
  • Any help would be appreciated.

SG
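A sketch of the same setup in plain T-SQL (SQL Server 2005 syntax), which can help verify that each step actually applied; DENY takes precedence over GRANT unless the login is sysadmin or the user owns the object:

CREATE ROLE NO_HR;
GO
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Employees TO NO_HR;   -- hypothetical table
GO
EXEC sp_addrolemember 'NO_HR', 'DOMAIN\SomeUser';
GO

If the users can still read the denied tables, it is worth checking whether their logins are sysadmin or they own the objects, both of which sidestep DENY.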

SQL Server to compress the backup file from procedure

Posted: 19 Apr 2013 02:56 PM PDT

Is it possible to compress a backup file to .zip (or another) in a stored procedure invoked from a maintenance plan?

My maintenance plan contains these steps:

  • check DB (consistency, re_index, maybe not important)

  • full backup

  • check the backup file by using RESTORE VERIFYONLY

  • shrink database

After restore_verifyonly/shrink database I want to compress this file.
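If "compressed" rather than specifically ".zip" is acceptable, and the edition supports it (Enterprise on SQL Server 2008; Standard and up on 2008 R2), native backup compression happens inside the BACKUP statement itself, avoiding a separate step:

BACKUP DATABASE MyDb                       -- hypothetical name
TO DISK = 'D:\Backups\MyDb.bak'
WITH COMPRESSION, INIT;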

Server side trace to database

Posted: 19 Apr 2013 05:23 PM PDT

How can I set up a server side trace that dumps to a database instead of a trace file?

I set up and tested a trace configuration I want with SQL Server Profiler that dumps only the needed info straight to a separate database (without a .trc file).
When I export this trace as a SQL statement, however, I see it is configured to save to a .trc file instead of a database.

Either my googling-fu is letting me down or nobody is interested in this.
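Server-side traces can only write to a file target, but the file can be pulled into a table at any time (or on a schedule by a job) with fn_trace_gettable:

SELECT *
INTO dbo.MyTraceTable                      -- hypothetical destination table
FROM sys.fn_trace_gettable(N'C:\Traces\mytrace.trc', DEFAULT);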

Proper way to export database to other servers

Posted: 19 Apr 2013 05:24 PM PDT

I am using SSMS (I am a software developer, not a DBA) and in each application release I need to release the database with its tables (some with data, some with schema only).

I tried doing 3 things (SSMS):

  1. Using Database --> Tasks --> Generate Scripts... This is fine for small databases, but when the script is over 200 MB, users report memory problems, both when running sqlcmd.exe and from SSMS. Moreover, I have to remember to script triggers, and I am never sure whether the scripts generate an identical database.
  2. I tried using the Back Up option, but I guess this option is intended for backup rather than exporting data; it prevents me from exporting only the schema, for example. Backups also keep histories for differential restores, which is not what I intend.

  3. I tried out Script Database as... CREATE To..., but this doesn't keep triggers and data. Moreover, this script uses hardcoded paths to the .mdf and .ldf files, which change from server to server.

What is the correct approach to replicate a database to another server ?

P.S. I am new to this Stack Exchange website; I read several questions but didn't find an answer.
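For whole-database moves between servers, the usual answer is a native backup and restore rather than scripting: it carries schema, data, triggers, and everything else, and lets the file paths be remapped on the target. A sketch with hypothetical names:

BACKUP DATABASE MyDb TO DISK = 'D:\Export\MyDb.bak' WITH INIT;
-- On the target server:
RESTORE DATABASE MyDb
FROM DISK = 'D:\Export\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'E:\Data\MyDb.mdf',    -- logical file names assumed
     MOVE 'MyDb_Log'  TO 'F:\Logs\MyDb.ldf';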

Migrating from SQL Server 2005 to SQL Server 2008 R2 on a different box

Posted: 19 Apr 2013 05:24 PM PDT

We currently have a SQL Server 2005 instance hosting a lot of application specific databases (ArchiveManager, SolarWinds, Kaspersky etc). We are planning to deploy a new SQL Server 2008 R2 instance on a new server and slowly migrate across to this instance. All of the applications have fairly straightforward migration plans for changing the database location normally revolving around stopping services, detaching, attaching and rerunning installers.

All looks good so far. My only concern is with SQL Server Agent jobs and logins. Initially I was under the impression that these were all stored in the system databases and that I could move them. I understand now that this is not the case. Microsoft KB 918992 suggests running a script to generate a stored procedure, then calling it and running the generated script on the new instance. There seem to be a lot of caveats to it, though. Another option would be to use SSIS to generate a "Transfer Logins" and "Transfer Jobs" package and run that (both editions are Standard), but I am unversed in SSIS so am a touch scared to use that.

I suppose my question boils down to this:

  • Is the Microsoft KB the best way to migrate logins to the new SQL instance?
  • Is the only way to migrate tasks to the new instance via SSIS?

I am trying to get it as clear in my head as possible for my planning documentation so am very appreciative of any replies/guidance.

Replication on MySQL server

Posted: 19 Apr 2013 01:34 PM PDT

I had to stop the slave server to test something. After I started the server again, there is a problem with replication on the MySQL server.

On the problematic server

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: servera
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: servera-bin.000024
          Read_Master_Log_Pos: 808459481
               Relay_Log_File: serverb-relay-bin.000071
                Relay_Log_Pos: 4
        Relay_Master_Log_File: servera-bin.000024
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 808459481
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1129
                Last_IO_Error: error connecting to master 'replica@servera:3306' - retry-time: 60  retries: 86400
               Last_SQL_Errno: 0
               Last_SQL_Error:

on the problematic server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | serverb   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

on the main server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | servera   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

According to what I've read, I need to execute the following command on the main server:

mysql> FLUSH HOSTS;   

What will happen then? If there is an application connected to it, will it be disconnected as well?

Query taking a long time to execute; we are expecting results very quickly. Can anybody please rewrite it?

Posted: 19 Apr 2013 03:02 PM PDT

The query is taking a long time to execute; we are expecting results almost instantly. If possible, please help me rewrite it. Also suggest any indexes to add to the table, and whether other optimization is needed. If possible, explain the root cause.

Query And Explain Plan:

mysql> explain SELECT * FROM (`xgen_studentMLs`)
    LEFT JOIN `xgen_quizzes` ON xgen_studentMLs.quizid = xgen_quizzes.idnum
    WHERE `student` = 27126
      AND xgen_studentMLs.topic = 1829
      AND xgen_studentMLs.metatype = 'topic'
      AND (xgen_studentMLs.department='Nursing' OR xgen_studentMLs.department='Biology')
      AND (xgen_quizzes.book IS NULL OR xgen_quizzes.book=0)
    ORDER BY xgen_studentMLs.timestamp DESC LIMIT 100;

+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+
| id | select_type | table           | type   | possible_keys | key     | key_len | ref                          | rows | Extra                       |
+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+
|  1 | SIMPLE      | xgen_studentMLs | range  | student,mult  | mult    | 265     | NULL                         |   18 | Using where; Using filesort |
|  1 | SIMPLE      | xgen_quizzes    | eq_ref | PRIMARY       | PRIMARY | 4       | prepu.xgen_studentMLs.quizid |    1 | Using where                 |
+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+

Table structure:

mysql> show create table xgen_quizzes\G
*************************** 1. row ***************************
       Table: xgen_quizzes
Create Table: CREATE TABLE `xgen_quizzes` (
  `idnum` int(11) NOT NULL AUTO_INCREMENT,
  `userid` int(11) NOT NULL DEFAULT '0',
  `timestarted` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `timefinished` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `questionlist` mediumtext NOT NULL,
  `topics` mediumtext NOT NULL,
  `totalnum` int(11) NOT NULL DEFAULT '0',
  `completednum` int(11) NOT NULL DEFAULT '0',
  `assignment` int(11) NOT NULL DEFAULT '0',
  `department` varchar(255) NOT NULL DEFAULT '',
  `book` int(11) NOT NULL DEFAULT '0',
  `cqs` mediumtext NOT NULL,
  `metatype` varchar(25) DEFAULT 'topic',
  PRIMARY KEY (`idnum`),
  KEY `userid` (`userid`),
  KEY `assignment` (`assignment`)
) ENGINE=InnoDB AUTO_INCREMENT=13547573 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> show create table xgen_studentMLs\G
*************************** 1. row ***************************
       Table: xgen_studentMLs
Create Table: CREATE TABLE `xgen_studentMLs` (
  `student` int(11) NOT NULL,
  `topic` int(11) NOT NULL,
  `ML` float NOT NULL,
  `MI` int(3) NOT NULL DEFAULT '0',
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `department` varchar(255) NOT NULL,
  `metatype` varchar(25) DEFAULT 'topic',
  `quizid` int(11) NOT NULL DEFAULT '0',
  KEY `student` (`student`),
  KEY `mult` (`topic`,`department`,`student`,`timestamp`,`ML`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
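A hedged suggestion, assuming student, topic, and metatype are the selective equality predicates: an index that covers those equalities and then timestamp could let MySQL read the rows already in ORDER BY order and skip the filesort. The department OR cannot participate in the index range, but it is a cheap residual filter:

ALTER TABLE xgen_studentMLs
  ADD INDEX idx_student_topic_meta_ts (student, topic, metatype, timestamp);

Worth re-running EXPLAIN afterwards to verify the new index is chosen and "Using filesort" disappears.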

Merge Replication identity field issues

Posted: 19 Apr 2013 12:21 PM PDT

One of our clients is using our software with merge replication of a database on a SQL Server 2008 R2 machine. There are two production environments in separate geographical locations, only one of which is live at any one time: basically one live and one on standby. Only the live database is updated by the applications. Every couple of months they fail over between the datacentres and the standby environment becomes the live centre. There is an instance of SQL Server 2008 in each datacentre, and merge replication is used to keep them in sync. This was all working fine until the beginning of the year, when we started getting replication errors on some lTID columns, in various tables, that have the Identity property set.

The errors were like this one:

The insert failed. It conflicted with an identity range check constraint in database 'GateMain', replicated table 'dbo.tGateCalcsLog', column 'lTID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Then, after the last failover, we noticed an issue with the lTID values in one specific table. Our application relies on the lTID values always having been assigned in order, such that the highest lTID value is always the newest entry in the table. We've found that, due to how the identity ranges are managed by replication, after a failover the lTID range of the now-live database server may contain values lower than those already present in the table. Is there a way to manage this in merge replication so we can guarantee that the next identity value allocated to the lTID column is greater than any lTID currently in the table? Or do we need to use a different type of replication, or possibly mirroring?

MySQL Slaves lag behind master

Posted: 19 Apr 2013 02:02 PM PDT

I have one master and four slaves. Sometimes all my slaves lag behind the master. I have implemented the heartbeat for monitoring replication lag. Now I am trying to find why the slaves are lagging behind the master.

I saw the slow queries (for today) on the master and I found that the slowest query (DML) was taking 138 seconds. But the slaves were lagging about 1400 seconds and there were also no slow queries on the slaves for DML (update, delete, insert, etc.).

Points to be taken into consideration:

  1. All tables are InnoDB.
  2. 68 GB of RAM (Master as well as slaves).
  3. Data size about 1 TB.
  4. The master and slaves have been running for a long time.

What may be the reason for lag?

MySQL replication between VPS and shared host

Posted: 19 Apr 2013 01:02 PM PDT

I have a VPS where I have ssh access and shared host with CPanel. Both have MySQL.

I need to create a master/slave setup between the two servers. As per this tutorial, How To Set Up Database Replication In MySQL, I will need to modify the my.cnf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

I have root access with ssh login, where I can modify the /etc/my.cnf file.

  • Shared host

    Cpanel with phpmyadmin.

Partition of tablespace

Posted: 19 Apr 2013 08:02 PM PDT

My Postgres server is running on a Windows server and three tablespaces have been created. The PostgreSQL server is installed on the C: drive and the tablespaces are on the D: and E: drives. So how do I take a backup and restore it? Is it similar to a normal backup, and when restoring on another machine, do I need to have the same drive setup or can I restore to a single drive? From the tablespace folder, is it possible to retrieve information for only one table, which is huge?

SQL Server equivalent to functionality of Oracle RAC?

Posted: 19 Apr 2013 12:55 PM PDT

I did some Googling and couldn't find an answer to this question more recent than a few years ago, so I thought I'd ask. Oracle's RAC feature offers load-balancing for both read and write transactions, as well as scale-out and high-availability without downtime (at least, as I understand it - we're about to deploy our first databases that use RAC, so we'll see how it goes).

Is there any SQL Server feature set (or third party component you could install on top) that delivers equivalent functionality? We've always used Windows clustering, where a failover event causes about 20-30 seconds of SQL downtime - always tolerable, but not ideal. Now, with AlwaysOn in SQL 2012, SQL Server shrinks that to about 15 seconds and adds the concept of read-only-secondary databases, but they still require that write transactions are choked through a single connection point (much improved, since many transactions are just read, but still not really load balancing), and in the case of a node failure or the need to patch, there's still downtime.

I suppose it's just more curiosity - I feel like this is the only area that SQL Server falls behind Oracle (at least among the features I've personally seen used). I wanted to see if there are any options out there to close that gap and possibly improve our own SQL Server deployment while we wait for Microsoft's equivalent feature to be added - maybe in SQL 2014/2015?

Call a stored procedure from a trigger

Posted: 19 Apr 2013 04:56 PM PDT

I have created a stored procedure in mysql using the following syntax.

DROP PROCEDURE IF EXISTS `sp-set_comment_count`;

DELIMITER $$

CREATE PROCEDURE `sp_set-comment_count` (IN _id INT)
BEGIN
    -- AC - AllCount
    DECLARE AC INT DEFAULT 0;

    SELECT COUNT(*) AS ac
      INTO AC
      FROM usergroups AS ug
 LEFT JOIN usergroup_comments AS ugm ON ugm.`gid` = ug.`id`
 LEFT JOIN mediagallery AS dm ON ugm.mid = dm.`id`
     WHERE dm.`status` NOT IN (200, 201, 202, 203, 204, 205)
       AND ug.`id` = _id;

    UPDATE usergroups
       SET allCount = AC,
     WHERE usergroups.`id` = _id;

END $$
DELIMITER ;

FYI I've greatly simplified the stored procedure but I do know it works without any issues.

What I'd like to be able to do is set up a trigger from usergroup_comments that works like this.

DROP TRIGGER IF EXISTS `usergroups_comments_insert`

CREATE TRIGGER `usergroups_comments_insert` AFTER INSERT ON `usergroups_comment`
    FOR EACH ROW
    BEGIN
        CALL sp-set-comment_count(NEW.`gid`);
    END;

But for some reason, every time I do, MySQL throws an error at me that's less than helpful, stating that there's a syntax error on line 4.

I've combed through the mysql documentation and found some information on restrictions of triggers but found it to be fairly convoluted.

http://dev.mysql.com/doc/refman/5.1/en/stored-program-restrictions.html

Any ideas would be helpful.
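For what it's worth, three things in the posted code would each upset the parser: the procedure name is spelled three different ways (`sp-set_comment_count`, `sp_set-comment_count`, `sp-set-comment_count`), and hyphens are illegal in unquoted identifiers; the UPDATE has a trailing comma before WHERE (SET allCount = AC,); and the trigger DDL has no DELIMITER change around its body. The table name also differs between the trigger (usergroups_comment) and the procedure (usergroup_comments). A corrected sketch of the trigger, assuming one consistent, hyphen-free procedure name:

DROP TRIGGER IF EXISTS `usergroups_comments_insert`;

DELIMITER $$

CREATE TRIGGER `usergroups_comments_insert` AFTER INSERT ON `usergroup_comments`
FOR EACH ROW
BEGIN
    CALL sp_set_comment_count(NEW.`gid`);   -- consistent name assumed
END $$

DELIMITER ;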

How to safely change MySQL innodb variable 'innodb_log_file_size'?

Posted: 19 Apr 2013 01:16 PM PDT

So I'm fairly new to tuning InnoDB. I'm slowly changing tables (where necessary) from MyIsam to InnoDB. I've got about 100MB in innodb, so I increased the innodb_buffer_pool_size variable to 128MB:

mysql> show variables like 'innodb_buffer%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| innodb_buffer_pool_size | 134217728 |
+-------------------------+-----------+
1 row in set (0.00 sec)

I then went to change the innodb_log_file_size value (the example my.cnf on MySQL's InnoDB configuration page suggests setting the log file size to 25% of the buffer size), so now my my.cnf looks like this:

# innodb
innodb_buffer_pool_size = 128M
innodb_log_file_size = 32M

When I restart the server, I get this error:

110216 9:48:41 InnoDB: Initializing buffer pool, size = 128.0M
110216 9:48:41 InnoDB: Completed initialization of buffer pool
InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
InnoDB: than specified in the .cnf file 0 33554432 bytes!
110216 9:48:41 [ERROR] Plugin 'InnoDB' init function returned error.
110216 9:48:41 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.

So my question: Is it safe to delete the old log_files, or is there another method to change the innodb_log_file_size variable?
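The old log files can be removed safely, but only after a clean shutdown; InnoDB then recreates them at the configured size on startup. A hedged outline of the usual sequence:

-- 1. Before stopping mysqld, make sure the shutdown will flush everything:
SET GLOBAL innodb_fast_shutdown = 1;   -- the default; 0 is even more conservative
-- 2. Stop mysqld cleanly (service stop or mysqladmin shutdown).
-- 3. Move ib_logfile0 and ib_logfile1 out of the data directory
--    (move rather than delete, so they can be put back if anything goes wrong).
-- 4. Start mysqld with the new innodb_log_file_size; InnoDB creates fresh
--    log files at the new size.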

How do you document your databases?

Posted: 19 Apr 2013 02:52 PM PDT

I find that most of my clients are not documenting their databases at all and I find that pretty scary. To introduce some better practice I would like to know what tools/process you are using.

  • How do you document your database? (SQL-Server)
  • What tool do you use?
  • Documentation Storage Format for database schema/meta-data?
    • Word documents
    • Excel spreadsheet
    • Plain Text
  • Documentation process or policies?

I am not talking about reverse engineering / document a existing database, but mainly on the documentation best practices while you develop your system/database.
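On the SQL Server side, one low-friction habit is keeping descriptions inside the database as extended properties, where schema-documentation tools can harvest them; a sketch:

EXEC sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'One row per customer order line.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'OrderLines';   -- hypothetical table

The same procedure documents individual columns by adding @level2type = N'COLUMN'.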

[SQL Server Data Warehousing] Forms templates for documenting DW analysis & design, + naming conventions


You need to check some structural principles, like how they implemented Slowly Changing Dimensions, incremental load, late-arriving fact tables, and any other challenging parts of data warehouses. You will get some good ideas about how these concepts should be handled after reading the Kimball books (The Data Warehouse Toolkit and the ETL book).


For naming conventions, there are several that you can find by googling. For example, this is a good naming convention for SSIS:


http://consultingblogs.emc.com/jamiethomson/archive/2006/01/05/SSIS_3A00_-Suggested-Best-Practices-and-naming-conventions.aspx


and for SSAS:


http://cwebbbi.wordpress.com/2010/04/07/naming-objects-in-analysis-services/



http://www.rad.pasfu.com




social.technet.microsoft.com/Forums

[SQL Server Data Warehousing] Recovery model for DWH databases


Using the simple recovery model and relying on a mixture of, say, differential backups during the week and full backups once a week can very significantly reduce maintenance overhead, especially when the data warehouse reaches many gigabytes in size, provided (as Mike says) the data warehouse is not populated with new data outside the nightly ETL. So I find in practice that if you perform a backup immediately after your ETL, you have the most optimised solution in most scenarios.


http://msdn.microsoft.com/en-us/library/ms175987(v=sql.105).aspx


Kind Regards,


Kieran.



If you have found any of my posts helpful then please vote them as helpful. Kieran Patrick Wood MCTS BI, PGD SoftDev (Open), MBCS http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood




social.technet.microsoft.com/Forums

[SQL Server] Efficiently Reuse Gaps in an Identity Column




sqlteam.com
