Friday, July 12, 2013

[SQL Server 2008 issues] Can't Reduce VLFs


Can't Reduce VLFs

Posted: 11 Jul 2013 07:19 PM PDT

Hi guys, I have a job that tries to make sure every DB has < 50 VLFs. It shrinks the log and then regrows it to its original size in one step. I have one DB that I can't get under 300 VLFs. I think it's because I can't shrink the log smaller than its original size, but I'm not sure how else I can get to my 50 VLFs. Any ideas, guys? Cheers
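For reference, the shrink/regrow cycle such a job performs usually looks like this (a minimal sketch; MyDB, the logical log file name MyDB_log, and the 8000 MB target size are placeholders):

[code="sql"]
USE MyDB;

DBCC LOGINFO;                   -- each row returned is one VLF

DBCC SHRINKFILE (MyDB_log, 1);  -- shrink as far as the active log allows

ALTER DATABASE MyDB             -- regrow in one step to keep the VLF count low
    MODIFY FILE (NAME = MyDB_log, SIZE = 8000MB);
[/code]

DBCC SHRINKFILE can only release space back to the start of the active portion of the log, so if the active VLF sits near the end of the file, a log backup (or a CHECKPOINT in simple recovery) followed by another shrink is often needed before the regrow; that is the usual reason a database stays stuck above the VLF target.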

What is happening at END of an insert/delete?

Posted: 11 Jul 2013 08:00 AM PDT

I have been inserting large numbers of records from a "live" table into an archive table in a query window in SSMS. I'm doing this in batches and know in advance how many records will be inserted. Once the records are in the archive table, I'm deleting them from the live table. I'm dealing with roughly 50,000,000 records at a time. In another query window, I'm monitoring the live and archive tables to get a sense of where the process is based on the number of rows in the tables. I'm using:

sp_spaceused 'MySchema.MyTable'

What I noticed is that sp_spaceused will return a "rows" value indicating my insert or delete is done; however, the query window where the insert or delete is occurring continues to indicate "Executing…" for several more minutes. In one example, when performing the insert, my "rows" value indicated all the rows in question were inserted in 25 minutes, however the query window continued to chug along and the final time on the insert was 58 minutes. If all the rows were in the table at the 25 minute mark, what exactly was going on under the hood from minute 26 to minute 58? I'm not a DBA and can only think of a few things:

- Is it completing writing to the transaction log? I would have thought it would be doing that all along.
- Is it cleaning up from the internal "inserted" table?
- Is it cleaning up tempdb?
- Is it writing to disk? I would think it would have been doing that all along, and at the point I saw all the rows in the table I would have expected the disk space to have been allocated.
- Is it updating statistics? But when I run DBCC SHOW_STATISTICS ('MySchema.MyTable', PK_MyPKName) WITH STAT_HEADER, I see that the statistics are stale.

Regardless of the methodology (this was just a quick, get-the-old-data-archived kind of thing), I would love to understand what's actually happening here!
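One way to see what those extra minutes are spent on is to check the session's wait type from another window while it still shows "Executing…"; a minimal sketch (session_id 53 is a placeholder for the spid shown in the query window's status bar):

[code="sql"]
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,   -- e.g. WRITELOG while log records are being hardened
       r.wait_time,
       r.percent_complete
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 53;
[/code]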

How can I find the total free memory on the Windows machine where SQL Server instances are installed?

Posted: 10 Jul 2013 11:33 PM PDT

Hello, I need to find the free memory on the machine where a SQL Server 2008 R2 instance is installed, and I need to do it via T-SQL. Are the views sys.dm_os_sys_info or sys.dm_os_sys_memory about this or not? I've found different opinions on the internet, and I've been told that with this query I can only find the available RAM:

select available_physical_memory_kb
from master.sys.dm_os_sys_memory

Is this true? And how can I find the total free space on the C: drive with a query? Thanks a lot.
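For what it's worth, a minimal sketch covering both halves of the question: sys.dm_os_sys_memory for OS-level memory, and the undocumented but long-standing xp_fixeddrives for per-drive free space:

[code="sql"]
-- OS-level memory (values in KB)
SELECT total_physical_memory_kb,
       available_physical_memory_kb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- Free space per fixed drive, in MB
EXEC master..xp_fixeddrives;
[/code]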

SQL Server Agent is not running - SQL Server 2008 R2

Posted: 16 Jul 2012 08:00 PM PDT

Hello: We have Windows 7 with SQL Server 2008 R2 Express Edition 10.5 installed. The Database Engine and SQL Browser services are running perfectly. The problem is that SQL Server Agent is not running. We have provided all the permissions to the logged-in users. Can anyone help us resolve the problem? Thanks in advance! Regards, Ram

XML Formatting Using SQL

Posted: 11 Jul 2013 10:04 AM PDT

Can someone recommend how to create the following XML file with data that I already have in my database? There will be many transaction elements, e.g. <transaction id-type="VENDOR-CUSTOMER" id="76" seq="2"></transaction>. Here's the format...

[code="xml"]
<?xml version="1.0" encoding="ISO-8859-1"?>
<vip-customer-load created-date-time="2013-06-26 14:55" origin="ISP">
  <data>
    <transaction id-type="VENDOR-CUSTOMER" id="4823798" seq="4">
      <customer service="ISP" country="US" action="ADD">
        <vendor-account-id>SOMEDATA</vendor-account-id>
        <customer-status>ACTIVE</customer-status>
        <credit-decision></credit-decision>
        <teamid>708</teamid>
        <name>
          <first>JOE </first>
          <middle></middle>
          <last>BLOW</last>
          <suffix></suffix>
        </name>
        <language>en</language>
        <currency>USD</currency>
        <ssn></ssn>
        <startdate>2013-03-31</startdate>
        <address>
          <street>123 Main St</street>
          <apt></apt>
          <double-locality></double-locality>
          <city>HANALEI</city>
          <state>HI</state>
          <country>US</country>
          <postal-code>84098</postal-code>
        </address>
        <phone>8009183278</phone>
        <alternate-phone></alternate-phone>
        <residential>Y</residential>
        <entered-by>ONLINE</entered-by>
        <loa-date>2013-03-31</loa-date>
        <rate-plan>09</rate-plan>
        <enrollment-method></enrollment-method>
        <physical-address></physical-address>
        <rep-verified></rep-verified>
        <account-number></account-number>
        <business-name></business-name>
        <reason-code></reason-code>
        <is-rep></is-rep>
      </customer>
    </transaction>
  </data>
  <control>
    <customer service="ISP" country="US">
      <transaction-count>4</transaction-count>
      <add-count>1</add-count>
      <change-count>3</change-count>
      <delete-count>0</delete-count>
      <vendor-system-total-count>9082</vendor-system-total-count>
      <vendor-system-active-count>150</vendor-system-active-count>
      <vendor-system-incomplete-count>0</vendor-system-incomplete-count>
    </customer>
  </control>
</vip-customer-load>
[/code]
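One common approach in SQL Server is FOR XML PATH, which lets you name attributes and nested elements explicitly in the column aliases. A minimal sketch producing just the transaction/customer skeleton (the table dbo.Customers and its columns are assumptions, not the actual schema):

[code="sql"]
SELECT
    'VENDOR-CUSTOMER'  AS [@id-type],
    c.CustomerId       AS [@id],
    c.Seq              AS [@seq],
    'ISP'              AS [customer/@service],
    'US'               AS [customer/@country],
    'ADD'              AS [customer/@action],
    c.VendorAccountId  AS [customer/vendor-account-id],
    c.FirstName        AS [customer/name/first],
    c.LastName         AS [customer/name/last]
FROM dbo.Customers AS c
FOR XML PATH('transaction'), ROOT('data');
[/code]

The outer vip-customer-load element and the control block can be built the same way, either with nested FOR XML subqueries or by concatenating the fragments.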

conditional table selection

Posted: 11 Jul 2013 04:35 PM PDT

I have a problem while selecting from table_name:

A  B   C
2  ab  ab
2  bc  bc
3  ac  ac
4  as  as

This is my table, and I want to execute

select * from table_name

only when it has more than 2 rows. If count(*) of table_name is 2, I don't want to print table_name. For this I tried

SELECT CASE (SELECT COUNT(*) FROM table_name)
    WHEN 2 THEN "Min Row Cond not valid"
    ELSE (SELECT * FROM table_name AS abc)
END

but I am getting

Error Code: 1241
Operand should contain 1 column(s)

How do I solve this problem? Thank you
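For what it's worth, a CASE expression must evaluate to a single scalar value, so it cannot stand in for a whole SELECT *; that is what "Operand should contain 1 column(s)" is complaining about (error 1241 is actually a MySQL error code). A minimal T-SQL sketch of the two-step alternative:

[code="sql"]
IF (SELECT COUNT(*) FROM table_name) > 2
    SELECT * FROM table_name;
ELSE
    SELECT 'Min Row Cond not valid' AS message;
[/code]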

Distribution database cleanup

Posted: 11 Jul 2013 03:11 PM PDT

Hi All, I'm going to configure a distribution database with a retention period of 24 hours. Does the cleanup job remove transactions that are older than 24 hours but not yet replicated, or does the cleanup job only remove transactions that have been replicated? Also, please let me know how to monitor the performance of a distribution database. Thanks for your help.

A little help needed with simple Custom Function

Posted: 11 Jul 2013 02:22 AM PDT

This should be a simple task, but I think I'm tripping over the syntax. I have data from a database that is fed out onto a web page (aspx). For reasons out of my control, the data comes to me "dirty". 99% (but not 100%) of the data comes with "junk" characters appended to the end of the string. So, a full name might look like "Doe, John, H *^" where it should look like "Doe, John, H". I could do this on the front end and rewrite some of the web page's logic, but I'm trying to handle this on the back end. It would seem I could write a simple function to strip out whatever junk chars I see fit to remove. So, in my query, I could do something like:

[i]SELECT dbo.[b]MyCustomFunction[/b](FieldName), Field2, Field3 FROM... etc...[/i]

Then, if I've written the function correctly, it'll just strip out the bad chars. I've done this many times before with VB. The custom function on SQL Server is where I'm having trouble. Here's what I have so far:

CREATE FUNCTION fncStripBadChars (@strInputString varchar(150))
RETURNS varchar(150)
AS
BEGIN
    [b]@strInputString = Replace((@strInputString), ", * *", "")[/b]
    RETURN @strInputString
END
GO

It's the bolded line above where I'm having a little trouble; I'm getting an incorrect syntax error. Any tips or pointers? See what I'm doing wrong? I should also mention that ", * *" is one of the few strings I'm trying to replace with an empty string.
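For reference, the two syntax problems are that assignments inside a T-SQL function need SET (or SELECT), and that string literals use single quotes, not double quotes. A minimal corrected sketch:

[code="sql"]
CREATE FUNCTION dbo.fncStripBadChars (@strInputString varchar(150))
RETURNS varchar(150)
AS
BEGIN
    -- SET is required for assignment; single quotes delimit T-SQL strings
    SET @strInputString = REPLACE(@strInputString, ', * *', '');
    RETURN @strInputString;
END
GO
[/code]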

Generate dynamic script

Posted: 19 Aug 2011 04:40 AM PDT

Hello, I want to perform the action below on 150 databases in the same SQL instance:

ALTER DATABASE [DB_NAME] SET RECOVERY SIMPLE

How do I create a dynamic script to change the recovery model to simple for all 150 databases? Thanks
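A minimal sketch of one common pattern: concatenate one ALTER DATABASE statement per user database from sys.databases, inspect the output, then execute it (adjust the WHERE clause to taste):

[code="sql"]
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER DATABASE ' + QUOTENAME(name) + N' SET RECOVERY SIMPLE;'
    + CHAR(13) + CHAR(10)
FROM sys.databases
WHERE database_id > 4                    -- skip system databases
  AND recovery_model_desc <> 'SIMPLE';

PRINT @sql;   -- review first
-- EXEC (@sql);
[/code]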

Indexing small tables: what is small enough to avoid indexes altogether?

Posted: 11 Jul 2013 06:45 AM PDT

From Books Online:

"Indexing small tables may not be optimal because it can take the query optimizer longer to traverse the index searching for data than to perform a simple table scan. Therefore, indexes on small tables might never be used, but must still be maintained as data in the table changes."

I was having a debate with a co-worker the other day, and we were talking about a small table (heap) that had absolutely no indexes. There are 35k rows in just over 1,000 pages. My co-worker was of the opinion that this was a "small" table. This is a 90+% read lookup table (heap), and I was able to show that after creating a clustered index and a single non-clustered composite index of three key columns and a few included columns, I could cover the query and reduce the 1,048 logical reads of a table scan to a 2 logical read index seek.

So, this specific example aside, and given that mileage will vary, my question is: are there general rules of thumb for the threshold of a "small" table?

I started thinking in terms of storage internals: if a table is sized so it fits in under 8 pages (a single extent), SQL Server would effectively read the entire table during a normal disk read, and there would be absolutely no benefit from indexes. It seems like once we cross the 8-page threshold, adding at least one non-clustered index should prove beneficial. I know now we're venturing into a bit of the theoretical realm, but I'm curious what the technical boundaries are for when we can expect to see benefits.

For the purposes of this (theoretical) discussion, let's leave out the "tipping point" and assume any non-clustered indexes we may create are covering, and that queries are not predicated on the clustering key. There may be other assumptions necessary for a debate, so please feel free to add them.

Thanks, Jared
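The logical-read comparison from the debate is easy to reproduce; a minimal sketch (the table and column names are placeholders):

[code="sql"]
DECLARE @value int = 42;   -- placeholder predicate value
SET STATISTICS IO ON;

-- Heap: table scan (~1,048 logical reads in the example above)
SELECT KeyCol1, KeyCol2, IncludedCol
FROM dbo.LookupTable
WHERE KeyCol1 = @value;

CREATE NONCLUSTERED INDEX IX_LookupTable_Covering
    ON dbo.LookupTable (KeyCol1, KeyCol2)
    INCLUDE (IncludedCol);

-- Covering index: index seek (~2 logical reads)
SELECT KeyCol1, KeyCol2, IncludedCol
FROM dbo.LookupTable
WHERE KeyCol1 = @value;
[/code]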

Incorrect Syntax?

Posted: 11 Jul 2013 03:55 AM PDT

I'm trying to write an sp to query an Oracle db through openquery. The Oracle portion of the query I've written works when run through Oracle SQL Developer, but the overall procedure returns an error when I try to run it through SSMS.

[code="other"]
declare @rec_id int
set @rec_id = 263703 -- for testing purposes
declare @query nvarchar(4000)
declare @sql nvarchar(4000)
declare @linked_server nvarchar(4000)
set @linked_server = 'VTRACK'

SET @query = 'SELECT * FROM OPENQUERY(' + @query + ','
SET @sql = '''select i.id, sys.utl_raw.cast_to_varchar2(dbms_lob.substr( i.raw_data, dbms_lob.getlength(i.raw_data), 1)) AS message from fmc_in i inner join fmc_hail_messages_gui_v m on i.id = substr(m.creator_name,instr(m.creator_name,'':'')+1,length(m.creator_name)-9) where m.id = '' + @rec_id + '')'''

exec (@query+@sql)
[/code]

The error I get is

Incorrect syntax near 'select i.id, sys.utl_raw.cast_to_varchar2(dbms_lob.substr( i.raw_data, dbms_lob.getlength(i.raw_data), 1)) AS message from fmc_'

but I don't see what's wrong. Is there another way to approach this?
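Two things stand out in the posted batch, for what it's worth: the OPENQUERY line concatenates @query (still NULL at that point) rather than @linked_server, and @rec_id is an int, so it needs a CAST before it can be concatenated. A hedged corrected sketch:

[code="sql"]
DECLARE @rec_id int = 263703;
DECLARE @linked_server sysname = N'VTRACK';
DECLARE @oracle_sql nvarchar(4000);
DECLARE @query nvarchar(4000);

-- Build the pass-through text first; '' inside the literal reaches Oracle as '
SET @oracle_sql =
      N'select i.id, sys.utl_raw.cast_to_varchar2(dbms_lob.substr(i.raw_data, '
    + N'dbms_lob.getlength(i.raw_data), 1)) AS message '
    + N'from fmc_in i inner join fmc_hail_messages_gui_v m '
    + N'on i.id = substr(m.creator_name, instr(m.creator_name, '''':'''') + 1, '
    + N'length(m.creator_name) - 9) '
    + N'where m.id = ' + CAST(@rec_id AS nvarchar(20));   -- assumes m.id is numeric

SET @query = N'SELECT * FROM OPENQUERY(' + @linked_server
           + N', ''' + @oracle_sql + N''')';

EXEC (@query);
[/code]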

Adding a new column to a table with 16 million rows

Posted: 11 Jul 2013 08:05 AM PDT

We have a table with 16 million records, and this table is also replicated. We want to add a new column to this table. What harm can that do?
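If it helps, adding a nullable column with no default is a metadata-only change in SQL Server 2008, so it should be quick even at 16 million rows, and transactional replication propagates the schema change to subscribers by default via the "replicate schema changes" publication option. A minimal sketch (names are placeholders):

[code="sql"]
ALTER TABLE dbo.BigTable ADD NewColumn int NULL;
[/code]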

reporting issue

Posted: 11 Jul 2013 08:11 AM PDT

Hey everyone. I've just been given a report to do at work. I studied SQL at uni and have also done the first part of the Oracle training, but this one is taxing my abilities a bit. Essentially the problem is that we have an annual income figure in the database, and the requester wants to see it broken down into a monthly income stream. Some payments are made monthly; others are made every 4 weeks. Any ideas to get me started on how to go about a report that splits such annual figures across months?

Do I have to use cursor processing?

Posted: 11 Jul 2013 04:24 AM PDT

I have to grab this [b]FilemakerID[/b] based on trainer ID from table1:

TrainerID  FilemakerID
1078       5342
1138       8739
1810       4609

and associate the FilemakerID with a course via table2:

EntryID  TrainerID  CourseNums
138      1138       37
139      1138       120
140      1138       119
654      1078       150
655      1078       151
656      1078       38
657      1078       36
658      1810       37
659      1810       159
660      1810       111

and combine the course numbers (separated by commas) associated with the same FilemakerID, inserting the data into table3:

FilemakerID  CourseNums
5342         36, 37, 38, 150, 151
8739         37, 119, 120
4609         37, 159, 111

Can this be done without putting data into temp tables or processing with a cursor to combine and separate the CourseNums by commas?
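In SQL Server, the usual set-based way to build a comma-separated list is FOR XML PATH with STUFF; a minimal sketch assuming the table and column names above (the ORDER BY is optional):

[code="sql"]
INSERT INTO table3 (FilemakerID, CourseNums)
SELECT t1.FilemakerID,
       STUFF((SELECT ', ' + CAST(t2.CourseNums AS varchar(10))
              FROM table2 AS t2
              WHERE t2.TrainerID = t1.TrainerID
              ORDER BY t2.CourseNums
              FOR XML PATH('')), 1, 2, '') AS CourseNums
FROM table1 AS t1;
[/code]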

I want to insert the count() into a new table I created

Posted: 11 Jul 2013 06:09 AM PDT

I have this query running perfectly fine, and the result set looks like this:

[quote]
NumberRows  | Time_stamp
----------- | -----------------------
940         | 2013-07-11 18:00:00.357
[/quote]

Now I want to insert these two column values into a new table, and I can't figure out how to do this.

DECLARE @dCurrentTime DATETIME
DECLARE @dCurrentTimeMinus5 DATETIME
--Declare @counttotal int

SET @dCurrentTime = GETDATE()
SET @dCurrentTimeMinus5 = DATEADD(minute, -5, @dCurrentTime)

--insert into UsersLoggedIn (NumberRows,@dCurrentTime);

SELECT Count(*) As NumberRows, @dCurrentTime as Time_stamp
FROM (
    SELECT lID
    FROM SessionState_Variables
    WHERE dLastAccessed BETWEEN @dCurrentTimeMinus5 AND @dCurrentTime
    GROUP BY lID
) As SessionsTable

Regards
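For what it's worth, INSERT ... SELECT handles this directly; a minimal sketch assuming a target table UsersLoggedIn(NumberRows int, Time_stamp datetime):

[code="sql"]
INSERT INTO UsersLoggedIn (NumberRows, Time_stamp)
SELECT COUNT(*) AS NumberRows,
       @dCurrentTime AS Time_stamp
FROM (
    SELECT lID
    FROM SessionState_Variables
    WHERE dLastAccessed BETWEEN @dCurrentTimeMinus5 AND @dCurrentTime
    GROUP BY lID
) AS SessionsTable;
[/code]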

SQL Server 2008R2 linked from a SQL2012 server problem

Posted: 11 Jul 2013 05:51 AM PDT

I have a 2012 server with a link to a 2008 R2 server. When I run a simple select from the 2012 query window and specify the fully qualified linked server (2008 R2), the query runs fine. When I log into the 2008 R2 server and launch SSMS (2008 R2) but connect to the 2012 server and run the same query, I get the 18456 'NT Authority\Anonymous Logon' error. Further, when I connect to the 2012 server from a client PC using SSMS (2012), I get the same error when I run the query against the linked 2008 R2 server, using the fully qualified name. Any thoughts? Thanks. Joe

StarQuest Data Replicator (SQDR) - anyone use it? anyone even heard of it?

Posted: 11 Jul 2013 06:23 AM PDT

I'm setting up a proof of concept where I need to replicate data between DB2 on z/OS, DB2 LUW on Windows, and MS SQL Server 2008. We actually already have IBM WebSphere MQ and DB2 Replication (SQL/Q Replication) in place to do this, but given that it's in need of upgrading, I'm investigating other options in case there's something simpler to maintain. StarQuest Data Replicator seemed to bubble up to the top of my Google search, and I've found a few articles and questions about it sprinkled around various DB2 and MS SQL forums. The problem is that the product description is exactly what we're looking for, and given that the company has been around for over a decade, I'm shocked I haven't heard of it, nor can I find much about them on any tech forum. Brian Knight did [url=http://www.sqlservercentral.com/articles/Distributed+Queries/starsql/167/]an article[/url] about it back in 2001, but that's all I can find on here. Any thoughts on this? Thanks for any suggestions. Sam

How can I identify the local Windows service that corresponds to a SQL spid?

Posted: 11 Jul 2013 01:29 AM PDT

Good morning everyone. Our environment: SQL Server 2008 R2 SP2 on Windows 2008 R2 Enterprise SP1, a 2-node active/passive cluster, and about 200 databases of roughly 500 MB each.

Approximately every 10-15 minutes I see multiple connections (per database) being established; they run for a minute and disconnect. It's not really causing any performance issues, but 600-800 connections to the server that come and go are driving me crazy. I cannot identify where they are coming from. I do see that some local process is initiating them, but I cannot figure out which one. All I see is:

SQL User: NT AUTHORITY\SYSTEM
Host name: local physical node name
Program: .Net SqlClient Data Provider
Net Library: TCP
Win Domain: NT AUTHORITY
Win User: SYSTEM

These look like SQL Server related information collections, and we do have 3rd-party monitoring tools, which I have currently disabled. Can these be just native SQL Server status-information collections? These are the queries the connections run:

Connection 1 (the captured text is truncated at the end):

SELECT [dbfiles].[physical_name] AS DBFilesPhysicalName,
       [mediafamily].[physical_device_name] AS BackupFilesPhysicalName
FROM msdb.dbo.backupmediafamily [mediafamily]
JOIN (SELECT [backupsets].media_set_id, [backupsets].database_name
      FROM msdb.dbo.backupset [backupsets]
      JOIN (SELECT TOP 1 [backupsets].media_set_id, [backupsets].database_name, [backupsets].backup_finish_date
            FROM msdb.dbo.backupset [backupsets]
            JOIN sys.databases [databases]
              ON [backupsets].[database_name] = [databases].[name]
             AND [databases].[is_in_standby] = 0
             AND [databases].[source_database_id] IS NULL
             AND ([databases].name = 'master'
                  OR DATEDIFF(ss, [databases].[create_date], [backupsets].[database_creation_date]) = 0)
            WHERE [databases].[database_id] = DB_ID()
              AND [backupsets].type = 'D'
            ORDER BY [backupsets].backup_finish_date DESC) AS [latest_backupset]
        ON [backupsets].[database_name] = [latest_backupset].[database_name]
       AND [backupsets].[backup_finish_date] >= [latest_backupset].[backup_finish_date]) AS latest_backups
  ON [latest_ba

Connection 2:

select * from sys.master_files where database_id = '20' and type in (0,1)

Connection 3:

(@DatabaseName nvarchar(10))
IF (Exists(SELECT * FROM (SELECT name as 'DatabaseName' FROM sys.databases WHERE name NOT IN ('master','model','tempdb','msdb') and source_database_id IS NULL) AS UserDatabase WHERE UserDatabase.DatabaseName = @DatabaseName))
BEGIN
    SELECT distinct obj.name
    FROM sys.objects AS obj
    JOIN sys.stats stat ON stat.object_id = obj.object_id
    CROSS apply sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
    WHERE obj.type = 'U' AND modification_counter > 1000
END
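One way to pin these down, for what it's worth: sys.dm_exec_sessions exposes host_process_id, the Windows PID of a local client process, which can then be matched to a service on the node. A minimal sketch:

[code="sql"]
SELECT s.session_id,
       s.host_name,
       s.program_name,
       s.login_name,
       s.host_process_id   -- Windows PID of the client process
FROM sys.dm_exec_sessions AS s
WHERE s.login_name = N'NT AUTHORITY\SYSTEM';

-- Then, at a command prompt on that node, map the PID to a service:
-- tasklist /svc /fi "PID eq <host_process_id>"
[/code]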

Default trace enabled

Posted: 11 Jul 2013 03:57 AM PDT

Hi Friends, how do I clear the older logs captured in the default trace?

backup via .mdf and .ldf ok?

Posted: 11 Jul 2013 02:12 AM PDT

I was asking the network/system guy whether a third-party app's databases were being backed up. He did some checking and said "Yes", but I suspect he meant that it's the .mdf and .ldf files that are being backed up (copied). A quick search did not find any .bak files for those databases. I think I know (at least part of) the answer, but wanted to get some thoughts from people on this forum. 1) Is it OK to rely on a copy of the .mdf and .ldf files as a backup? 2) If the current version becomes corrupt, can you go back to a previous copy and restore that?

conditional table selection

Posted: 11 Jul 2013 12:18 AM PDT

How do I select from a particular table only after a condition is true? I want to select from the table only if it has more than two rows; otherwise I want to print "less number of rows". How can I solve this problem? Thank you

Deadlock on Import

Posted: 10 Jul 2013 09:14 PM PDT

Good day. We are trying to import data into a table from a file, but we get deadlocks on insert. We added indexes beforehand, and we suspected they were the cause, so we removed them, but this did not solve the problem. I checked for open transactions using DBCC OPENTRAN, and it showed that replication has an open transaction. We did activate all the trace flags as per the SQL Server guidelines on the web. We get the table that is the victim, but we cannot identify the process that causes the problem. Any ideas? We are a bit desperate. Thanks in advance

Is this a crazy way to migrate from one OS version to another?

Posted: 11 Jul 2013 12:06 AM PDT

I need to migrate all the servers I support from Windows Server 2008 to 2008 R2 by the end of the year (plenty of time, really.) All the servers are virtuals, and all of them are running SQL 2008 R2. Seeing as they're configured with an OS drive and multiple data / SQL Server drives, I had a thought on a possible way to migrate quickly. Of course, this method does have the downside of being one way, with no quick way to go back if it doesn't work...

My thought is: set up a new VM with just an OS drive and the OS. Once this is ready, on the current server, detach all the DBs and stop the SQL service (the reason for the detach will become apparent.) Have the VMWare admins then remove the data drives from the "old" server and attach them to the new. Once the drives are attached, delete EVERYTHING except where the DBs / TLogs live, install SQL 2008 R2 as normal and update as needed, then proceed to attach all the DBs I detached.

My thinking (if this idea has merit) is this gives me a few advantages:
1. I don't need to worry about the VMWare / Storage admins complaining about space; I'm just switching already used space from one VM to another...
2. I think this might go a bit quicker than backing up each DB on the old, copying the backup to the new, then restoring.

Of course, possible problems:
1. If this doesn't work well, or a DB doesn't come up, my only fallback is a backup of the DB.
2. If it takes longer than expected, I may have users hitting me with "why isn't it done yet, what's taking so long..."

I'm mostly interested in whether people think this would be a way to do this. If you ignore the VM part, it'd be no different than buying a new server, installing the OS, then pulling the drives with the DBs on them out of your old server and installing them in the new... Of course, a better solution would be if I could get the VMware admin to "clone" the data / log drives and attach the clones to the new VM. But that potentially brings in a fight with the storage guys...

Thanks, Jason

SSMS - Databases are not Visible

Posted: 10 Jul 2013 08:19 PM PDT

Hi Team, after connecting in SSMS, the [b]'Databases, Security, Server Objects, Management'[/b] folders are not showing in Object Explorer; I need to connect to the SQL Server with login credentials again, and only then are the above folders shown. Is there any configuration required to display the above folders automatically once logged in to SSMS? Please suggest...

Require help to get number of hits to sql server 2008 R2 in a day

Posted: 10 Jul 2013 11:17 PM PDT

Hello All, please help me to find the number of hits to SQL Server 2008 R2 in a day. Thanks, Sumit Rastogi :-)

Update to Base Table from Details Table

Posted: 10 Jul 2013 11:02 PM PDT

Create Table #Customer_Profile_Master1
(
    Lnno Varchar(15),
    Co_apcode_1_categ Varchar(200),
    Co_apcode_2_categ Varchar(200),
    Co_apcode_3_categ Varchar(200),
    Co_apcode_4_categ Varchar(200)
)

Insert Into #Customer_Profile_Master1 (Lnno) Values ('1')

Select * from #Customer_Profile_Master1

Drop table #Temp1

Create Table #Temp1
(
    Lnno Varchar(15),
    CustCode Varchar(200),
    Co_apcode Varchar(200),
    Category Varchar(200),
    Rownumber Int
)

Insert Into #Temp1 Values
('1', 'M0000036', 'A0000026', 'SALARIED', 1),
('1', 'M0000036', 'A0000027', 'SELF EMPLOYED PROFESSIONALS', 2),
('1', 'M0000036', 'A0000028', 'STUDENT', 3)

Select * from #Temp1

My requirement is to update the #Customer_Profile_Master1 table from #Temp1 as follows:

Co_apcode_1_categ = for that Lnno, if Rownumber = 1 then the corresponding Category should be updated
Co_apcode_2_categ = for that Lnno, if Rownumber = 2 then the corresponding Category should be updated
Co_apcode_3_categ = for that Lnno, if Rownumber = 3 then the corresponding Category should be updated
Co_apcode_4_categ = for that Lnno, if Rownumber = 4 then the corresponding Category should be updated

I had tried this way, but it is not working for me:

Update #Customer_Profile_Master1
Set Co_apcode_1_categ = Case when B.Rownumber = 1 then B.Category else '' End,
    Co_apcode_2_categ = Case when B.Rownumber = 2 then B.Category else '' End,
    Co_apcode_3_categ = Case when B.Rownumber = 3 then B.Category else '' End,
    Co_apcode_4_categ = Case when B.Rownumber = 4 then B.Category else '' End
From #Customer_Profile_Master A
Join #Temp1 B On A.Lnno = B.Lnno

Please help me, as I need this urgently. Thanks in advance!
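For reference, the posted UPDATE lets each of the three #Temp1 rows overwrite the previous row's CASE results, so the final values depend on join order (the FROM clause is also missing the trailing 1 in the table name). A hedged sketch of one deterministic alternative: pivot #Temp1 to one row per Lnno first, then join that single row to the master table:

[code="sql"]
UPDATE A
SET Co_apcode_1_categ = P.Categ1,
    Co_apcode_2_categ = P.Categ2,
    Co_apcode_3_categ = P.Categ3,
    Co_apcode_4_categ = P.Categ4
FROM #Customer_Profile_Master1 AS A
JOIN (
    -- collapse #Temp1 to one row per Lnno so nothing is overwritten
    SELECT Lnno,
           MAX(CASE WHEN Rownumber = 1 THEN Category END) AS Categ1,
           MAX(CASE WHEN Rownumber = 2 THEN Category END) AS Categ2,
           MAX(CASE WHEN Rownumber = 3 THEN Category END) AS Categ3,
           MAX(CASE WHEN Rownumber = 4 THEN Category END) AS Categ4
    FROM #Temp1
    GROUP BY Lnno
) AS P ON P.Lnno = A.Lnno;
[/code]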

Analysis Services in Named Instance

Posted: 10 Jul 2013 09:32 PM PDT

I have set up a 2008 R2 SQL cluster with several named instances and a default instance. The application using SQL Server requires Analysis Services, which I installed with this instance. The application has no problem connecting to the database, but it is unable to connect to Analysis Services. I set up a specific service ID for this purpose. What should we be doing to enable a connection?

Filegroup

Posted: 10 Jul 2013 08:39 PM PDT

Hi All, please let me know whether it is possible to use the same secondary filegroup for 2 different databases. I tried manually and it did not allow it. I just wanted to confirm the same.

Thursday, July 11, 2013

[how to] What can I do to repair a corrupted partitioned table on the slave?


What can I do to repair a corrupted partitioned table on the slave?

Posted: 11 Jul 2013 07:59 PM PDT

  • mysql Ver 14.14 Distrib 5.5.31, for Linux (x86_64) using readline 5.1

The slave has been stopped due to power outage:

Master_Log_File: mysql-bin.000491
Read_Master_Log_Pos: 132257162
Relay_Log_File: mysqld-relay-bin.001088
Relay_Log_Pos: 208318295
Relay_Master_Log_File: mysql-bin.000444
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1594
Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
Skip_Counter: 0
Exec_Master_Log_Pos: 208318149
Relay_Log_Space: 12525464864
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1594
Last_SQL_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
Replicate_Ignore_Server_Ids:
Master_Server_Id: 647

This can be fixed by running:

  1. STOP SLAVE
  2. SHOW SLAVE STATUS
  3. Note down 'Relay_Master_Log_File' and 'Exec_Master_Log_Pos' entries.
  4. RESET SLAVE
  5. CHANGE MASTER TO ..... (use MASTER_LOG_FILE=relay_master_log_file and MASTER_LOG_POS=exec_master_log_pos from Step 3; a concrete sketch follows this list)
  6. START SLAVE
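Using the values from the first status output above, step 5 would look something like this (a sketch; the host and credentials are placeholders):

    STOP SLAVE;
    RESET SLAVE;
    CHANGE MASTER TO
        MASTER_HOST = 'master-host',            -- placeholder
        MASTER_USER = 'repl',                   -- placeholder
        MASTER_PASSWORD = '...',                -- placeholder
        MASTER_LOG_FILE = 'mysql-bin.000444',   -- Relay_Master_Log_File
        MASTER_LOG_POS = 208318149;             -- Exec_Master_Log_Pos
    START SLAVE;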

Then I got another error:

Master_Log_File: mysql-bin.000727
Read_Master_Log_Pos: 97824193
Relay_Log_File: mysqld-relay-bin.000197
Relay_Log_Pos: 158825847
Relay_Master_Log_File: mysql-bin.000510
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1696
Last_Error: Error executing row event: 'Failed to read from the .par file'
Skip_Counter: 0
Exec_Master_Log_Pos: 158825701
Relay_Log_Space: 57756920922
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1696
Last_SQL_Error: Error executing row event: 'Failed to read from the .par file'
Replicate_Ignore_Server_Ids:
Master_Server_Id: 647

I guess that it's because an ALTER TABLE ... DROP PARTITION is being replicated, similar to this thread.

Is there any way to repair this table and make the slave thread continue to run instead of starting from the beginning?

PostgreSQL : Restrict (another) superuser from accessing database

Posted: 11 Jul 2013 07:41 PM PDT

I'm new to PostgreSQL and am trying to find out whether this is possible.

There are two superusers: 'postgres' (the default PostgreSQL superuser) and 'super2'.

Superuser 'super2' creates a new database named 'Movies' and is its owner.

Is it possible to restrict superuser 'postgres' from accessing the database 'Movies', so that only superuser 'super2' can access it, because he is the owner of the database?

If it is possible, then how do I make it happen?

What is a good way to replicate code across multiple databases?

Posted: 11 Jul 2013 08:24 PM PDT

SQL Server 2005

I have two databases (Source1 and Source2) with the same data model and I plan to merge the data into a third database (Target) with an "abstracted" data model. In the future, it's possible to have additional Source databases that will need to be merged into Target.

Originally, I was hoping to build views and stored procedures residing in the Target database that I could dynamically run against Source1 to load Target, then switch to Source2. That doesn't seem to be possible without dynamic SQL.

So plan B is to build the views and procedures in the Target database then deploy them to Source1 and Source2. What is the best way to replicate code to multiple databases?

I'm open to any new ideas that address this situation.

Polling for new records in a table

Posted: 11 Jul 2013 04:06 PM PDT

We have a table that is append+read only. We'd like to have multiple applications consume all new rows that show up in that database, in a cheap and easy way (without change tracking).

The simple suggestion of having each app record the last ID doesn't seem safe. Suppose 3 connections (2 writers and 1 reader) do this:

W1: BEGIN TRAN, INSERT 1000 Records (IDs 1-1000)
W2: BEGIN TRAN, INSERT 1000 Records (IDs 1001-2000)
W2: COMMIT
R1: SELECT * WHERE Id > 0 (last sync value)
W1: COMMIT (or possibly ROLLBACK)

In this case, the reader would get records 1001-2000 and save 2000 as the highest ID. Obviously that's not correct. Assuming the various writers are running at varying speeds (so we can never assume they'll commit in order, due to latency or perf), what options are there?

Must we resort to something like change tracking, or require every application to save the last (batch*writers*x) IDs and re-read them to make sure they didn't miss any?

Rollback and other gaps in the sequence (MSSQL can jump the IDs by 1000; pretty sure other DBs do that under certain situations) make it complicated to try to guess if rows were skipped. Plus the batch inserts might not be a proper batch; they could consist of multiple INSERT statements, meaning each tx might not even have a contiguous block of IDs assigned.

We're currently using MSSQL 2012 if there's a hack specific to that system, but I'm more interested in general.

InnoDB log sequence number is in the future

Posted: 11 Jul 2013 08:31 PM PDT

130711 23:03:40 InnoDB: Error: page 23 log sequence number 3336825872455
InnoDB: is in the future! Current system log sequence number 235089.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files.

I've never seen or had this error before. The client does not have backups so they need a recovery done.

I've done MyISAM recovery before but never InnoDB. Are there any tips on how to do it correctly?

From what I've read, I need to increase a single counter ("log sequence number"), which is stored somewhere in the headers of ib_logfile0 and ib_logfile1, or something like that. Is that correct?
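For context, the usual route is not to hand-edit the log file headers but to bring the instance up with forced recovery, dump everything, and rebuild; a hedged sketch:

    # my.cnf - temporary setting; start at 1 and raise only as far as needed (max 6)
    [mysqld]
    innodb_force_recovery = 4

With the server up under forced recovery, run mysqldump --all-databases, shut down, move aside ibdata1 and the ib_logfile* files, remove the setting, restart, and reload the dump.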

Negative number of rows inserted

Posted: 11 Jul 2013 01:59 PM PDT

I just ran a rather simple insert statement (that I think processed a lot of rows), and I got the message: "-1,311,750,183 rows inserted". Should I be concerned about the fact that it's negative?

Is it possible to keep track of changes to tables structure?

Posted: 11 Jul 2013 12:50 PM PDT

Is it somehow possible to keep track of changes made to the STRUCTURE (not the data) of tables?

I mean, I don't want to know when rows are inserted/updated/deleted; what I want is to know when the table definition has been changed (and what those changes were), e.g. to know if/when a column was added or removed, if a column type was changed, if an index was added, etc.

In other words, to keep track of all the "ALTERs" on the tables (and the state before those alters).

I'd like this so I know exactly when certain changes were made, and to keep track of discrepancies in data, for example to know when a column was changed to hold VARCHARs instead of INTs (just an idea), or when a new column was added, etc.

So is there a way to do this?

I am using MySQL, version 5.1.66. And if it is not possible in MySQL, is it possible in other databases?

Thanks

Locking and Faulting on import

Posted: 11 Jul 2013 01:20 PM PDT

My collection, with an _id index and a single-field secondary index, runs on a single node on localhost; the machine has 32 GB RAM and a 16-core CPU available. The HDD is in RAID 1, without SSD cache. Mongo version 2.4.

I run mongoimport with a 5-million-document dump totaling 5 GB.

It starts fast, at about 11,000/second. After 3 minutes it goes down to 9,000/s; a minute later it is at 7,000/s, and after some more minutes we are in slow motion at 200/s. The mongoimport takes hours instead of minutes.

mongostat shows increasing LOCKs and faults:

insert  query  update  delete  getmore  command  flushes  mapped  vsize  res  faults  locked db   idx miss %  qr|qw  ar|aw  netIn  netOut  conn  time
    *0     *0      *0      *0        0      1|0        0    912g  1825g  21g       1  tdb:74.0%           0    0|0    0|1     5m      3k    10  03:20:21
    79     *0      *0      *0        0      1|0        0    912g  1825g  21g      10  tdb:33.6%           0    0|0    0|0    47k      3k    10  03:22:22
   104     *0      *0      *0        0      1|0        0    912g  1825g  21g      13   tdb:7.9%           0    0|1    0|1    60k      3k    10  03:22:33
    86     *0      *0      *0        0      1|0        0    912g  1825g  21g      11   tdb:0.8%           0    0|0    0|1    52k      3k    10  03:23:17
   128     *0      *0      *0        0      1|0        0    912g  1825g  21g       9  tdb:58.3%           0    0|0    0|1    76k      3k    10  03:25:00
    95     *0      *0      *0        0      1|0        0    912g  1825g  21g      11  tdb:39.3%           0    0|0    0|0    58k      3k    10  03:26:07
    86     *0      *0      *0        0      1|0        0    912g  1825g  21g      12  tdb:88.8%           0    0|0    0|1    51k      3k    10  03:27:12

I run mongoimport without --upsert and also tried with only the _id index. Furthermore, reIndex() did not improve this.

How can I improve this and keep the speed of the bulk inserts constant?

What causes this rare error: "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (146)"?

Posted: 11 Jul 2013 02:16 PM PDT

This is a rare error that occurs on Solaris; I have seen it from Perl, looking like this.

DBI->connect(...) or die "Unable to connect: $DBI::errstr"

Outputs

Unable to connect: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (146) at something.pl line ...

It is very rare for this error to occur.

The global variable max_connections has been ruled out as a problem.

Can you enumerate the possible or likely causes of this error happening only occasionally?

MySQL stored procedure create database by month

Posted: 11 Jul 2013 12:29 PM PDT

I'm a systems admin, not a DBA. I'm wondering if I could use a MySQL stored procedure to query the main database and move records, by month, to another database located on the same system.

This way we can keep the main database small and have the data moved to an individual database for each month.

Thank you

optimizing MySQL for traffic analytics system

Posted: 11 Jul 2013 12:18 PM PDT

Background: I've developed a URL-shortener system like Bitly with the same features, so the system also tracks clicker info and presents it to the person who shortened the link as analytics graphs. Currently I'm using MySQL and have a table to store click info with this schema:

visit_id (int)
ip (int)
date (datetime)
country
browser
device
os
referrer (varchar)
url_id (int)  // foreign key to the shortened URL

For now, only the url_id field has an index.

The system should present click analytics for the time periods the user wants, for example the past hour, the past 24 hours, the past month, and so on.

For example, to generate graphs for the past month, I run the following queries:

SELECT ALL DAY(date) AS period, COUNT(*)
FROM (
    SELECT *
    FROM visits
    WHERE url_id = '$url_id'
) AS URL
WHERE DATE > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY DAY(DATE)

// another query to display clicker browsers in this period
// another query to display clicker countries in this period
// ...

Issues:

  • for a shortened link with about 500,000 clicks, it takes about 3-4 seconds to evaluate just the first query, so about 10-12 seconds for all the queries together, which is terrible.
  • lots of memory and CPU are needed to run such queries.

Questions:
1- How do I improve and optimize the structure so that analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps) with less CPU and RAM usage? Should I add an index on the fields date, country, browser, device, os, referrer? If yes, how do I do that for the date field, given that I sometimes group clicks by DAY(date), sometimes by HOUR(date), sometimes by MINUTE(date), and so on? (See the index sketch after these questions.)

2- Is MySQL suitable for this application? Assume that at maximum my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?

3- If MySQL is OK, are my database design and table structure proper and well designed for my application's needs? Or do you have better recommendations and suggestions?
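On the indexing part of question 1, a minimal sketch (assuming the visits table above): a composite index lets MySQL read just one link's rows in date order instead of scanning.

    ALTER TABLE visits ADD INDEX idx_url_date (url_id, date);

The DAY()/HOUR()/MINUTE() grouping still has to scan the matching range, so heavily clicked links are typically served from pre-aggregated rollup tables (e.g. one row per url_id per hour) rather than from the raw click table.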

Thanks for your help

How do I migrate varbinary data to Netezza?

Posted: 11 Jul 2013 12:48 PM PDT

I got a warning message while migrating DDL from SQL Server to Netezza:

Warning: [dbo].[spec_binarymessage].[blobdata] data type [varbinary] is not supported the target system and will be scripted as VARCHAR(16000).

I'm wondering whether this kind of data conversion will cause some issues such as truncation of data etc.?

SQL Server not using available memory

Posted: 11 Jul 2013 10:35 AM PDT

SQL 2008R2 Microsoft SQL Server Standard Edition (64-bit)
on Windows 2008R2 Enterprise

Server has over 300 GB of memory but total memory use in control panel never goes over 86 GB
Have SQL configured to use the maximum amount of memory
Even under heavy use - CPU over 80% for minutes

Dedicated SQL Server
Several large databases
One heavily used table has an index size alone of over 10 GB

The service account has been set to lock pages in memory

Is that normal?
What can I test?
Can I get SQL to use more memory?
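Two quick checks worth running (a sketch): what max server memory is actually configured, and how much memory SQL Server thinks it has versus wants. Note also that SQL Server 2008 R2 Standard Edition caps the buffer pool at 64 GB, which on its own would explain usage plateauing far below 300 GB.

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';

    SELECT counter_name, cntr_value   -- values in KB
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');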

Create Table If Not Exists Hanging

Posted: 11 Jul 2013 12:58 PM PDT

I am still new to using databases, but currently I am using a MySQL database and attempting to make my bash script unbreakable, because it might be used in several different environments (dev, qa, etc.). I know it will run correctly without this one line.

CREATE TABLE IF NOT EXISTS backupfiles (fileName VARCHAR(20), archiveId VARCHAR(500), checkSum VARCHAR(100), glacierVault VARCHAR(100), timeStamp date);  

I also know that if I put that exact line into my database when that table does not exist, then it runs perfectly; and if it does exist already, it gives me a warning but does not break.

+-------+------+------------------------------------+  | Level | Code | Message                            |  +-------+------+------------------------------------+  | Note  | 1050 | Table 'backupfiles' already exists |  +-------+------+------------------------------------+  

When I run the line above and there is no table named backupfiles, it works perfectly; but when there is, it hangs (when I say hangs, it sits there and does nothing) 95% of the time and works 5% of the time. Has anyone run into a problem like this? I am using AWS RDS (Amazon Web Services Relational Database Service), and the MySQL server is 5.5.27.

Here is all of my code that relates to the MySQL database:

mysql -h portal-rds -u $user --password=$mysqlpw <<QUERY_INPUT  CREATE DATABASE IF NOT EXISTS $DATABASE;  use $DATABASE;  CREATE TABLE IF NOT EXISTS backupfiles (fileName VARCHAR(20), archiveId VARCHAR(500), checkSum VARCHAR(100), glacierVault VARCHAR(100), timeStamp date);  INSERT INTO backupfiles VALUES ('$archive_file_name', '$archiveID', '$CURRENTVAULT', '$checkSum', CURDATE());  COMMIT;  QUERY_INPUT  

SQLCMD Exporting 16.5 Million Rows

Posted: 11 Jul 2013 11:30 AM PDT

I am exporting a table to a TSV file using sqlcmd, and I am running into issues. The table has 16+ million rows and about 55 columns.

The problem is that it does not export the full table, but seems to stop randomly at various points (I am guessing a timeout?). Each time a different number of rows is exported, and each time the file is of a slightly different size (indicating that I am not hitting any row or size limit).

I am not using any timeout switch (meaning the default of "as long as it takes" is used).

Here is my command (with most columns removed for simplification and illustration purposes):

sqlcmd -U myUsername -P myPassword -S SERVERNAME -d "DBNAME" -Q "SELECT ROW_KEY, ISNULL(CARRIER_ID, ''),ISNULL(CONVERT(VARCHAR(19),ORDER_DATETIME,120),''),ISNULL(HEIGHT, ''),ISNULL(RESIDENTIAL_CHARGE, '') FROM MYTABLE" -o "OUTPUT_FILE.txt" -h-1 -s"|" -W  

I wonder if it could have something to do with timeouts or the use of ISNULL() on all of the columns (although when I run the query in SQL Server Management Studio I get the correct number of rows returned, e.g. 16 million+)?

Again, I get about 4-8 million rows each time, but never the full amount. I am in a SQL Server 2005 db, and running sqlcmd from a remote machine with SQL Server 2008.
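If sqlcmd keeps stopping short, one commonly used alternative for very large exports is bcp, which streams over the bulk-copy API; a sketch with the same delimiter (column list shortened as in the example above):

    bcp "SELECT ROW_KEY, ISNULL(CARRIER_ID, '') FROM DBNAME.dbo.MYTABLE" queryout OUTPUT_FILE.txt -S SERVERNAME -U myUsername -P myPassword -c -t "|"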

Order table Database design

Posted: 11 Jul 2013 08:05 PM PDT

I'm in need of some help with the database design of the order tables for a shopping cart in an online food-ordering system. I have a design below. The tables I'm concerned about are order_details and order. Basically, what is going to happen is that the user selects foods from restaurant menus and places an order; user details are stored in the user table (which is not shown in the image).

This is my first attempt, and I was never good at databases, so please take it easy if you feel the question is too basic for this site.

And my questions are:

  • Are the tables good enough?
  • Do I need to add any more fields?
  • And one more question: I need to store multiple addresses for users (a user can have multiple shipping addresses). How can I handle this? (See the sketch after the image link below.)

    http://i.stack.imgur.com/HCYBi.jpg
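For the multiple-address question, the usual approach is a separate child table keyed to the user, with the order rows pointing at the address actually used; a minimal sketch (table and column names are assumptions):

    CREATE TABLE user_address (
        address_id   INT AUTO_INCREMENT PRIMARY KEY,
        user_id      INT NOT NULL,
        street       VARCHAR(200) NOT NULL,
        city         VARCHAR(100) NOT NULL,
        postal_code  VARCHAR(20)  NOT NULL,
        is_default   TINYINT(1)   NOT NULL DEFAULT 0,
        FOREIGN KEY (user_id) REFERENCES `user` (user_id)
    );

    -- and on the order table, record which address the order shipped to:
    -- ALTER TABLE `order` ADD COLUMN ship_address_id INT NOT NULL,
    --     ADD FOREIGN KEY (ship_address_id) REFERENCES user_address (address_id);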

Unable to generate a MySQL query

Posted: 11 Jul 2013 12:21 PM PDT

I have a table in a MySQL database as below:

ID  Timestamp            State
1   2013-07-10 09:00:00  Good
1   2013-07-10 09:10:00  Good
1   2013-07-10 09:20:00  Good
1   2013-07-10 09:30:00  Bad
1   2013-07-10 09:40:00  Bad
1   2013-07-10 09:50:00  Bad
1   2013-07-10 10:00:00  Good
1   2013-07-10 10:10:00  Good

I want to generate a report as shown below using the above table.

ID  Start Timestamp      End Timestamp        Duration (in mins)  State
1   2013-07-10 09:00:00  2013-07-10 09:30:00  30                  Good
1   2013-07-10 09:30:00  2013-07-10 10:00:00  30                  Bad
1   2013-07-10 10:00:00  2013-07-10 10:10:00  10                  Good

The query that I had generated is not able to show the switching between states taking place.
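A hedged sketch of the usual pre-8.0 MySQL approach: use user variables to stamp a run number that increments whenever State changes, then aggregate per run. The table name readings is an assumption, and user-variable evaluation order is technically undefined, so this is the common hack rather than a guarantee:

    SELECT ID,
           MIN(`Timestamp`) AS start_ts,
           MAX(`Timestamp`) AS end_ts,   -- last reading in the run
           State
    FROM (
        SELECT r.*,
               @run := IF(@prev = State, @run, @run + 1) AS run,
               @prev := State AS prev_state
        FROM readings AS r
        CROSS JOIN (SELECT @run := 0, @prev := NULL) AS init
        ORDER BY `Timestamp`
    ) AS grouped
    GROUP BY ID, run, State
    ORDER BY start_ts;

To match the report's convention, where a run ends when the next one starts, join each run to the following one (run + 1) and take that run's start_ts as the end timestamp; the duration is then TIMESTAMPDIFF(MINUTE, start, end).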

SqlPackage does not pick up variables from profile

Posted: 11 Jul 2013 05:08 PM PDT

I want to upgrade a database using .dacpac and sqlpackage.exe

here is how I run sqlpackage:

SqlPackage.exe      /Action:Publish      /SourceFile:"my.dacpac"      /Profile:"myprofile.publish.xml"  

The error I get is:

* The following SqlCmd variables are not defined in the target scripts: foo.

I have verified that myprofile.publish.xml file does contain that var:

<ItemGroup>
  <SqlCmdVariable Include="foo">
    <Value>bc\local</Value>
  </SqlCmdVariable>
</ItemGroup>

I also verified that the project that creates the dacpac publishes successfully from within Visual Studio using myprofile.publish.xml.

What else could I be missing?

(I'm using SQL Server 2012)
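One thing worth trying while digging: SqlPackage also accepts SqlCmd variable values directly on the command line via the /Variables: switch (shorthand /v:), which at least unblocks the publish; a sketch:

    SqlPackage.exe
        /Action:Publish
        /SourceFile:"my.dacpac"
        /Profile:"myprofile.publish.xml"
        /v:foo="bc\local"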

Database Mail sending functionality not working on local system

Posted: 11 Jul 2013 12:54 PM PDT

I am using Database Mail functionality to send mail from a SQL Server 2008 database via following stored procedure execution:

EXEC sp_send_dbmail @profile_name = 'MyProfile',
                    @recipients = 'abc@companyname.com',
                    @subject = 'Test message',
                    @body = 'Congrats Database Mail Received By you Successfully.'

I have tried with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile.

Error message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 . Exception Message: Could not connect to mail server. (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond abc.j.i.ooo:pp). )

Reference

http://blogs.msdn.com/b/suhde/archive/2009/07/12/how-to-configure-sql-server-database-mail-to-send-email-using-your-windows-live-mail-account-or-your-gmail-account.aspx  

What could the problem be?

Thanks

Is there a standard way to use common data and procedures across multiple databases?

Posted: 11 Jul 2013 01:52 PM PDT

We have several independent databases that have data and code in common: not in the sense that it is accessed between the databases, but in the sense that the data means the same thing in each database, and the code does the same thing.

Examples are:

  • Configuration settings (e.g. error codes 50xxx, boilerplate text for company name, etc).
  • Procedures and functions that perform common tasks (e.g. converting a CSV string into a table, logging an error, formatting an error message based on the error code).
  • Table structures (e.g. table for database version history, table for logging errors). As well as the columns, constraints, and triggers there are also common procedures and functions that read/write the data.
  • Look-up tables (e.g. date look-up table containing dates between 2000-2100). These are similar to table structures but quite often the data will be the same across the databases. In the case of the date table, the start and end dates are configuration settings, which are read by a function, then the date data is generated by a procedure. All these operations are common between the databases.
  • User defined types (e.g. types for passing tables to functions).

For maintenance and support reasons I think it makes sense for things like error codes, procedures, functions, and types to have a "single point of truth" across all the databases, not a different truth in each database.

At the moment each database has its own copy of everything, including its own source repository, and we maintain them all independently. This is far from ideal, because it's too easy to fix a procedure in A and forget to put the fix in B, or to add an error code to A and the same one to B when they mean different things, etc.

The databases are not updated at the same time and they don't necessarily reside on the same hardware. There are cases where they can read data from each other (if the other one exists).

Is there a standard way of having a single point of truth for data/code that is used across more than one database?

createdb: could not connect to database postgres: FATAL: could not write init file

Posted: 11 Jul 2013 07:54 PM PDT

RedHat Enterprise Server 3.0 32 Bits

psql (PostgreSQL) 8.2.3

user: postgres

server is running:

/soft/postgres/8.2.3/bin/pg_ctl start
pg_ctl: another server may be running; trying to start server anyway
2013-05-09 11:23:07 BRST---:FATAL:  lock file "postmaster.pid" already exists
2013-05-09 11:23:07 BRST---:HINT:  Is another postmaster (PID 12810) running in data directory "/opt/psql/dba/bdadms/data1/pstg"?
pg_ctl: could not start server
Examine the log output.

I had just created a new database cluster with initdb; but when I run createdb:

8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb pstg
createdb: could not connect to database postgres: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb postgres
createdb: could not connect to database template1: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb template1
createdb: could not connect to database postgres: FATAL:  could not write init file

Any clues as to the cause of, and possible solutions to, this problem?

How to load the Adventureworks database into Oracle?

Posted: 11 Jul 2013 01:09 PM PDT

I am trying to work through some of the examples at sqlzoo, and I would like to load the AdventureWorks example database into an Oracle database at home. A Google search has not turned up any SQL scripts that have been converted to Oracle. Does anyone have, or know where I can get, such scripts?

Repeated values in group_concat

Posted: 11 Jul 2013 03:54 PM PDT

I have two tables; the first is food and the second is Activity:

INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6');

CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME);

INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03');

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ', ') USING utf8) AS Categories
FROM
    food AS p
JOIN
    ( SELECT activity_type, prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type, prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC;

Could you please help me? I need output in the format below:

place  | food_id | Timings                     | Date
       |         | Time1   | Time2   | Time3   |
-------+---------+---------+---------+---------+---------------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm  | 2013-05-01 (MAX timestamp)
place2 | 1       | 10:30am | 12:30am | 4:30am  | 2013-05-01 (MAX timestamp)

MySQL PDO Cannot assign requested address

Posted: 11 Jul 2013 01:54 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]      Cannot assign requested address (trying to connect via tcp://****:3306) in       /var/www/***  [08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception       'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested       address' in /var/www/***  

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 11 Jul 2013 06:54 PM PDT

This is a problem in PostgreSQL

I have a table which stores the tree of users;

+------+---------+
|  id  | parent  |
+------+---------+
|  1   |   0     |
|  2   |   1     |
|  3   |   1     |
|  4   |   2     |
|  5   |   2     |
|  6   |   4     |
|  7   |   6     |
|  8   |   6     |
+------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example:

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like selecting all possible trees from this table (which is again carried out by connectby), count the size of each, and create another dataset with records of ID and size of the underlying tree, like this:

+------------------+-------------+
|  tree_root_node  |  tree_size  |
+------------------+-------------+
|        1         |      7      |
|        2         |      3      |
|        3         |      0      |
|        4         |      3      |
|        5         |      0      |
|        6         |      2      |
|        7         |      0      |
|        8         |      0      |
+------------------+-------------+

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch the tree and calculate the size; and even if I could, I don't know how to create a separate dataset using the fetched and calculated data.

I am not sure whether this could be a simple use of some functions available in Postgres, or whether I'd have to write a function for it, or I simply don't know what exactly this kind of query is called; but googling for hours and searching for another hour here at dba.stackexchange returned nothing.

Can someone please point me in the right direction?
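For what it's worth, on PostgreSQL 8.4 or later this can be done in one statement with a recursive CTE instead of per-row connectby calls; a minimal sketch assuming the table is named tree(id, parent):

    WITH RECURSIVE sub(root, id) AS (
        SELECT id, id FROM tree            -- every node roots its own subtree
        UNION ALL
        SELECT s.root, t.id                -- walk down to the children
        FROM sub AS s
        JOIN tree AS t ON t.parent = s.id
    )
    SELECT root AS tree_root_node,
           COUNT(*) - 1 AS tree_size      -- minus 1 excludes the root itself
    FROM sub
    GROUP BY root
    ORDER BY root;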

"Could not find stored procedure' even though the stored procedure have been created in MS SQL Server Management Studio

Posted: 11 Jul 2013 08:21 PM PDT

I have created a table testtable inside the database testbase that have the following structure:

product_no (int, not null)
product_name (varchar(30), not null)
price (money, null)
expire_date (date, null)
expire_time (time(7), null)

which I used the Microsoft SQL Server 2008 Management Studio.

I created a stored procedure testtable_pricesmaller as follows

use testbase
go
create procedure testtable_pricesmaller
    @pricelimit money
as
select * from testtable where price = @pricelimit;
go

and are able to view the Stored Procedures on the Object Explorer of the Microsoft SQL Server Management Studio. (It is listed in the following tree structure of the Object Explorer)

Databases
  + testbase
      + Tables
          + dbo.testtable
      + Programmability
          + Stored Procedures
              + dbo.testtable_pricesmaller

I find it very strange when I receive the following error:

Could not find the stored procedure 'dbo.testtable_pricesmaller'.  

when I execute the following SQL statement:

execute dbo.testtable_pricesmaller 50  

What could be missing?

Stored Procedures Best Practices: Fenced or Unfenced?

Posted: 11 Jul 2013 11:58 AM PDT

I believe I understand the reasons behind fenced and unfenced stored procedures.

Fenced procedures run "outside" of the database engine (in our case DB2), so as to prevent possible corruption of the engine should there be issues with things like pointers.

Unfenced procedures run "inside" the engine, which means better performance.

From what I have researched, SQL PL is always basically unfenced, because it is SQL and therefore cannot access memory the way programming languages can.

C/C++ and Java procedures can run fenced or unfenced. But since they can potentially access memory, there should be a consideration for running them fenced, unless there is certainty about the quality of the code to not crash and the performance is needed.

First of all, am I correct in my understanding of the above?

Next, is it generally a best practice to start out with all stored procedures (even those defined as SQL PL) as fenced first?

Any other best practices for stored procedures, especially as related to fencing and/or security?

EDIT: Further research has shown that SQL PL procedures cannot run fenced. Since they do not contain any code that could harm the database engine, such as pointers or file I/O, DB2 knows they are safe and runs them inside the engine (i.e., unfenced). That being said, I am still looking for best practices regarding all other stored procedures.
