Sunday, August 4, 2013

[MS SQL Server] Does the Full-Text Filter Daemon Launcher service need to be running for Full-Text-Search queries to run successfully?


Does the Full-Text Filter Daemon Launcher service need to be running for Full-Text-Search queries to run successfully?

Posted: 08 Apr 2011 07:51 AM PDT

I don't understand the role of this service in SQL Server 2008 full-text search: Full-Text Filter Daemon Launcher. Does this service need to be running for full-text-search queries to run successfully? A query like the one below runs fine even when the service is down:

    SELECT * FROM Document
    WHERE CONTAINS(DocCol, 'price');

What types of queries would break when the Filter Daemon Launcher service is down?
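For context, a hedged sketch of the distinction: the filter daemon host does its heavy lifting (IFilter-based filtering of document content) during index population, so it is document-type indexing like the statement below, rather than a plain CONTAINS query against an already-populated index, that depends on it. All names here are hypothetical.

```sql
-- Sketch only: a full-text index over binary documents. Populating this
-- index invokes IFilters inside the filter daemon host process; if the
-- Full-Text Filter Daemon Launcher cannot start that process, population
-- fails, while queries against an already-populated index may still work.
CREATE FULLTEXT INDEX ON dbo.Document (DocBlob TYPE COLUMN DocType)
    KEY INDEX PK_Document            -- hypothetical unique key index
    WITH CHANGE_TRACKING AUTO;
```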

[T-SQL] Traversing a hierarchy


Traversing a hierarchy

Posted: 03 Aug 2013 07:54 PM PDT

I have a table representing a parent-child relationship and the NoOfBooks for that relationship. Each child can then have its own children, and so on. Each record has a value for NoOfBooks ranging from 0 to 100.

What I want is, for a parent (we can start with any parent), to identify all relationships of the parent where the sum of NoOfBooks for a child is greater than a threshold value (let's say 9). But the catch is, the sum of NoOfBooks for each of the child's parents should also be greater than the threshold value.

So e.g.

    A-B 9
    A-C 10
    A-D 5
    B-E 5
    C-E 6
    E-F 12
    E-G 4
    D-H 14

My output should be:

    A-B 9
    A-C 10
    B-E 5
    C-E 6
    E-F 12

Here is the table with sample data:

    Create table MyTable
    (
        Id INT IDENTITY(1,1) Primary Key,
        ParentId INT,
        ChildId INT,
        NoOfBooks INT
    );
    INSERT INTO MyTable VALUES (1000, 2000, 10);
    INSERT INTO MyTable VALUES (1000, 2001, 9);
    INSERT INTO MyTable VALUES (1000, 2002, 8);
    INSERT INTO MyTable VALUES (2000, 2003, 4);
    INSERT INTO MyTable VALUES (2001, 2003, 5);
    INSERT INTO MyTable VALUES (2003, 2004, 9);
    INSERT INTO MyTable VALUES (2000, 2005, 3);
    INSERT INTO MyTable VALUES (2001, 2005, 3);
    INSERT INTO MyTable VALUES (2004, 2005, 3);
    INSERT INTO MyTable VALUES (2005, 2006, 5);
    INSERT INTO MyTable VALUES (2002, 2007, 12);
    INSERT INTO MyTable VALUES (2005, 2008, 5);
    INSERT INTO MyTable VALUES (2002, 2008, 6);

Required output of the above sample data (see attachment): http://www.sqlservercentral.com/Forums/Attachment14122.aspx

Note that [1000, 2002, 8], [2005, 2006, 5], [2002, 2007, 12], [2005, 2008, 5], [2002, 2008, 6] are not included in my required output, because either SUM(NoOfBooks) for the child is NOT greater than the threshold value, OR SUM(NoOfBooks) for each of its parents is NOT greater than the threshold value.
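Not a full solution, but a hedged starting point for the traversal part: a recursive CTE can walk down from a chosen root, pre-aggregating each child's total NoOfBooks and only recursing through children that pass the check. It assumes the MyTable definition above, no cycles, and a strict `>` comparison; adjust the comparison and the "every parent must qualify" rule to your exact intent, since this sketch only checks the path it travelled.

```sql
-- Sketch against the schema in the post: walk from root 1000, keeping
-- edges whose child's summed NoOfBooks exceeds the threshold.
DECLARE @Root int = 1000, @Threshold int = 9;

;WITH ChildTotals AS (
    SELECT ChildId, SUM(NoOfBooks) AS Total
    FROM MyTable
    GROUP BY ChildId
), Walk AS (
    SELECT m.ParentId, m.ChildId, m.NoOfBooks
    FROM MyTable m
    JOIN ChildTotals c ON c.ChildId = m.ChildId
    WHERE m.ParentId = @Root AND c.Total > @Threshold
    UNION ALL
    SELECT m.ParentId, m.ChildId, m.NoOfBooks
    FROM Walk w
    JOIN MyTable m ON m.ParentId = w.ChildId
    JOIN ChildTotals c ON c.ChildId = m.ChildId
    WHERE c.Total > @Threshold
)
SELECT DISTINCT ParentId, ChildId, NoOfBooks
FROM Walk;
```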

Add one MilliSecond to each row

Posted: 26 Jul 2013 03:06 AM PDT

Hello Everyone,

I am trying something that I am not sure is possible. I have a CreateDate column in a table; there are approx 5000 rows with the same date and time. This was done by the data load. I need to change the time by adding one millisecond to each one, incrementing by one from the previous row.

So if I have datetimes like this:

    RowID, CreateDate
    1, 2013-07-30 08:20:40:010
    2, 2013-07-30 08:20:40:010
    3, 2013-07-30 08:20:40:010
    4, 2013-07-30 08:20:40:010
    5, 2013-07-30 08:20:40:010

I would like to add one millisecond to each, incrementing from the previous row:

    RowID, CreateDate
    1, 2013-07-30 08:20:40:010
    2, 2013-07-30 08:20:40:011
    3, 2013-07-30 08:20:40:012
    4, 2013-07-30 08:20:40:013
    5, 2013-07-30 08:20:40:014

Here is some dummy code:

    ;WITH DifferentTimes (RowID, CreateDate)
    AS
    (
        SELECT TOP (500)
            n = CONVERT(INT, ROW_NUMBER() OVER (ORDER BY s1.[object_id])),
            '2013-07-30 08:20:40:010'
        FROM sys.all_objects AS s1
        CROSS JOIN sys.all_objects AS s2
    )
    SELECT * FROM DifferentTimes

So after that, I am not sure how I can perform an update to increment the time for each row, using the previous row.

Thank you in advance for all your assistance, suggestions and comments.

Andrew SQLDBA
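A hedged sketch of one way to do the update itself: number the tied rows with ROW_NUMBER and add that many milliseconds with DATEADD, updating through the CTE. Table and column names follow the post; note that datetime only has roughly 3 ms precision (values round to .000/.003/.007), so true one-millisecond steps need a datetime2 column.

```sql
-- Sketch: give each row within a tie an increasing millisecond offset.
-- Assumes a dbo.MyTable(RowID, CreateDate) shaped like the example data.
;WITH Numbered AS (
    SELECT CreateDate,
           ROW_NUMBER() OVER (PARTITION BY CreateDate
                              ORDER BY RowID) - 1 AS Offset
    FROM dbo.MyTable
)
UPDATE Numbered
SET CreateDate = DATEADD(MILLISECOND, Offset, CreateDate);
```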

[SQL Server 2008 issues] Generate "One Merged Email" from any query result ran against multiple instances in CMS (Central Management server)


Generate "One Merged Email" from any query result ran against multiple instances in CMS (Central Management server)

Posted: 03 Aug 2013 05:01 PM PDT

Hello,

We manage our instances via CMS. As a daily process, we run a script against the PROD group under CMS to check last night's backups, job failures, and many other things. We then place the output of those queries into an Excel sheet, manually add colors to it, and send the report to the group. It's been going on for a while. I like the fact that we use CMS for this info, but I would like to change this process now.

I have created a script that runs against any individual server and generates a good-looking HTML email with all the filtering we do in Excel. Now I am wondering whether I can get this script to run against the CMS server and generate one single email with all the data in it; then all our manual Excel work would be gone. It would save 90 minutes of dedicated work every day.

The script gathers output into temp tables; from those temp tables I filter out the unwanted stuff and have it generate an email. The issue is, when I run it on the CMS server, it sends me individual mails from each individual server that is part of the CMS group. As a result, I am getting 90+ emails.

Is there any way to send only one merged email with all the data in it, rather than individual emails from every server, for a query run against CMS? I am not sure if this is possible. In the SQL query options there is an option for multiple servers, where "merge results" is set to true by default; that is how CMS merges all the results into a single result window.

Is there any way to do this? Please advise. Thanks!

passing date parameter in ssis 2008 at run time

Posted: 03 Aug 2013 12:13 PM PDT

hi again, I need to pass a date parameter to an SSIS 2008 package at run time through BIDS. For example, when you select 1 April 2013, the package should dump into a table the data extracted for the previous 3 months (1-Jan-2013 to 31-Mar-2013).

regards
shaun 2012
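As a hedged sketch of just the date arithmetic (independent of how the value reaches the package, e.g. via a package variable used by an Execute SQL Task): given the selected date, the previous three months run from three months before it up to the day before it.

```sql
-- Sketch: compute the extraction window from a run date.
DECLARE @RunDate date = '2013-04-01';   -- would come from a package variable

SELECT DATEADD(MONTH, -3, @RunDate) AS RangeStart,  -- 2013-01-01
       DATEADD(DAY, -1, @RunDate)  AS RangeEnd;     -- 2013-03-31
```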

Generate a day date according to existing month+year+daydiff

Posted: 31 Jul 2013 06:46 PM PDT

I'm using SQL 2008. I have these 5 columns: StartMonth, StratYear, EndMonth, EndYear, DaysBetween. I don't have the day component of these dates, and that's what I'm trying to generate. For example:

    12 2008 1 2009 8

Now, I want to create a random date (start date and end date, formatted as dd/mm/yyyy) which will include the day and will make sense given the value in DaysBetween.

For instance, if I know that I have 8 days (DaysBetween) and the StartMonth is 12, the start date must be from 24/12/2008 onward, because only then does adding 8 days reach the EndMonth (1/2009). If I chose the date 2/12/2008, I would get 10/12/2008, and that's no good because the month is still 12... and I need 1 (2009).

How can I generate valid dates? Thank you.
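A hedged sketch of the boundary arithmetic, with the column values hard-coded as variables: the latest stretch of the start month that still crosses into the end month begins DaysBetween - 1 days before the month's last day, so any random day between that bound and month-end is a valid start date.

```sql
-- Sketch, SQL 2008 compatible: earliest day in the start month from
-- which adding DaysBetween days lands in the end month.
DECLARE @StartMonth int = 12, @StartYear int = 2008, @DaysBetween int = 8;

DECLARE @FirstDay datetime =
    CAST(CAST(@StartYear AS char(4))
         + RIGHT('0' + CAST(@StartMonth AS varchar(2)), 2)
         + '01' AS datetime);
DECLARE @LastDay datetime = DATEADD(DAY, -1, DATEADD(MONTH, 1, @FirstDay));

SELECT DATEADD(DAY, -(@DaysBetween - 1), @LastDay) AS EarliestValidStart, -- 24/12/2008
       @LastDay                                    AS LatestValidStart;   -- 31/12/2008
```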

Backup Set List Not Populating

Posted: 01 Aug 2013 04:14 AM PDT

I have posted this on Amazon's developer forum, but I have also seen this happen at other times.

I have a Windows 2008 server with SQL Server installed (image_1) on Amazon EC2. I create an image of that machine, then launch the newly created AMI (image_2). When I go into image_2's Management Studio to perform a database restore, I notice that none of my backup sets are displaying, yet they are on disk.

What it does look like: NoBackupSets.jpg
What it should look like: ListedBackupSets.jpg

What I was hoping would happen is that I could fire up image_2 and then, using Management Studio's GUI, restore a database from a point in time. This would speed up the process greatly over having to do it all individually by script.

My question is: when the backup set list is not populated, is there a way to populate it, or do I have to run my restore manually from a script?

Thank you in advance,
Jim
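One hedged avenue to investigate: the restore dialog's backup-set list is built from backup history stored in msdb, so if that history didn't survive into the new image, the GUI has nothing to show even though the .bak files are on disk. You can check the history directly, and read a file's header regardless of history (the path below is hypothetical):

```sql
-- Sketch: what the GUI's backup-set list is built from
SELECT database_name, backup_finish_date, type
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;

-- Reading a backup file on disk directly, independent of msdb history
RESTORE HEADERONLY FROM DISK = N'D:\Backups\MyDatabase.bak';
```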

Login Creation

Posted: 31 Jul 2013 07:06 PM PDT

Hi Team, while creating a new login, the Effective tab is missing in the Securables window. Please suggest.

Loading Hierarchical data

Posted: 02 Aug 2013 07:56 PM PDT

Hi Experts,

    CREATE TABLE #Source
    (
        Id int identity(1,1)
       ,category int
       ,Leaf_Node_code varchar(10)
       ,Level1_Name varchar(20)
       ,Level2_Name varchar(20)
       ,Level3_Name varchar(20)
       ,Level4_Name varchar(20)
       ,Level5_Name varchar(20)
    )

    INSERT INTO #Source
    SELECT 1,'101','World','Asia','India',null , null UNION ALL
    SELECT 2,'102','a','aa','aaa','aaaa' , null UNION ALL
    SELECT 3,'103','b','bb','bbb','bbbb','bbbbb'

Here category 1 has 3 levels, category 2 has 4 levels, and category 3 has 5 levels. Below is the target table; Leaf_Node_code should be populated only for the leaf node of each category, and Node_id needs to be populated with the hierarchy data. I am unable to frame a SQL query that handles the different levels, and in future #Source may have more levels. Please guide me on handling multiple hierarchy levels; only the leaf node should have Leaf_Node_code.

    CREATE TABLE TARGET_TABLE
    (
        ID INT IDENTITY(1,1) primary key,
        Node_id HIERARCHYID,
        category int,
        Parent_id int references TARGET_TABLE(id),
        Leaf_Node_code varchar(10),
        Name varchar(20)
    )

Here is the expected output:

    ID  category  Parent_id  Leaf_Node_code  Name   Node_id
    1   1         NULL       NULL            World
    2   1         1          NULL            Asia
    3   1         2          101             India
    4   2         NULL       NULL            a
    5   2         4          NULL            aa
    6   2         5          NULL            aaa
    7   2         6          102             aaaa
    8   3         NULL       NULL            b
    9   3         8          NULL            bb
    10  3         9          NULL            bbb
    11  3         10         NULL            bbbb
    12  3         11         103             bbbb

Thanks.
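Not a full answer, but a hedged sketch of one building block: UNPIVOT can turn the fixed Level1..Level5 columns into one row per populated level, which makes each row's parent simply the row at the previous level of the same source record. Supporting more levels then only means extending the column list. Note UNPIVOT drops NULL levels automatically, and the level comes back as the column name (e.g. 'Level3_Name'), which needs a small string manipulation to become a number; the highest level per Id is the leaf, where Leaf_Node_code belongs.

```sql
-- Sketch against the #Source table above: one row per populated level.
SELECT u.Id, u.category, u.LevelNo, u.Name
FROM (
    SELECT Id, category,
           Level1_Name, Level2_Name, Level3_Name, Level4_Name, Level5_Name
    FROM #Source
) s
UNPIVOT (Name FOR LevelNo IN
         (Level1_Name, Level2_Name, Level3_Name, Level4_Name, Level5_Name)) AS u;
```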

Exam 70-448 retired or not

Posted: 03 Aug 2013 12:54 AM PDT

Hi,

Can somebody please tell me: is exam 70-448, SQL Server 2008 Business Intelligence Development and Maintenance, retired or not?

This link (overview) says yes: http://www.microsoft.com/learning/en-us/exam.aspx?id=70-448&locale=en-us
This link says no: http://www.microsoft.com/learning/en-us/retired-certification-exams.aspx

Furthermore, I had an online chat (31/7) with someone from Microsoft Learning, who said no.

I failed on 31/7, and was told that I could not retake because the exam is now retired. My company has spent a lot of money, and I have used a lot of time reading (apparently not enough), so I am pretty annoyed if I can't retake it.

Saturday, August 3, 2013

[how to] Percona XtraDB Cluster With Async slaves


Percona XtraDB Cluster With Async slaves

Posted: 03 Aug 2013 08:07 PM PDT

I have set up a 3-node Percona cluster in Amazon EC2. As our client required, I also had to set up asynchronous slaves in another datacenter. Each async slave points to a single Percona cluster node with log-slave-updates and bin-log enabled.

However, the async slaves fail to maintain the replication process for long. At least once a week I have to rebuild the slaves from scratch. These async slaves are used only for backup purposes.

Is there any alternative method(best way) of doing this?

Percona Cluster ---> async slaves ?

Make "NOT NULL" default in Postgresql 9.2

Posted: 03 Aug 2013 08:19 PM PDT

I use NOT NULL a lot when creating columns.

Is there a way to make this the default when defining columns? In other words, make column is_nullable default to NO. (I realize this would make it harder for me to set it to YES later on.)
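There is no server setting that flips the default, but as a hedged workaround a domain can bundle NOT NULL into the type itself, so forgetting the constraint becomes harder (the domain name below is made up):

```sql
-- Sketch: a domain that is the base type plus NOT NULL
CREATE DOMAIN text_nn AS text NOT NULL;

CREATE TABLE example (
    id   serial PRIMARY KEY,
    name text_nn          -- NOT NULL comes along with the type
);
```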

phpmyadmin Lock Down

Posted: 03 Aug 2013 08:51 PM PDT

I have a client who wants to use phpMyAdmin to manage his MySQL 5.x database on my server. I found an article on how to hide databases in phpMyAdmin, but I don't like this approach: if I don't explicitly list a database to hide, then every time I add a new one it becomes available to the client through phpMyAdmin.

Article: http://www.electrictoolbox.com/hide-databases-phpmyadmin/

Command to hide a database in the phpMyAdmin config.inc.php file:

$cfg['Servers'][$i]['hide_db'] = 'hidedb1|hidedb2|etc...';  

I also created a separate username and password in MySQL, too.

Is there a way to reverse this, so I can tell phpMyAdmin which databases to show for this login?

Thanks,

Frank G.
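One hedged possibility, from phpMyAdmin's configuration directives: only_db is the inverse of hide_db, whitelisting what a server entry lists. Note it only affects the listing, not actual MySQL privileges, so the separate MySQL user account remains the real access control.

```php
// Sketch for config.inc.php: list only the client's databases
// (database names are hypothetical)
$cfg['Servers'][$i]['only_db'] = array('clientdb1', 'clientdb2');
```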

Does the mysql parameter "max_connections" apply for unix socket connections?

Posted: 03 Aug 2013 07:52 AM PDT

The documentation does not specify whether max_connections applies only to TCP connections.

The way it is worded, it looks like it applies to any kind of connection, but I'd like to be sure, because otherwise I could use a socket connection when the "Too many connections" problem arises, even if the (last) SUPER connection slot is already in use.

DB: Having redundant fields in Table

Posted: 03 Aug 2013 08:23 AM PDT

This is a basic DB design question, but I have now run into the same situation multiple times, so I'll try to illustrate it with the following example.

Let's say I am building a DB design for messaging on a classifieds platform.

A user publishes many ads and other users can message her for a specific ad.

I thus have an ads table, a user table and a message table in my DB.

Since users message people about specific ads, storing both the ad AND the user on a message is redundant.

On the other hand, when listing all incoming messages for a given user it would be much more efficient to have the user name as a field in the message table too, because a user might have many ads and only very few of them have messages.

So should the message table now have a user field or not?
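For the normalized version, here is a hedged sketch of how the inbox listing still works without a user column on messages, since the recipient is reachable through the ad (all table and column names are assumptions):

```sql
-- Sketch: all incoming messages for user 42; no redundant column needed
SELECT m.id, m.sent_at, m.body
FROM   messages m
JOIN   ads      a ON a.id = m.ad_id
WHERE  a.user_id = 42          -- the ad's owner is the recipient
ORDER  BY m.sent_at DESC;
```

With indexes on ads(user_id) and messages(ad_id) this join is usually cheap; denormalizing a user column into messages is worth considering only if measurements show the join is actually the bottleneck.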

how to use index for asp.net site? [on hold]

Posted: 03 Aug 2013 01:11 AM PDT

I have started learning about indexes in SQL Server. I have a table with some fields, and I created an index on one of the table's columns, Price.

CREATE INDEX pricetag  ON pricetable (Price)  

When can I use an index? What if the data type is integer, or the table doesn't have a constraint like a primary key or foreign key? When I execute the query above, it gives the following error.

My table has ID as its primary key.

Cannot find the object "pricetable" because it does not exist or you do not have permissions.

I want to know when to use an index, and where to apply indexes, in SQL Server.
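The error itself usually means the table name can't be resolved in the current database, rather than anything about data types or keys. A hedged sketch (database and schema names are assumptions):

```sql
-- Sketch: be in the right database and qualify the schema
USE MyShopDb;
GO
CREATE INDEX IX_pricetable_Price ON dbo.pricetable (Price);
```

An index can be created on most column types regardless of whether the table has a primary or foreign key; a nonclustered index like this mainly pays off for columns used in WHERE, JOIN, or ORDER BY clauses.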

Moving a database to a new server

Posted: 03 Aug 2013 07:58 PM PDT

My database server is being decommissioned at work. We have a new database server to take its place. Both servers have the same operating system and SQL Server 2008. I need to move all my databases and data over to the new server.

What is the best way to do this to ensure as few problems as possible?

My data is not just tables, views and stored procedures. There are also several SQL Server Agent jobs and several database maintenance plans and scheduled backups.

The system is not being used at night, so I have a late night opportunity to take the database offline, if need be.

Can I just copy the MSSQL directory over to the new server? Or should I just backup each database individually? If I do backup each database, should I back up the system databases (like master, model and msdb) too?
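A hedged sketch of the per-database route (paths and names hypothetical): backup/restore keeps each database self-contained and avoids copying a live MSSQL directory between installs. The system databases are trickier; Agent jobs live in msdb, so they can either come over with an msdb restore or be scripted out and recreated.

```sql
-- On the old server
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Migration\MyAppDb.bak' WITH INIT;

-- On the new server (logical file names must match those in the backup)
RESTORE DATABASE MyAppDb
FROM DISK = N'D:\Migration\MyAppDb.bak'
WITH MOVE 'MyAppDb'     TO N'D:\Data\MyAppDb.mdf',
     MOVE 'MyAppDb_log' TO N'D:\Logs\MyAppDb_log.ldf';
```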

Connect to remote Oracle Database

Posted: 03 Aug 2013 04:08 PM PDT

I needed to make an application connect to an Oracle database located on the same network (pinging the host works fine), but not on the same machine (i.e. it's a remote server). I developed software using Qt and was able to connect to that database successfully from my machine, but I am not able to do the same from the machine where my software is supposed to run.

I've tried to understand what is so special about my machine that my software can connect to the Oracle database from it but not from the target machine, and it seems that the only difference is that I've installed the Oracle database here, while I'm not allowed to do the same there.

The error returned by Qt is that I'm simply unable to logon, while the QOCIDriver returns that it wasn't possible to create the environment. Interestingly enough, the same problem was occurring with me on my machine during the time I didn't have the Oracle database installed on it and neither the Oracle Client. And this Oracle Client thing is already installed in the official machine with no positive results.

So I'd like to know exactly what I need to do to make my software work. Install the database on the official machine after all? Not viable. Install the Oracle Client? I've already done that. Add the ORACLE_HOME registry/environment variable? Did both. Copy the dbhome_1/BIN folder, with its many DLLs and .exes, to the official machine? Did that already, and it only made the software start crashing while trying to connect to the database. Now I'm out of solutions... Why can I connect my app to this Oracle database from my machine, but not from that other machine?

SQL Server logs in Gmail

Posted: 03 Aug 2013 05:25 AM PDT

First off, I'm using SQL Server 2008 R2.

When I set up my maintenance plan and got to the "Select Report Options" step, I selected to email the report to the Operator I already have set up. This does not allow me to alter the subject, which I believe Gmail uses when it collapses emails into conversations.

Is there a way to trick Gmail into separating them out based on the database? Or is there a better way from within SQL Server Management Studio to email the reports? I have multiple instances, each with its own Operator; each Operator uses the same email address (no_reply@noneofyourbusiness.com) with a different name, but since the subject is the same, Gmail still collapses them into one big conversation.

Does anyone else have to deal with this issue?
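One hedged alternative inside SQL Server: instead of the maintenance plan's built-in notification, an Agent job step can send the report with Database Mail, where the subject is yours to build per instance and database, so Gmail sees distinct threads. The profile name and recipient below are hypothetical.

```sql
-- Sketch: a per-instance, per-database subject line
DECLARE @subj nvarchar(255) =
    N'Maintenance report: ' + @@SERVERNAME + N' - ' + DB_NAME();

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = N'MailProfile',
     @recipients   = N'no_reply@noneofyourbusiness.com',
     @subject      = @subj,
     @body         = N'Report body here';
```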

Select rows, where 3 columns have the same values

Posted: 03 Aug 2013 03:02 AM PDT

I have a structure like this:

    col0 col1 col2 col3
     1    1    1    1
     1    1    1    2
     1    1    1    3
     1    2    1    1
     1    2    3    1
     2    3    4    5
     2    3    4    9

I now want to select all rows, that match other rows on the first three columns, so the result of this query using the sample data would be:

    col0 col1 col2 col3
     1    1    1    1
     1    1    1    2
     1    1    1    3
     2    3    4    5
     2    3    4    9

How can I achieve this, I'm kinda stuck at the early beginning...

Thanks in advance! Alex

Update: Clarified the question. Thanks, Erwin.
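A hedged sketch of one approach: count rows per (col0, col1, col2) combination with a window function and keep the combinations that occur more than once. The table name t is made up, and this assumes a database with window-function support; on older MySQL, a GROUP BY ... HAVING COUNT(*) > 1 derived table joined back on the three columns achieves the same.

```sql
-- Sketch: rows whose first three columns match at least one other row
SELECT col0, col1, col2, col3
FROM (
    SELECT col0, col1, col2, col3,
           COUNT(*) OVER (PARTITION BY col0, col1, col2) AS cnt
    FROM t
) x
WHERE cnt > 1;
```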

MySQL 5.5 High CPU Usage

Posted: 03 Aug 2013 05:23 PM PDT

I have been working with a client with a high-traffic website (500k visitors and 600-800 active users at any given time). It uses WordPress, and the DB was using the MyISAM engine. The problem we had was high CPU usage on the server; the CPU load was 15-20 all the time. We used LiteSpeed and MySQL 5.1 with CentOS 5.9 on a dual Xeon L5506, 12GB RAM server with SATA HDDs.

So I analyzed the database, found that the data plus index size of that DB is only about 4GB, and decided to convert to InnoDB. Once we did, we ended up with a CPU load of 80-150 and the server was about to crash. So we moved MySQL to another server with the same configuration, but running MySQL 5.5.

On the new DB server the CPU load is 1-2, and the web server is still at a constant 4-6 CPU load.

Here is my my.cnf

    [mysqld]
    innodb_file_per_table = 1
    local-infile = 0
    default-storage-engine = InnoDB
    max_connections = 1000

    innodb_buffer_pool_size = 8G
    innodb_flush_method = O_DIRECT
    innodb_log_file_size = 256M
    innodb_log_buffer_size = 8M
    innodb_flush_log_at_trx_commit = 2
    innodb_thread_concurrency = 8
    innodb_file_format = Barracuda

    myisam_sort_buffer_size = 16MB

    query_cache_type = 1
    query_cache_limit = 2M
    query_cache_size = 256M
    thread_cache_size = 16K

    key_buffer_size = 128M
    max_heap_table_size = 128M
    tmp_table_size = 128M
    join_buffer_size = 32M
    read_buffer_size = 32M
    read_rnd_buffer_size = 1M
    sort_buffer_size = 32M
    table_cache = 4K
    open_files_limit = 65535

    log-slow-queries = /var/log/mysql/slowqueries.log
    long_query_time = 3

PS: Not all of our DBs are InnoDB, so the MyISAM values were set after careful analysis.

Statistics for 4 hours:

    Questions since startup: 7,339,471
    ø per hour:   1,704,102
    ø per minute: 28,402
    ø per second: 473

    Traffic   Total      ø per hour
    Received  4.8 GiB    1.1 GiB
    Sent      248.5 GiB  57.7 GiB
    Total     253.3 GiB  58.8 GiB

How to run a SELECT query within while loop in PHP?

Posted: 03 Aug 2013 04:51 PM PDT

Within a PHP while loop over a SELECT query's results, I need to run another MySQL SELECT query, as in:

    $result1 = $mysqli->query("SELECT * FROM table1");

    while ( $row = $result->fetch_assoc() ) {

        if ( $row['X'] == 'X' ) {
            $result2 = $mysqli->query("SELECT * FROM table2");
            .....
        }

    }

but this does not work. I cannot JOIN the tables, as the if statement is too complicated to perform the SELECT from different tables.

Delete from table where multiple fields match select subquery from other table

Posted: 03 Aug 2013 09:50 AM PDT

I want to delete an entry in a table where multiple fields match the results of another select subquery which takes data from another table.

This is what I have so far, though it doesn't work:

    DELETE FROM table1
    WHERE table1.id IN
       (SELECT id
        FROM table1 a JOIN table2 b
        ON a.field1 = b.field1
        AND a.field2 = b.field2
        AND a.field3 = b.field3
        AND b.id = ?
       )
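A hedged rewrite that sidesteps the ambiguous id in the subquery by correlating on the three columns directly (the ? stays a placeholder for the bound id):

```sql
-- Sketch: delete table1 rows whose three fields match the table2 row
DELETE FROM table1
WHERE EXISTS (
    SELECT 1
    FROM   table2 b
    WHERE  b.field1 = table1.field1
      AND  b.field2 = table1.field2
      AND  b.field3 = table1.field3
      AND  b.id     = ?
);
```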

Oracle 11g http listener configuration

Posted: 03 Aug 2013 02:50 AM PDT

Could someone please point me to how to configure the Oracle DB to serve PL/SQL Server Pages? I have successfully created a DAD and mapped it with DBMS_EPG.

Here is the listener:

    # listener.ora Network Configuration File: D:\app\Ja\product\11.2.0\dbhome_1\NETWORK\ADMIN\listener.ora
    # Generated by Oracle configuration tools.

    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
        )
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
        )
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 8080))
        )
      )

    ADR_BASE_LISTENER = D:\app\Ja

When I want to enter

localhost:8080/my_dad/home

I get error: No data received.

How do I solve this?
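One hedged thing to check: DBMS_EPG serves pages through the XML DB HTTP listener inside the database, not through an extra listener.ora address entry, so the HTTP port is configured via DBMS_XDB rather than in the listener file.

```sql
-- Sketch: confirm/enable the embedded gateway's HTTP port
SELECT DBMS_XDB.GETHTTPPORT FROM dual;

EXEC DBMS_XDB.SETHTTPPORT(8080);
```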

MySql one time event never runs?

Posted: 03 Aug 2013 12:50 PM PDT

Please have a look at the events below:

    1) create EVENT Test1
       ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND
       ON COMPLETION PRESERVE ENABLE
       DO ...

    2) create EVENT Test2
       ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP
       ON COMPLETION PRESERVE ENABLE
       DO ...

I expect event Test1 to run once after 20 seconds, but it never runs. Event Test2 is working fine.

Any idea? Thanks.

OK, sorry, it is the ALTER that is not working.

At first I did:

    create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO

Then shortly after I did:

    alter EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO

I expected event Test1 to run again in another 20 seconds, but it didn't.
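A couple of hedged checks worth running: nothing fires at all unless the event scheduler is on, and a one-shot AT event whose timestamp has already passed will not fire, so it's worth inspecting the event's recorded state after the ALTER.

```sql
-- Sketch: verify the scheduler and the event's state
SHOW VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON;   -- needs the SUPER privilege

SELECT EVENT_NAME, STATUS, EXECUTE_AT, ON_COMPLETION
FROM information_schema.EVENTS
WHERE EVENT_NAME = 'Test1';
```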

How can row estimates be improved in order to reduce chances of spills to tempdb

Posted: 03 Aug 2013 02:40 AM PDT

I notice that when there are spill-to-tempdb events (causing slow queries), the row estimates are often way off for a particular join. I've seen spill events occur with merge and hash joins, and they often increase the runtime 3x to 10x. This question concerns how to improve row estimates, under the assumption that doing so will reduce the chance of spill events.

Actual number of rows: 40k.

For this query, the plan shows bad row estimate (11.3 rows):

    select Value
    from Oav.ValueArray
    where ObjectId = (select convert(bigint, Value) NodeId
                      from Oav.ValueArray
                      where PropertyId = 3331
                        and ObjectId = 3540233
                        and Sequence = 2)
      and PropertyId = 3330
    option (recompile);

For this query, the plan shows good row estimate (56k rows):

    declare @a bigint = (select convert(bigint, Value) NodeId
                         from Oav.ValueArray
                         where PropertyId = 3331
                           and ObjectId = 3540233
                           and Sequence = 2);

    select Value
    from Oav.ValueArray
    where ObjectId = @a
      and PropertyId = 3330    -- Workpiece Side Active Objects
    option (recompile);

Can statistics or hints be added to improve the row estimates for the first case? I tried adding statistics with particular filter values (property = 3330) but either could not get the combination correct or perhaps it is being ignored because the ObjectId is unknown at compile time and it might be choosing an average over all ObjectIds.

Is there any mode where it would do the probe query first and then use that to determine the row estimates or must it fly blindly?

This particular property has many values (40k) on a few objects and zero on the vast majority. I would be happy with a hint where the max expected number of rows for a given join could be specified. This is a generally haunting problem because some parameters may be determined dynamically as part of the join or would be better placed within a view (no support for variables).

Are there any parameters that can be adjusted to minimize chance of spills to tempdb (e.g. min memory per query)? Robust plan had no effect on the estimate.
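One hedged technique along the lines of the second query, but without the scalar variable: materialize the probe result into a temp table first, so the optimizer gets real statistics (and a real row count) for the outer join. Table and column names are taken from the post.

```sql
-- Sketch: split the probe out so the second query compiles against
-- a known, tiny input instead of a guessed 11.3-row subquery
SELECT CONVERT(bigint, Value) AS NodeId
INTO   #node
FROM   Oav.ValueArray
WHERE  PropertyId = 3331 AND ObjectId = 3540233 AND Sequence = 2;

SELECT v.Value
FROM   Oav.ValueArray v
JOIN   #node n ON n.NodeId = v.ObjectId
WHERE  v.PropertyId = 3330
OPTION (RECOMPILE);
```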

Add partition works but not the drop partition

Posted: 03 Aug 2013 04:50 AM PDT

We have set up an event as below. What we notice is that the ADD PARTITION is working well; we can see the partition list growing daily. But the DROP PARTITION is not working. Any reason for this?

    IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.PARTITIONS
       WHERE TABLE_NAME = 'testPart1' AND PARTITION_NAME =
       CONCAT(' p', DATE_FORMAT( DATE_SUB( CURDATE(), INTERVAL 14 DAY ), '%Y%m%d' ))) THEN

        SET @stmt := CONCAT(
            'ALTER TABLE testPart1 DROP PARTITION '
        ,   ' p'
        ,   DATE_FORMAT( DATE_SUB( CURDATE(), INTERVAL 14 DAY ), '%Y%m%d' )
        );
        PREPARE stmt FROM @stmt;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

    END IF;

    IF NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.PARTITIONS
       WHERE TABLE_NAME = 'testPart1' AND PARTITION_NAME =
       CONCAT(' p', DATE_FORMAT( DATE_ADD( CURDATE(), INTERVAL 2 DAY ), '%Y%m%d' ))) THEN

        SET @stmt := CONCAT(
            'ALTER TABLE testPart1 ADD PARTITION ('
        ,   'PARTITION p'
        ,   DATE_FORMAT( DATE_ADD( CURDATE(), INTERVAL 2 DAY ), '%Y%m%d' )
        ,   ' VALUES LESS THAN ('
        ,   TO_DAYS( CURDATE() ) + 2
        ,   '))'
        );
        PREPARE stmt FROM @stmt;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

    END IF;

Slow SSRS Report in production

Posted: 03 Aug 2013 07:50 AM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out big time when run in production, yet when I pull down the prod database and restore to development the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production, and hopefully that will tell me something, e.g. high disk I/O at the time the report is run.

What else should I be doing? Something with perfmon?

Shrinking the SYSTEM tablespace in Oracle

Posted: 03 Aug 2013 06:50 AM PDT

Our SYSTEM tablespace grew out of control because of the SYS.AUD$ table.

We have truncated SYS.AUD$ but the datafile is still very big (~30G).

Resize doesn't work, because "file contains used data beyond requested RESIZE value" (ORA-03297).

What should I do here?

Here's our version information:

  • Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
  • PL/SQL Release 11.2.0.1.0 - Production
  • CORE 11.2.0.1.0 Production
  • TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
  • NLSRTL Version 11.2.0.1.0 - Production
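A hedged first step: see which segments still sit near the end of the datafile. RESIZE can only cut back to the highest used extent, so whatever lives up there has to be moved or dropped before the file will shrink.

```sql
-- Sketch: the ten segments closest to the end of the SYSTEM tablespace
SELECT *
FROM (
    SELECT owner, segment_name, segment_type, block_id
    FROM   dba_extents
    WHERE  tablespace_name = 'SYSTEM'
    ORDER  BY block_id DESC
)
WHERE ROWNUM <= 10;
```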

ORA-16000 when trying to perform select on read only Oracle database

Posted: 03 Aug 2013 08:51 PM PDT

My application's SQL encounters an ORA-16000 error when trying to access a read only Oracle database.

ORA-16000: database open for read-only access ORA-06512: at "SYS.XMLTYPE",   line 272 ORA-06512: at line 1 ### The error may exist in com/xyz/report/dao/Transaction001Mapper.xml  

This is the query that involves the XMLTYPE, the INTERFACE_CONTENT is a CLOB column:

    SELECT CONTENTS FROM ERRCODES WHERE
        CODE = (SELECT xmltype(INTERFACE_CONTENT).extract(
                    '/Message/PaymentStatus/ps:FIToFIPmtStsRpt/ps:TxInfAndSts/ps:StsRsnInf/ps:Rsn/ps:Prtry/text()',
                    'xmlns="urn:abcde" xmlns:head="urn:iso:std:iso:20022:tech:xsd:head.001.001.01" xmlns:ps="urn:iso:std:iso:20022:tech:xsd:pacs.002.001.03"'
                ).getstringval() APP_CODE
                FROM MESSAGE_EXTERNAL_INTERACTION MEI
                WHERE MEI.MSG_TYPE = 'Pacs_002'
                  AND MEI.MID = 'MY_ID')

I also use the EXTRACTVALUE() method a lot on an XMLTYPE field.

The SQL is working perfectly if the database is not read only (read write).

What is the issue here? Is this related to some missing privileges/grants?

Is a join optimized to a where clause at runtime?

Posted: 03 Aug 2013 05:30 AM PDT

When I write a query like this...

    select *
    from table1 t1
    join table2 t2
      on t1.id = t2.id

Does the SQL optimizer, not sure if that is the correct term, translate that to...

    select *
    from table1 t1, table2 t2
    where t1.id = t2.id

Essentially, is the Join statement in SQL Server just an easier way to write sql? Or is it actually used at run-time?

Edit: I almost always, and will almost always, use the Join syntax. I am just curious what happens.

Restore database from backup file of different version / edition

Posted: 03 Aug 2013 12:20 PM PDT

I read that it's possible to restore a database in SQL Server as long as you're restoring from an older version to a newer version, for backwards compatibility reasons.

Does anyone know off hand if you can restore a database from a *.bak file for different editions of SQL Server? We are moving a very large database via FTP that will take a couple days, so we'd rather only do this once. If nobody responds by the time we transfer the database via FTP, we will obviously try this out and see if it works by testing, and answer our own question.

Below is a query to get version details of SQL Server. The productversion is in the format {major revision}.{minor revision}.{release revision}.{build number}. In my case, the {release revision} has a value of 5500 for the source and 5512 for the target. So that looks okay. However, the edition is different.

Query:

    SELECT
        SERVERPROPERTY('productversion'),
        SERVERPROPERTY('productlevel'),
        SERVERPROPERTY('edition')

Source database:

10.0.5500.0  SP3  Developer Edition (64-bit)  

Target database:

10.0.5512.0  SP3  Enterprise Edition (64-bit)  

Restore SQL Server database using Windows Powershell 3.0

Posted: 02 Aug 2013 11:50 PM PDT

I'm trying to restore a SQL Server database with a PowerShell script, but I'm having problems.

Here is the error I'm getting:

Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server 'WUSFK250042-OLU\SQLSERVER2008R2'. " At line:48 char:1 + $smoRestore.SqlRestore($server)

Here is my code:

    #clear screen
    cls

    #load assemblies
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
    #Need SmoExtended for backup
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
    [Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
    [Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

    $backupFile = "C:\SafewayRT\SafewayRTFUll.bak"

    #we will query the database name from the backup header later
    $server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "WUSFK250042-OLU\SQLSERVER2008R2"
    $backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backupFile, "File")
    $smoRestore = new-object("Microsoft.SqlServer.Management.Smo.Restore")

    #restore settings
    $smoRestore.NoRecovery = $false;
    $smoRestore.ReplaceDatabase = $true;
    $smoRestore.Action = "Database"
    $smoRestorePercentCompleteNotification = 10;
    $smoRestore.Devices.Add($backupDevice)

    #get database name from backup file
    $smoRestoreDetails = $smoRestore.ReadFileList($server)

    #display database name
    "Database Name from Backup Header : " + $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

    #give a new database name
    $smoRestore.Database = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

    #specify new data and log files (mdf and ldf)
    $smoRestoreFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
    $smoRestoreLog = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")

    #the logical file names should be the logical filename stored in the backup media
    $smoRestoreFile.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]
    $smoRestoreFile.PhysicalFileName = $server.Information.MasterDBPath + "\" + $smoRestore.Database + "_Data.mdf"
    $smoRestoreLog.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"] + "_Log"
    $smoRestoreLog.PhysicalFileName = $server.Information.MasterDBLogPath + "\" + $smoRestore.Database + "_Log.ldf"
    $smoRestore.RelocateFiles.Add($smoRestoreFile)
    $smoRestore.RelocateFiles.Add($smoRestoreLog)

    #restore database
    $smoRestore.SqlRestore($server)

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 03 Aug 2013 03:50 PM PDT

I run the following before tar-ing up the data directory:

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfiles* files are updated during the process. What am I missing?

The slave is a cold standby machine, so there are no client processes running while tar is running.

CentOS release 5.6 64bits, MySQL 5.1.49-log source distribution.

Run Unix command using PL/SQL

Posted: 03 Aug 2013 08:50 AM PDT

Is it possible to run a Unix command using a query in Oracle?

I want to run simple commands (like df -h) using a query.

Is this at all possible, or am I wasting my time? I don't want to use a different language like Java or C to call a procedure; it needs to be purely PL/SQL.
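One approach that stays inside the database is a DBMS_SCHEDULER external job. This is a hedged sketch, not a definitive answer: the job name is made up, the session needs the CREATE EXTERNAL JOB privilege, and command-line arguments (like df's -h) would have to be supplied separately via DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE.

```sql
-- Hypothetical sketch: run an OS command from PL/SQL via an external scheduler job.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'RUN_DF',       -- hypothetical job name
    job_type   => 'EXECUTABLE',
    job_action => '/bin/df',      -- the OS command to run
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/
```

Note that the command's output goes to the job log, not back to the calling session, so capturing df's output for use in a query takes additional work.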

Theta join explanation

Posted: 03 Aug 2013 09:38 AM PDT

Given the following question, I'd be glad if someone could help me.

Take the following relational schemes:

R(A, B , C)     S(D, E, F)   

where A and D attributes are the primary keys.

Assume to have an instance r of R with n tuples and an instance s of S with m tuples.

Moreover, assume to have a referential integrity constraint between C and the primary key of S.

1) How many tuples does the θ-join between s and r contain if the join predicate is C = D?
2) How many tuples does the θ-join between s and r contain if the join predicate is B = E?

My answers:

1) n tuples since it all depends on the number of records belonging to the second relation that match the records which are part of the first one

2) Zero, since there are no common attributes.

Is my reasoning correct?

Any help will be highly appreciated...thanks in advance

[MS SQL Server] Profiler Trace duration?

[MS SQL Server] Profiler Trace duration?


Profiler Trace duration?

Posted: 02 Aug 2013 11:21 PM PDT

Hi, I have transferred trace-file data into a table, and now I want to check the duration of particular events. In the query below against the trace table, is Duration reported in milliseconds or microseconds?

SELECT TextData, Duration, CPU
FROM trace_table_name
WHERE EventClass = 12 -- SQL:BatchCompleted events
AND CPU < (Duration * 1000)

thanks
ananda
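For reference, assuming SQL Server 2005 or later: when a trace is written to a file or table, Duration is stored in microseconds while CPU is in milliseconds (the Profiler UI displays Duration in milliseconds by default). A sketch that reports both on the same scale, using the table name from the question:

```sql
-- Duration in the trace table is microseconds; CPU is milliseconds.
-- Divide Duration by 1000 to compare like for like.
SELECT TextData,
       Duration / 1000 AS DurationMs,
       CPU             AS CpuMs
FROM trace_table_name
WHERE EventClass = 12;   -- SQL:BatchCompleted
```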

Gaps in tlogs

Posted: 03 Aug 2013 02:19 AM PDT

Ok, so here's the issue. We have a report server database that has been correctly updating tlogs for years. A couple days ago, we shrunk the live database/log, and now we continue to receive "gaps" in the tlogs. Does anyone have any suggestions on how to resolve this? Should we somehow restore the entire report server database from the live database, so that tlog shipping will just kick in from there?

Sql server 2008 R2 --- procedure Cache

Posted: 09 Aug 2011 02:17 AM PDT

Hi,
We have SQL Server 2008 R2 on a physical server with 512 GB of RAM. We have around 300 GB of data on disk, and SQL Server max memory has been set to 485 GB; SQL Server is using around 300 GB of it. The question here is: the plan cache hit ratio is only 65%. Can you please help me resolve this and increase the plan cache hit ratio to 100%? We are doing around 7000 batches/sec, and we have enabled optimize for ad hoc workloads and forced parameterization.
Thanks for the help,
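As a diagnostic starting point (a hedged sketch, not a fix): at 7000 batches/sec, a low plan cache hit ratio often means the cache is dominated by single-use ad hoc plans, which the DMV below breaks down by object type.

```sql
-- Sketch: break the plan cache down by object type to see whether
-- single-use ad hoc plans dominate (a common cause of a low hit ratio).
SELECT objtype,
       COUNT(*)                                          AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024  AS size_mb,
       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)    AS single_use_plans
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;
```

If the Adhoc row shows a large count of single-use plans, the workload itself (non-parameterized statements) is the likely cause, and a 100% hit ratio may not be an achievable target.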

[SQL 2012] sql server Merge Replication

[SQL 2012] sql server Merge Replication


sql server Merge Replication

Posted: 03 Aug 2013 01:21 AM PDT

I have found a couple of blogs on Merge Replication, but wanted to ask if anyone has a blog or site they would recommend about Merge Replication. Thanks in advance.

SSIS, bulk import with ODBC

Posted: 02 Aug 2013 07:58 PM PDT

Hi all,
First of all, if I am posting this in the wrong forum, my apologies; I didn't see a dedicated forum for 2012 business intelligence.

Let me explain our scenario. We are running SQL Server 2012, and we run a daily import from a Progress database into it. However, the bulk import from the SSIS package is very slow. What we do now:
- Recreate the table with indexes and fields etc.
- Import with bulk import
- Delete the old table and rename the newly imported table

However, the total import time is around 9 hours each day, and we are not even importing all of the tables. Our main problem is that one table is over 3 million rows, and the import time of that table alone is around 3 hours. In my opinion that can go faster, but I simply don't know how to achieve it. Is there a better practice to increase the speed of the bulk import so that the total import time gets shorter?

One thing that may be a problem: the Progress database the data is extracted from is read-only, so we can't put data directly into that database (only with our application); changes to that database are not allowed.

Does anyone have tips to make our SSIS package run (much) faster?
Thx in advance!

Upgrading to 2012 from 2008R2 on an active/active cluster; named instance on each

Posted: 02 Aug 2013 07:27 AM PDT

So I've read about upgrading to 2012 on a dual-instance active/active cluster. They say it is best to fail over one instance to the other node, upgrade both instances on the now-passive node, and THEN fail the two active instances over to it. This caused us an issue when upgrading from 2005 to 2008R2. Is it different in 2012? Is there a difference between "move services to another node" and "simulate failover"? Testing in about an hour in a sandbox, but I'd like feedback from others.

Mapping Dissimilar Data

Posted: 02 Aug 2013 10:35 AM PDT

For a busy OLTP application, what is the most efficient SQL method/process/tool/technique to map two dissimilar databases in real time? Queries from a new application need a homogeneous interface whether connected to an old, inefficient legacy DB or to a new one with a different structure. Is this what SCHEMABINDING does? See attached diagram.
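For context, SCHEMABINDING is not itself a mapping mechanism: it ties a view or function to the schema of the objects it references, so those objects can't be altered or dropped underneath it. A hedged sketch (table and view names here are made up for illustration):

```sql
-- Hypothetical example: the view is bound to dbo.LegacyOrders' schema,
-- so ALTER TABLE / DROP on the referenced columns will fail while the view exists.
CREATE VIEW dbo.OrdersUnified
WITH SCHEMABINDING
AS
SELECT OrderId, CustomerId, OrderDate
FROM dbo.LegacyOrders;
```

A view like this can present a uniform column list over either database, but SCHEMABINDING only locks the underlying schema in place; the mapping itself comes from the view definition.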

how to uninstall SSIS Package in integration Service catalog.

Posted: 02 Aug 2013 03:33 AM PDT

Hi - I experimented with creating an SSIS package to process cubes using SQL Server Data Tools. When I built the project, it created an "Integration Services project deployment file". I executed that file to deploy the SSIS package to the SQL 2012 Integration Services catalog (it asked me to create a new catalog named SSISDB, and I did). After the deployment, I saw the SSIS package under the Integration Services catalog, but I also see that the SSISDB database was created, plus, under SQL Server Agent jobs, there is an item called "SSIS Server Maintenance Job".

Now my question is: I don't want to use this SSIS package any more. How do I uninstall the SSIS package and everything else that was created by this deployment?

Please advise. Thanks,
Dee

[T-SQL] deny execute permissions on sp_linkedservers for particular login

[T-SQL] deny execute permissions on sp_linkedservers for particular login


deny execute permissions on sp_linkedservers for particular login

Posted: 02 Aug 2013 03:32 AM PDT

Hi friends,
Is there any way to deny execute permission on sp_linkedservers for a particular SQL login?
Thanks in advance.

Need help with T-SQL to combine email addresses

Posted: 02 Aug 2013 05:04 AM PDT

Hello,
I am trying to take email addresses in a column and concatenate them into one long string, separated by semicolons. So the results would be like:

bob@company.com;mary@company.org;paul@company.com;susan@company.net

I have the T-SQL below using a while loop, which does not work, and I am wondering if a cursor would work better. I am not sure, as what is below is not working. Can you help me with this query to get the results?

declare @emails as varchar(5000)

while exists (select People.cEmail from dbo.People)
begin
    set @emails = cEmail + ';' + @emails
end

Thanks.
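A common set-based approach on SQL Server versions before 2017 is FOR XML PATH concatenation rather than a loop or cursor. This sketch assumes the dbo.People table and cEmail column from the question:

```sql
-- Concatenate all cEmail values into one semicolon-separated string.
-- STUFF removes the leading separator; the TYPE/.value() step
-- un-escapes any XML entities introduced by FOR XML.
DECLARE @emails varchar(5000);

SELECT @emails = STUFF(
    (SELECT ';' + cEmail
     FROM dbo.People
     FOR XML PATH(''), TYPE).value('.', 'varchar(5000)'),
    1, 1, '');

SELECT @emails;
```

On SQL Server 2017 or later, STRING_AGG(cEmail, ';') does the same in a single call.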

Date Parse Help

Posted: 02 Aug 2013 03:12 AM PDT

Hi guys, here is my Date column; sample data below:

20130801173009
20130801173009
20130801173009
20130801173009
20130801173009
20130801173009

Can someone help me parse out the year, month, and day, with the rest as the time?
Thanks in advance.
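A hedged sketch, assuming the values are 14-character yyyyMMddhhmmss strings in a column called DateCol (the column and table names here are placeholders; adjust them to your schema):

```sql
-- Split a yyyyMMddhhmmss string into its parts with SUBSTRING.
SELECT SUBSTRING(DateCol, 1, 4) AS [Year],
       SUBSTRING(DateCol, 5, 2) AS [Month],
       SUBSTRING(DateCol, 7, 2) AS [Day],
       SUBSTRING(DateCol, 9, 2) + ':' +
       SUBSTRING(DateCol, 11, 2) + ':' +
       SUBSTRING(DateCol, 13, 2) AS [Time]
FROM dbo.MyTable;   -- hypothetical table name
```

If the column is numeric rather than character, cast it first, e.g. SUBSTRING(CONVERT(varchar(14), DateCol), 1, 4).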

Search This Blog