Thursday, July 18, 2013

[how to] Query (duration) difference between data value changes


Query (duration) difference between data value changes

Posted: 18 Jul 2013 06:41 PM PDT

I am trying to write a query that would get me the duration of a status in a table. This query needs to work in SQL 2008.

Say I have the following table:

Key  Value  RecordDate
1    1      2012-01-01
2    1      2012-01-02
3    1      2012-01-03
4    5      2012-01-05
5    5      2012-01-05 12:00:00
6    12     2012-01-06
7    1      2012-01-07
8    1      2012-01-08

I would like to get the following result

Value  StartDate   EndDate     Duration
1      2012-01-01  2012-01-05  4 days
5      2012-01-05  2012-01-06  1 days
12     2012-01-06  2012-01-07  1 days
1      2012-01-07  NULL        NULL

Basically, I would like to get the duration of each value before it changes.

I am getting somewhere close, but still can't figure it out:

SELECT [Key], [Value],
       MIN(RecordDate) OVER(PARTITION BY [Value]) AS 'StartDate',
       MAX(RecordDate) OVER(PARTITION BY [Value]) AS 'EndDate',
       DATEDIFF(day, MIN(RecordDate) OVER(PARTITION BY [Value]),
                     MAX(RecordDate) OVER(PARTITION BY [Value]))
FROM [RateTable]
ORDER BY RecordDate

I know that SQL Server 2012 has the LAG and LEAD functions, but since I am dealing with SQL Server 2008, I can't use them.

Please advise.

Here is the SQL statement that generates the sample data:

CREATE TABLE RateTable(
    [Key] [int] IDENTITY(1,1) NOT NULL,
    [Value] [int] NULL,
    [RecordDate] [Date] NULL
)
GO

INSERT INTO [RateTable] VALUES (1, '2012-01-01');
INSERT INTO [RateTable] VALUES (1, '2012-01-02');
INSERT INTO [RateTable] VALUES (1, '2012-01-03');
INSERT INTO [RateTable] VALUES (5, '2012-01-04');
INSERT INTO [RateTable] VALUES (5, '2012-01-05 12:00:00');
INSERT INTO [RateTable] VALUES (12, '2012-01-06');
INSERT INTO [RateTable] VALUES (1, '2012-01-07');
INSERT INTO [RateTable] VALUES (1, '2012-01-08');
GO
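On SQL Server 2008 the usual LAG-free approach is the "gaps and islands" trick: the difference of two ROW_NUMBER()s is constant within each unbroken run of equal values, so grouping on that difference yields one row per run, and a self-join on consecutive run numbers supplies each run's end date. Below is a hedged sketch of the same idea, run through SQLite from Python so it is self-contained (julianday stands in for T-SQL's DATEDIFF; in SQL Server you would keep the same CTEs and use DATEDIFF(day, ...)). It uses the INSERT data above, so the first island runs 2012-01-01 to 2012-01-04:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
conn.executescript("""
CREATE TABLE RateTable (Key_ INTEGER PRIMARY KEY, Value INTEGER, RecordDate TEXT);
INSERT INTO RateTable (Value, RecordDate) VALUES
  (1,'2012-01-01'),(1,'2012-01-02'),(1,'2012-01-03'),
  (5,'2012-01-04'),(5,'2012-01-05 12:00:00'),
  (12,'2012-01-06'),(1,'2012-01-07'),(1,'2012-01-08');
""")

rows = conn.execute("""
WITH grp AS (
  -- The two row numbers drift apart exactly when Value changes, so their
  -- difference labels each unbroken run ("island") of a value.
  SELECT Value, RecordDate,
         ROW_NUMBER() OVER (ORDER BY RecordDate)
       - ROW_NUMBER() OVER (PARTITION BY Value ORDER BY RecordDate) AS island
  FROM RateTable
),
islands AS (
  SELECT Value, MIN(RecordDate) AS StartDate
  FROM grp
  GROUP BY Value, island
),
ordered AS (
  SELECT Value, StartDate,
         ROW_NUMBER() OVER (ORDER BY StartDate) AS rn
  FROM islands
)
-- Each island ends where the next one starts (NULL for the last, open island).
SELECT a.Value, a.StartDate, b.StartDate AS EndDate,
       CAST(julianday(b.StartDate) - julianday(a.StartDate) AS INTEGER) AS Duration
FROM ordered AS a
LEFT JOIN ordered AS b ON b.rn = a.rn + 1
ORDER BY a.rn
""").fetchall()

for row in rows:
    print(row)
```

With the INSERT data this yields (1, '2012-01-01', '2012-01-04', 3), then the 5 and 12 islands, and finally an open (1, '2012-01-07', None, None) row, matching the shape of the desired result.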

Could not allocate a new page for database because of insufficient disk space in filegroup PRIMARY

Posted: 18 Jul 2013 08:17 PM PDT

*Error 0xc0202009: Data Flow Task 1: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. "Could not allocate a new page for database 'database' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.". (SQL Server Import and Export Wizard)*

I'm trying to upload a series of 2GB files, and while loading the 4th file I keep getting the above error message, even after going to Database properties > Files and changing the filegroup's autogrowth to 2500 MB and the max size to unlimited.

What is the best workaround? This data will ultimately take up around 60-80GB.
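If autogrowth alone doesn't help (for instance, because the volume holding the PRIMARY files is itself full), the error message's other suggestion is to add a file on a volume with free space. A hedged sketch; the database name, logical name, and path below are placeholders:

```sql
-- Assumption: D:\ has free space; all names and paths are placeholders.
ALTER DATABASE [database]
ADD FILE (
    NAME = N'database_data2',
    FILENAME = N'D:\Data\database_data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 2500MB
) TO FILEGROUP [PRIMARY];
```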

What is the uniqueness level of object_id?

Posted: 18 Jul 2013 07:48 PM PDT

What is the uniqueness level of object_id in SQL Server? I mean, is this value unique per database, per instance, per server? Also, what happens when I restore the database on another server or instance? And what happens to the object_id if I create the database from a script instead of a backup?

Should I refresh query plan cache

Posted: 18 Jul 2013 02:27 PM PDT

Please let me explain my problem and situation:

I have a web application - MVC3, MSSQL Server 2005, LinqToSQL. It had been running great until one fine morning I pushed a lot of rows into a heavily used table, and since then I have been getting query timeouts. In order to fix the problem I ran the Database Tuning Advisor and added some indexes and statistics. I also created a maintenance plan to rebuild indexes daily. After those additions the application behaved unstably; it would work fast for a couple of hours, then start timing out again. Next, I had to clean up the table anyway, and the number of rows in it is even smaller now than it was before, but the timeouts are still happening. So I removed all the indexes that I created, and now the website is much more stable, but from time to time I still see some timeouts.

I've been trying to figure out how to fix those queries. When I profile one and paste the query directly into SQL Server Management Studio, it returns the results in 1 second, but when I run the same query from my application it takes about 25 seconds. Then, after it has run for the first time, the next time it goes as fast as on the server!

I started doing some research and it looks like when I played with all those indexes my query plans got messed up and now they are creating issues.

My questions are :

  1. Should I refresh my query plan cache (around 22,000 queries - a lot of them have been used only once), and
  2. If I do it, what would the impact be on the SQL while they are all rebuilding?
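For reference, flushing the whole cache is one statement, but there are narrower alternatives that limit the recompile burst; the table name below is a placeholder:

```sql
-- Drops every cached plan in the instance (expect a burst of recompiles
-- and briefly higher CPU while plans are rebuilt):
DBCC FREEPROCCACHE;

-- Narrower alternative: mark only plans that touch one table for recompile.
-- 'dbo.MyBigTable' is a hypothetical name.
EXEC sp_recompile N'dbo.MyBigTable';

-- Statistics may also be stale after large data changes:
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;
```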

Listing databases that are not in use

Posted: 18 Jul 2013 04:19 PM PDT

I want to show databases that are not used. For that I tried this query, but the result set is empty:

mysql> SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA
    -> WHERE SCHEMA_NAME NOT IN (SELECT db FROM INFORMATION_SCHEMA.PROCESSLIST);
Empty set (0.70 sec)

I use MySQL 5.5.24.

Any suggestions?
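The empty result is almost certainly the NOT IN / NULL trap: PROCESSLIST contains rows whose db is NULL (for example, the connection running the query itself), and a single NULL in a NOT IN subquery makes the predicate unknown for every row. Filtering the NULLs out fixes it. The same behaviour, reproduced in SQLite from Python with made-up schema names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE schemata (name TEXT);     -- stand-in for INFORMATION_SCHEMA.SCHEMATA
INSERT INTO schemata VALUES ('app'), ('reports'), ('scratch');
CREATE TABLE processlist (db TEXT);    -- stand-in for INFORMATION_SCHEMA.PROCESSLIST
INSERT INTO processlist VALUES ('app'), (NULL);  -- NULL = session with no default db
""")

# One NULL in the subquery poisons NOT IN: no rows come back at all.
empty = conn.execute(
    "SELECT name FROM schemata "
    "WHERE name NOT IN (SELECT db FROM processlist)").fetchall()

# Excluding NULLs restores the expected answer.
fixed = conn.execute(
    "SELECT name FROM schemata "
    "WHERE name NOT IN (SELECT db FROM processlist WHERE db IS NOT NULL) "
    "ORDER BY name").fetchall()

print(empty)  # []
print(fixed)  # [('reports',), ('scratch',)]
```

So in MySQL, adding WHERE db IS NOT NULL to the subquery should make the original query return the idle schemas.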

Using an index for both uniqueness and fast lookup?

Posted: 18 Jul 2013 04:49 PM PDT

Right now I have a postgresql 8.4 database set up for players in a multiplayer game. I want the username column to be unique. I also want to be able to lookup a player by username quickly. Here's the output of \d players:

                         Table "public.players"
  Column   |  Type   |                      Modifiers
-----------+---------+------------------------------------------------------
 id        | bigint  | not null default nextval('players_id_seq'::regclass)
 username  | text    | not null
 trophies  | integer |
Indexes:
    "pk_id" PRIMARY KEY, btree (id)
    "players_username_key" UNIQUE, btree (username)
    "players_trophies_index" btree (trophies DESC NULLS LAST)
    "players_username_index" btree (username)

I'm not a DBA, so bear with me. This seems like an inefficient use of disk space by having two indices on the username column: one for uniqueness and one for fast lookup. Is it possible to combine them into one index, maintaining uniqueness and fast lookup? If so, are there any drawbacks to such an approach?
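In PostgreSQL a UNIQUE constraint is itself backed by a btree index that the planner uses for ordinary lookups, so (assuming both indexes cover the same bare column, as in the \d output above) the plain index is redundant and can simply be dropped:

```sql
-- players_username_key (the UNIQUE btree) already serves equality lookups;
-- the extra plain btree on the same column only costs disk space and write time.
DROP INDEX players_username_index;
```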

MySQLism, or SQL Standard

Posted: 18 Jul 2013 12:28 PM PDT

Just came across a weird issue whereby, depending on how I create an index, an index name may or may not be required.

http://dev.mysql.com/doc/refman/5.5/en/create-index.html http://dev.mysql.com/doc/refman/5.5/en/alter-table.html

CREATE INDEX `random_name` ON `my_table` (`my_column`); # Requires an index name

ALTER TABLE `my_table` ADD INDEX (`my_column`); # Does not require an index name

It seems to me that the CREATE INDEX call should not require an index name. I'm wondering if this is a MySQLism, or part of the SQL standard?

How to Create Accounting Statement [on hold]

Posted: 18 Jul 2013 11:15 AM PDT

I have data like this:

Column1         Column2
==============  =======
sales              5000
purchase           3000
salesreturn         100
Purchasereturn      100
Expense            1000

I want output like this:

Expense           1000         sales          5000
Purchase          3000         salesreturn     100(-)
PurchaseReturn     100(-)
                ------                       ------
                  3900                         4900

I am using Crystal Reports to display this output.

Extended event (sp_statement_completed) firing multiple times

Posted: 18 Jul 2013 01:41 PM PDT

I'm using SQL 2008 R2 enterprise and this is my first foray into extended events. It's pretty cool stuff. I'm wanting to create a session that will run basically all the time to audit query times on our SQL server. I will then be able to pipe that data into a product called Splunk and correlate that data with all sorts of other metrics.

Anyway, I'm having some odd things happen (odd to me ... I'm sure there is a reason) with the sp_statement_completed event. The .xel file is growing SUPER fast ... like 1GB in a minute fast. Looking at the data I see the same SQL statements duplicated sometimes 75-100 times. What can I do to deduplicate/filter this data before it goes into the file? Many of them have a cpu of zero and a duration of zero. I think I might even be seeing a little duplication using sql_statement_completed. I've also noticed that I do see stored procedure end times using the sql_statement_completed event. So, is there even a need to use sp_statement_completed?
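One option is to filter in the event session itself, so the zero-cost statements never reach the file target. A sketch for 2008 R2; the session name, file path, and duration threshold are all placeholders (and note the duration unit changed between SQL Server versions, so verify it on your build):

```sql
CREATE EVENT SESSION [query_audit] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text)
    -- Skip the zero-cost statements that are flooding the file;
    -- check the duration unit on your build before picking a threshold.
    WHERE (duration > 0)
)
ADD TARGET package0.asynchronous_file_target (
    SET filename = N'C:\XEvents\query_audit.xel'   -- placeholder path
);
```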

Any thoughts would be most welcomed and thanks in advance.

SSIS hangs during For Each File enumeration

Posted: 18 Jul 2013 11:17 AM PDT

While debugging an SSIS package that loops over thousands of files, execution hangs with no errors and no failed tasks. Slightly simplified, the package does this:

For Each File (a directory full of pdf files, each ranging from 800 - 2000 pages and 3 - 8MB filesize)
    * Script Task (get file information)
    * File System Task (clear out temp folder)
    * Execute Process Task (use pdftk to break file into individual pages)
    * For Each File (each page in temp folder, 40 - 100KB each)
        * Script Task (parse pdf file)
        * Execute Process Task (pdftk to concatenate like pages)
        * Execute SQL Task (insert one row into staging table using OPENROWSET)
        * File System Task (delete inserted file)
        * Script Task (set variables for next iteration)
    * Execute SQL Task (execute proc to merge staging into prod)
    * File System Task (rename completed file)

When I execute the package (debug mode in BIDS), it runs as expected through the first file and well into the second, but eventually hangs between iterations of the inner loop. When it hangs, all tasks are green and the two loop containers are yellow. The output window does not generate any messages, and going through the Progress tab, each task has a matching number of starts and stops except for the two loops and the package itself.

My first thought was that I'd exceeded an iteration limit for the For Each File enumerator, but I cannot find any documentation for such a limit. What else can I check to determine why SSIS stops running?

How to do incremental/differential backup every hour in Postgres 9.1?

Posted: 18 Jul 2013 11:42 AM PDT

Trying to do an hourly hot incremental backup of a single postgres server.

I have the following setup in postgresql.conf:

max_wal_senders = 2
wal_level = archive
archive_mode = on
archive_command = 'copy "%p" "c:\\postgres\\archive\\%f"'

I did a base backup with pg_basebackup -U postgres -D ..\archive -Ft -l 2013-07-07 -x

Which made a big base.tar file in the archive folder and added some long file name files, which I assume are the WALs.

pg_start_backup('label') and pg_stop_backup() seem to create the WAL files in pg_xlog, and then copy them to the archive folder.

  1. what command(s) do I run to do a new incremental backup (pg_basebackup does a new base backup which I don't want right now)? do I just run select pg_start_backup('label'); select pg_stop_backup(); on a schedule?

  2. What does the label in pg_basebackup and pg_start_backup() do exactly?

  3. WAL Files don't seem to be removed from pg_xlog. What should I do about that? It seems to keep 5 or so WALs in pg_xlog. Is that to be expected?

  4. Do I need to backup the .backup files in the archive folder or just the 16,384KB WAL files?

  5. should I use the --xlog parameter and if so do I need to change wal_keep_segments from 0?
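On question 1: with WAL archiving already configured, the "incremental" part is simply the stream of archived WAL segments, so there is no need to re-run pg_basebackup every hour; you can instead force the current segment out to the archive on a schedule. A sketch for 9.1 (pg_switch_xlog is the 9.x name; it was renamed pg_switch_wal in PostgreSQL 10):

```sql
-- Close the current WAL segment so archive_command ships it immediately,
-- bounding how much of the last hour a restore could lose:
SELECT pg_switch_xlog();
```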

How to determine Oracle LOB storage footprint?

Posted: 18 Jul 2013 11:23 AM PDT

With SECUREFILE storage I can specify whether I want compression (and the level of it) and deduplication, and it's all a trade-off between time and space.

Timing is fairly easy to profile but what's the easiest way to get a reasonably accurate measurement of how much space a specific LOB column takes up?
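One approach that is usually accurate enough is to ask the data dictionary for the size of the LOB segment backing the column; SECUREFILE compression and deduplication savings show up directly in the segment size. The table name below is a placeholder:

```sql
-- On-disk size of each LOB segment for one table (run as the table owner):
SELECT l.column_name,
       ROUND(s.bytes / 1024 / 1024, 1) AS size_mb
FROM   user_lobs     l
JOIN   user_segments s ON s.segment_name = l.segment_name
WHERE  l.table_name = 'MY_TABLE';
```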

Create a trigger to update table data on another Server's database

Posted: 18 Jul 2013 07:27 PM PDT

I am creating a trigger in MySQL and I need a little help.

I have 2 websites, 2 databases (same name) on 2 different web servers, S1 & S2.

These databases have the same tables names.

I want both the user data on both the websites to be the same.

So if one user registers on S1, then that user registration information should be passed to S2.

If a user registration information is updated on S1, the same information should be updated on S2.

And the same applies for S2.

How can I create a trigger so that every time there is an insert / update / delete in database on S1, then the user table on S2 also gets automatically updated.

And every time there is an insert / update / delete in database on S2, then the user table on S1 also get automatically updated.

Is this possible? Could you provide some examples?
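A MySQL trigger cannot open a connection to another server by itself; the usual trigger-based workaround is a FEDERATED table that points at the remote copy, with the trigger writing through it (built-in replication is generally the more robust answer for keeping whole tables in sync). A minimal sketch with hypothetical table, column, and credential names; note that doing this in both directions risks the two triggers feeding each other, which is one more reason to prefer replication:

```sql
-- On S1: a FEDERATED "window" onto the users table that lives on S2.
-- The FEDERATED engine must be enabled; connection details are placeholders.
CREATE TABLE users_on_s2 (
    id   INT PRIMARY KEY,
    name VARCHAR(100)
) ENGINE=FEDERATED
  CONNECTION='mysql://repl_user:secret@s2.example.com:3306/mydb/users';

-- Mirror inserts on S1 over to S2:
DELIMITER //
CREATE TRIGGER users_after_insert
AFTER INSERT ON users
FOR EACH ROW
BEGIN
    INSERT INTO users_on_s2 (id, name) VALUES (NEW.id, NEW.name);
END//
DELIMITER ;
```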

SSRS Reports on WUXGA Displays

Posted: 18 Jul 2013 12:10 PM PDT

We use WUXGA displays (1920 x 1200) and have lots of SSRS reports. Since these reports need to be printed once in a while, the layout is adjusted to A4. However, when the users look at the reports on the display, around 40% of the screen is blank. Absolute waste of screen space. How can I adjust SSRS to use the full screen size for reports?

Tool to export data with all relational data?

Posted: 18 Jul 2013 05:27 PM PDT

Is there a tool to export selected rows from a table together with all related data stored in other tables, linked by relational design?

The purpose is to ease migrations of bits of data between servers for adhoc migrations. I am looking specifically for a tool for MySQL InnoDB with defined foreign keys.

mysql: need help to optimize my query/table

Posted: 18 Jul 2013 10:27 AM PDT

I'm wondering if someone could help me optimize my tables/query to speed up a query. It is currently running ridiculously slow. I think a well-thought out index could help me. Any help would be really appreciated

Tables URLS and TAGS mentioned below are 2 and 20 million rows respectively (will probably end up having 10x). A query like the one below already takes 10 seconds to run.

An Example: http://whatrethebest.com/php+tutorials

Tables

CREATE TABLE IF NOT EXISTS `TAGS` (
  `hash` varchar(255) NOT NULL,
  `tag` varchar(255) NOT NULL,
  UNIQUE KEY `my_unique_key` (`hash`,`tag`),
  KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

and

CREATE TABLE IF NOT EXISTS `URLS` (
  `url` text NOT NULL,
  `domain` text,
  `title` text NOT NULL,
  `description` text,
  `numsaves` int(11) NOT NULL,
  `firstsaved` varchar(256) DEFAULT NULL,
  `md5` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`md5`),
  UNIQUE KEY `md5` (`md5`),
  KEY `numsaves` (`numsaves`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

QUERY

SELECT urls.md5, urls.url, urls.title, urls.numsaves
FROM urls
JOIN tags ON urls.md5 = tags.hash
WHERE tags.tag IN ('php', 'tutorials')
GROUP BY urls.md5
HAVING COUNT(*) = 2
ORDER BY urls.numsaves DESC
LIMIT 20

EXPLAIN

I'm not sure what this shows

id  select_type  table  type    possible_keys      key      key_len  ref                                   rows    Extra
1   SIMPLE       tags   range   my_unique_key,tag  tag      767      NULL                                  230946  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       urls   eq_ref  PRIMARY,md5        PRIMARY  767      jcooper_whatrethebest_urls.tags.hash  1

So I think the problem is:

Certain tags like 'php' have 34,000 entries, most of which have under 5 saves. But in order to get the 20 most saved, it has to sort them all. Right?

I can't really add a 'numsaves' column to TAGS and index on that, because that number will be changing up and down, and that wouldn't make sense. Is it possible to create a cross-table index between urls.numsaves and tags.tag? Or a third table to use in my query somehow? Would this solve my problem? I know almost nothing about indexing.

Any help would be really appreciated!


EDITS BELOW

RESPONSE TO YperCube:

Thank you, Thank you, your suggestions have sped up my queries by a factor of 10-20X . This is an immense improvement. I can't thank you enough.

I'm posting my current queries and tables with execution times in case you or anyone else has any more optimization suggestions. I am worried that as my table grows I may not be able to keep my search times under 3 seconds, which would be a killer.

New Query Example 1

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
JOIN tags AS t2 ON t2.url_id = u.id AND t2.tag = 'language'
ORDER BY u.numsaves DESC
LIMIT 20

Showing rows 20 - 19 (20 total, Query took 0.2009 sec)

id  select_type  table  type    possible_keys           key      key_len  ref                                         rows   Extra
1   SIMPLE       t2     ref     tag_id                  tag_id   767      const                                       53820  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       t1     ref     tag_id                  tag_id   772      const,jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id        1

New Query Example 2 (seems to be slower)

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN (
    SELECT ui.id, ui.numsaves
    FROM urls AS ui
    JOIN tags AS t1 ON t1.url_id = ui.id AND t1.tag = 'programming'
    JOIN tags AS t2 ON t2.url_id = ui.id AND t2.tag = 'language'
    ORDER BY ui.numsaves DESC
    LIMIT 20
) AS ulim ON ulim.id = u.id
ORDER BY ulim.numsaves DESC;

Showing rows 0 - 29 (2,794,577 total, Query took 0.4633 sec)

id  select_type  table       type    possible_keys           key      key_len  ref                                   rows   Extra
1   PRIMARY     <derived2>   ALL     NULL                    NULL     NULL     NULL                                  20     Using filesort
1   PRIMARY     u            eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        ulim.id                               1
2   DERIVED     t2           ref     tag_id                  tag_id   767                                            53820  Using where; Using index; Using temporary; Using filesort
2   DERIVED     t1           ref     tag_id                  tag_id   772      jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
2   DERIVED     ui           eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id  1

Using Query Example on a Single Tag (slower by a lot)

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
ORDER BY u.numsaves DESC
LIMIT 20

Showing rows 20 - 19 (20 total, Query took 3.7395 sec)

id  select_type  table  type    possible_keys           key      key_len  ref                                   rows    Extra
1   SIMPLE       t1     ref     tag_id                  tag_id   767      const                                 200576  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t1.url_id  1

I'm not sure why this one is so much slower?

Do you have any ideas of a query to optimize for querying a single tag?

My Current Tables

CREATE TABLE `urls` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `url` text NOT NULL,
  `domain` text,
  `title` text NOT NULL,
  `description` text,
  `numsaves` int(11) NOT NULL,
  `firstsaved` varchar(256) DEFAULT NULL,
  `md5` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  UNIQUE KEY `md5` (`md5`),
  KEY `id_numsaves_IX` (`id`,`numsaves`)
) ENGINE=InnoDB AUTO_INCREMENT=2958560 DEFAULT CHARSET=utf8

CREATE TABLE `tags` (
  `url_id` int(11) DEFAULT NULL,
  `hash` varchar(255) NOT NULL,
  `tag` varchar(255) NOT NULL,
  UNIQUE KEY `tag_id` (`tag`,`url_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Thank you again

Replication on MySQL server

Posted: 18 Jul 2013 03:27 PM PDT

I had to stop the slave server to test something. After I started the server again, there is a problem with replication on the MySQL server.

On the problematic server

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: servera
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: servera-bin.000024
          Read_Master_Log_Pos: 808459481
               Relay_Log_File: serverb-relay-bin.000071
                Relay_Log_Pos: 4
        Relay_Master_Log_File: servera-bin.000024
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 808459481
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1129
                Last_IO_Error: error connecting to master 'replica@servera:3306' - retry-time: 60  retries: 86400
               Last_SQL_Errno: 0
               Last_SQL_Error:

on the problematic server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | serverb   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

on the main server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | servera   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

According to what I've read, there is a need to execute the following command on the main server:

mysql> FLUSH HOSTS;   

What will happen then? If there is an application connected to it, will it be disconnected as well?

Merge Replication identity field issues

Posted: 18 Jul 2013 02:27 PM PDT

One of our clients is using our software with merge replication of a database on a SQL Server 2008 R2 machine. There are two production environments in separate geographical locations, only one of which is live at any one time, so basically one is live and one is on standby. Only the live database is updated by the applications. Every couple of months they fail over between the datacentres, and the standby environment becomes the live centre. There is an instance of SQL Server 2008 in each datacentre, and merge replication is used to keep them in sync. This was all working OK until the beginning of the year, when we started getting replication errors with some lTID columns in various tables that have the Identity property set.

The errors were like this one:

The insert failed. It conflicted with an identity range check constraint in database 'GateMain', replicated table 'dbo.tGateCalcsLog', column 'lTID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Then, after the last failover, we noticed an issue with the lTID values in one specific table. Our application relies on the lTID value always having incremented in order, such that the highest lTID value is always the newest entry in the table. We've found that, due to how the identity ranges are managed by replication, when the system is failed over the lTID range of the now-live database server may contain values that are lower than those already present in the table. Is there a way to manage this in merge replication so we can guarantee that the next identity value allocated to the lTID column is greater than any lTID currently in the table? Or do we need to use a different type of replication, or possibly mirroring?
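For the range-check errors themselves, the error message's own suggestion can be run on the publisher (whether identity order across failovers can ever be guaranteed is a separate question; ordering by a datetime or rowversion column instead of lTID is the usual hedge). Parameter values below are taken from the error text, the owner is an assumption:

```sql
-- Run at the publisher to hand the article a fresh identity range:
EXEC sys.sp_adjustpublisheridentityrange
     @table_name  = 'tGateCalcsLog',
     @table_owner = 'dbo';
```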

MySQL Slaves lag behind master

Posted: 18 Jul 2013 04:27 PM PDT

I have one master and four slaves. Sometimes all my slaves lag behind the master. I have implemented the heartbeat for monitoring replication lag. Now I am trying to find why the slaves are lagging behind the master.

I saw the slow queries (for today) on the master and I found that the slowest query (DML) was taking 138 seconds. But the slaves were lagging about 1400 seconds and there were also no slow queries on the slaves for DML (update, delete, insert, etc.).

Points to be taken into consideration:

  1. All tables are InnoDB.
  2. 68 GB of RAM (Master as well as slaves).
  3. Data size about 1 TB.
  4. Master and slaves have been running for a long time.

What may be the reason for lag?

MySQL replication between VPS and shared host

Posted: 18 Jul 2013 06:27 PM PDT

I have a VPS where I have ssh access and shared host with CPanel. Both have MySQL.

I need to create a master/slave setup between the two servers. As per this tutorial: How To Set Up Database Replication In MySQL, I will need to modify the my.cnf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

    I have root access with ssh login, where I can modify the /etc/my.cnf file.

  • Shared host

    Cpanel with phpmyadmin.

How to store massive quantities of machine generated data?

Posted: 18 Jul 2013 10:08 AM PDT

I am tasked with developing a system that stores massive quantities of machine-generated data (syslogs) from a number of server farms, and I am wondering what tools you fine folks use in such instances. Our project will need to keep up to 10 TB of data (maybe more in the future) and serve queries fairly quickly. I have been reading about columnar RDBMSs as well as NoSQL options, which all seem promising.

What GUI tools can be used for PostgreSQL in Mac OS X?

Posted: 18 Jul 2013 10:06 AM PDT

I have recently started using PostgreSQL with my rails app.

I was looking for a SQL GUI to the PostgreSQL database (for Mac).

Anything better than pgadmin3?

What are the challenges of administrating an International stock trading database?

Posted: 18 Jul 2013 09:01 PM PDT

I am a student, and have a question for an assignment:

What problems and challenges do database administrators face when designing systems to trade international stocks in real time and 24 hours a day, 6 days a week? Are cloud based database systems as a service a viable option?

How do I change the DEFINER of a VIEW in Mysql?

Posted: 18 Jul 2013 11:27 AM PDT

When I run mysqldump, I get an error:

mysqldump: Got error: 1449: The user specified as a definer ('root'@'foobar') does not exist when using LOCK TABLES  

This makes sense because foobar is a legacy machine that no longer exists.

How do I change the definer of all my tables to 'root'@'localhost'?
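One common approach is to generate ALTER VIEW statements from information_schema and run the output. The definer values below come from the question; the rest is a sketch:

```sql
-- List the offending views and build ALTER statements to re-point the definer:
SELECT CONCAT('ALTER DEFINER = ''root''@''localhost'' VIEW ',
              table_schema, '.', table_name,
              ' AS ', view_definition, ';') AS fix_stmt
FROM information_schema.views
WHERE definer = 'root@foobar';
```

Running each generated fix_stmt recreates the view with the new definer; after that, mysqldump should no longer hit error 1449.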

Dynamic temporary index creation

Posted: 18 Jul 2013 09:40 AM PDT

In my undergrad databases course, my professor mentioned that some DBMS software is so advanced that it can detect when a long-running query would benefit from an index. The DBMS can then create that index for the duration of the query to increase performance.

However, I don't think that's possible, and some initial Googling seems to agree with me. Does anyone know of a DBMS that actually implements this strategy?

Concurrent Transactions vs Simultaneous Transactions

Posted: 18 Jul 2013 09:41 AM PDT

Chapter 6 of David Kroenke's Database Concepts: Database Administration

Is my understanding correct that in a database, simultaneous transactions don't really happen? Instead, concurrent transactions happen so fast that users think they are simultaneous?

The reason I ask is that I was thrown off by the question on my assignment asking how many CPUs are required to perform a simultaneous transaction. I answered that only one CPU is required, since it is really a concurrent transaction that is happening and not a simultaneous one.

Thoughts, anyone?

EDIT: ~ jcolebrand --

@gbn and @Mark,

I appreciate your input, however, I absolutely have no idea about what those terms are because this course I am currently taking is an introductory course to database management and my question was only discussed in the book in 1 paragraph and did not elaborate any further. I might be able to understand your responses in time, but now, i cannot just yet. Thanks though!

Calculate months between two dates

Posted: 18 Jul 2013 09:42 AM PDT

For a University course I've been given the following task

Suppose that we have an employee table like (first_name, last_name, salary, hire_date, ...). There is regulation which states that every employee each year after he/she is hired, must renew his contract with company. Show how many months remain before the next renewal for each employee.

This is my attempt:

select abs(months_between(sysdate, hire_date)/12 -
           round(months_between(sysdate, e.hire_date)/12))
from employees e
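For reference, the anniversary arithmetic the task is after (whole months elapsed since hire, modulo 12, subtracted from 12) can be sketched outside the database. A hedged Python equivalent of an Oracle MONTHS_BETWEEN/MOD approach, counting whole months only; the function name is mine:

```python
from datetime import date

def months_until_renewal(hire_date: date, today: date) -> int:
    """Whole months until the next yearly contract anniversary."""
    # Whole months elapsed since hire (a partial month doesn't count).
    elapsed = (today.year - hire_date.year) * 12 + (today.month - hire_date.month)
    if today.day < hire_date.day:
        elapsed -= 1
    # Months left in the current contract year (12 on the anniversary itself).
    return 12 - (elapsed % 12)

print(months_until_renewal(date(2010, 1, 15), date(2013, 7, 14)))  # 7
```

The SQL version would compute the same thing per employee, e.g. 12 - MOD(TRUNC(MONTHS_BETWEEN(SYSDATE, hire_date)), 12) in Oracle terms, rather than dividing by 12 and rounding as in the attempt above.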

Are there any good and free tools for managing a PostgreSQL database?

Posted: 18 Jul 2013 10:59 AM PDT

Before, I have used phpMyAdmin to manage a MySQL database, but now I would like to manage a PostgreSQL database. The PostgreSQL database is on a server, but not a web server, so I don't use PHP.

Are there any good and free tools for managing a PostgreSQL database?

I have tried pgAdmin III, but it was a far from intuitive application to use, compared to the phpMyAdmin that I have used before. What are PostgreSQL DBAs usually using? Do they use graphical tools like pgAdmin III, or is it mostly command-line tools?

[MS SQL Server] Avamar Transaction Log Backup Issue


Avamar Transaction Log Backup Issue

Posted: 04 Feb 2013 01:34 AM PST

Hi, we just recently started using Avamar to back up everything, including an hourly SQL Server transaction log backup. The issue we're having is that since we started using Avamar, the transaction log (LDF) file keeps growing very large. Has anyone experienced this issue before? If not, any ideas? Thanks.

about servers.

Posted: 18 Jul 2013 03:52 AM PDT

Can anyone please explain in simple terms what physical servers and virtual servers are?

Can't connect using (local) or localhost as a server name

Posted: 18 Jul 2013 02:59 AM PDT

Hello everyone. I have a laptop with SQL Server 2008 installed locally. I can only connect to the server by specifying the instance name (this is true of connection attempts using SSMS, data sources in SSRS, etc.). It would greatly simplify things if I could connect using (local) instead of the instance name. When I try using (local), it tells me the "server was not found or is inaccessible". Can anyone please tell me what I should do to configure this? Thanks in advance for your help.

litespeed restore

Posted: 02 Jul 2013 02:35 AM PDT

We have a database backup file (a LiteSpeed backup file). LiteSpeed is not installed on the server. We need a command to restore the LiteSpeed backup file.

Transaction Log growth out of control

Posted: 18 Jul 2013 12:19 AM PDT

Last weekend I re-indexed some databases. Before doing so, I backed up the databases and set them to simple recovery mode. After the rebuild of the indexes was complete, I set the databases back to full recovery and performed another backup. Ever since, the transaction log growth and size on one of the databases has been huge. The database is 259 GB, but the log file has been quite large. I'm backing up the log and shrinking it. This morning the log file was 108 GB. I tried backing up and shrinking several times, but the size remained the same. I checked for open transactions and there were none. So I put the database in simple recovery and then shrank the log file. I want the database to be in full recovery mode. I have never experienced anything like this. Any ideas? Thanks.
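When a log refuses to shrink even after log backups, the first thing to check is what SQL Server itself says is pinning the log; the database name below is a placeholder:

```sql
-- 'LOG_BACKUP' means the log is waiting for a transaction log backup;
-- 'ACTIVE_TRANSACTION', 'REPLICATION', etc. point to other culprits.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';
```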

Backup strategies and scheduling

Posted: 17 Jul 2013 11:48 PM PDT

Our DBA has left the company and I have just inherited the daily job of reviewing backup jobs to ensure that they have completed successfully. I created a list of databases and their corresponding backup jobs/schedules in order to check them off and document completion. During this exercise I have noted two schedules that appear odd to me: DatabaseW is scheduled for a full daily, diffs every 4 hours, and logs every 6 hours. DatabaseR is scheduled for a full daily and logs weekly. I have always scheduled logs most frequently, in between diffs and/or fulls. Am I missing something, or should these schedules be changed?

Server crash during load testing

Posted: 18 Jul 2013 12:02 AM PDT

After a peak load test carried out on an application for two consecutive days, the DB server got automatically bounced the next day with the following events logged. Does a deadlock situation or a lot of concurrent connections bring down the server? What should be taken care of in order to address this issue? The following is the error message from the logs:

[sqsrvres] printODBCError: sqlstate = 08S01; native error = 0; message = [Microsoft][SQL Server Native Client 10.0]The connection is no longer usable because the server failed to respond to a command cancellation for a previously executed statement in a timely manner. Possible causes include application deadlocks or the server being overloaded. Open a new connection and re-try the operation.
Cluster resource 'SQL Server' in clustered service or application 'SQL Server (MSSQLSERVER)' failed.
Cluster Agent: The cluster resource SQL Server has failed. [SNMP TRAP: 15006 in CPQCLUS.MIB]
Cluster Agent: The cluster resource SQL Server Agent has failed. [SNMP TRAP: 15006 in CPQCLUS.MIB]
The SQL Server Agent (MSSQLSERVER) service terminated unexpectedly. It has done this 2 time(s).
Cluster Agent: The cluster resource SQL Server Agent has become degraded. [SNMP TRAP: 15005 in CPQCLUS.MIB]
[sqsrvres] CheckQueryProcessorAlive: sqlexecdirect failed
[sqsrvres] printODBCError: sqlstate = HYT00; native error = 0; message = [Microsoft][SQL Server Native Client 10.0]Query timeout expired
[sqsrvres] OnlineThread: QP is not online.

SQL Server available memory during load test

Posted: 18 Jul 2013 12:00 AM PDT

On a database server, the buffer pool is configured to utilize 80% of RAM and the remaining 20% is for the OS. At the end of a normal load test on the application, the database server shows around 50% available memory. However, when an application peak load test is carried out, it starts with less than 10% available memory on the DB server and then goes down to 7%. And no connections are made to SQL Server during this time apart from the peak load test. Is this acceptable behavior? Can available memory go down to 7% when the memory allocated to the OS is 20%?
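A quick way to sanity-check this is to compare the configured buffer pool cap with what the OS actually reports; a sketch, assuming you have permission to view advanced options:

```sql
-- What is SQL Server allowed to take?
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)';

-- What does the OS report right now?
SELECT total_physical_memory_kb / 1024      AS total_mb,
       available_physical_memory_kb / 1024  AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```

Note that "max server memory" caps the buffer pool only (on 2008); other SQL Server allocations and OS processes can legitimately push available memory below the nominal 20% reserve.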

Cannot access tempdb properties

Posted: 04 Apr 2012 11:45 AM PDT

Immediately after a reboot we can access the properties of tempdb. However, a few minutes later when you try to examine those properties you get:

TITLE: Microsoft SQL Server Management Studio
------------------------------
Cannot show requested dialog.
------------------------------
ADDITIONAL INFORMATION:
Cannot show requested dialog. (SqlMgmt)
------------------------------
Property Size is not available for Database '[tempdb]'. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1+((KJ_RTM).100402-1540+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.PropertyCannotBeRetrievedExceptionText&EvtID=Size&LinkId=20476
==============

I'm not sure if this is a symptom or the problem, but this server and DB are experiencing timeouts for inserts and updates. The vendor is working on it, but I'm hoping we could lend some assistance. None of our other servers that do much higher volumes have this issue. We've been over and over network, memory, etc. Any ideas or thoughts are appreciated.

[Articles] Password Ninjas



Password Ninjas

Posted: 17 Jul 2013 11:00 PM PDT

How do you keep track of passwords? A few experts out there share some of their techniques and Steve Jones adds his thoughts.

[SQL 2012] Can SSMS be made to display a column wider?



Can SSMS be made to display a column wider?

Posted: 18 Jul 2013 12:30 AM PDT

When I select a column that has several hundred characters in it, I need to "grab" the edge and drag it over so I can see the whole string. Is there a way to make SSMS display all of the string, or at least more of it? It would be nice if SSMS would/could redisplay data with whatever formatting is currently on the screen, instead of having to "grab the edge and draaaaag it over ..." :-D

SSIS 2012 Package Won't Run as SQL Server Agent Job

Posted: 06 Jun 2013 10:04 AM PDT

Hello, I'm new to the forum, so sorry in advance for any etiquette I break :) I have an SSIS 2012 package that has an OLE DB data source connecting to an Access database. The package runs fine when I run it from SQL Server Data Tools. I then deploy it to an SSIS catalog and I can run the package fine from there. Next I add it as the only step in a SQL Server Agent job and I get 4 consistent error messages: "failed validation and returned validation status "VS_ISBROKEN"", "One or more component failed validation", "There were errors during task validation", and "An error occurred executing the provided SQL command: "select * from 'zqryExportTesting'". Object reference not set to an instance of an object." I've tried everything I can find. I've set the package encryption to EncryptSensitiveWithPassword instead of the default EncryptSensitiveWithUserKey. I've tried the 32- vs 64-bit runtime (I'm using 32 in the scenarios where it works). I've set the SQL Server Agent service to log on as the same user I am currently logged on to the server as. I've set up a proxy with the same user I am logged on to the servers as. I'm pretty sure I've tried every combination I've researched and still got nothing. I'm sure there is a simple setting or trick I'm missing. Thanks a TON in advance for any help you can provide. -Nick

Help me SQL Select

Posted: 17 Jul 2013 02:32 PM PDT

I have sample data as below:

ID  Activity  Subject  ProjectName  Fullname  CusName
2   Tuan      NULL     NULL         Aquafina
2   Tuan      NULL     T1           NULL
2   Tuan      Test     NULL         NULL
2   Tuan      NULL     NULL         NULL
3   ABC       NULL     NULL         NULL
3   ABC       Test     NULL         NULL

And I want to select the result as below:

ID  Activity  Subject  ProjectName  Fullname  CusName
2   Tuan      Test     Test         T1        Aquafina
3   ABC       Test     NULL         NULL

Everyone, please. Thank you very much.
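Reading the desired result as "collapse each (ID, Activity) group down to its non-NULL values", one common pattern is GROUP BY with MAX, which ignores NULLs. Table and column names below are taken from the post's headers and are assumptions:

```sql
-- MAX() skips NULLs, so each output column gets the one non-NULL
-- value present in the group (when at most one row supplies it).
SELECT ID,
       Activity,
       MAX(Subject)     AS Subject,
       MAX(ProjectName) AS ProjectName,
       MAX(Fullname)    AS Fullname,
       MAX(CusName)     AS CusName
FROM dbo.SampleData           -- hypothetical table name
GROUP BY ID, Activity;
```

If a group can hold two different non-NULL values in the same column, MAX picks the larger one, so this only matches the desired output under the one-value-per-column assumption.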

converting an date from oracle to a regular datetime in sql

Posted: 17 Jul 2013 09:20 AM PDT

So I have some data that I am importing from Oracle, and the date is coming in the following way: 25-JUN-13 12.01.15.096000000 AM. If I run the following:

select convert(date,'25-JUN-13 12.01.15.096000000 AM') as [date]

I get the following error: Conversion failed when converting date and/or time from character string. If I run the following:

select convert(datetime,substring('25-JUN-13 12.01.15.096000000 AM',0,10))

it returns the data in the format 2013-06-25 00:00:00.000. I need to be able to grab the complete timestamp and put it in a datetime format. Any ideas on how to do that?
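One sketch, assuming every value arrives in Oracle's default DD-MON-YY HH.MI.SS.FF9 AM shape (fixed positions): swap the two dots that separate hours/minutes/seconds for colons, keep the third dot as the decimal point, and trim the nine fractional digits down to three so the value fits a datetime-compatible precision:

```sql
DECLARE @ora varchar(40) = '25-JUN-13 12.01.15.096000000 AM';

-- Positions 13 and 16 hold the HH.MI and MI.SS dots in this fixed format;
-- LEFT(...,22) keeps date + time + 3 fractional digits, RIGHT(...,3) = ' AM'.
SELECT CONVERT(datetime2(3),
       STUFF(STUFF(LEFT(@ora, 22), 13, 1, ':'), 16, 1, ':') + RIGHT(@ora, 3));
```

This is position-dependent: if the source strings can vary in width (single-digit days, a different NLS timestamp format), the offsets would need to be computed with CHARINDEX instead.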

Has anyone implemented SQL Server 2012 Always On for disaster recovery

Posted: 12 Apr 2013 11:31 AM PDT

I am helping a client upgrade his servers (currently SQL Server 2005 with a Windows 2003 cluster). I talked to a Dell sales rep this afternoon and he had not heard of anyone implementing AlwaysOn for DR. He said all his sales for SQL Server 2012 have been for active/passive clusters. I've attended several SQL Saturday sessions and webinars on AlwaysOn, but now I'm worried that nobody has really implemented it in production. I don't want the client to be on the technology tip by himself, but I would like to take advantage of AlwaysOn if it really works. I would appreciate it if anybody could tell me whether they have implemented AlwaysOn and what pros and cons they have found. Thanks in advance, Mike Byrd

DQS NULL REFERENCE ERROR

Posted: 17 Jul 2013 03:06 AM PDT

We are experiencing a DQS error whilst running an SSIS package with a DQS component. I have created a DQS knowledge base which is accessed by the DQS component. When the package is run from SQL Server Agent we get the following error:

[size="1"]DFT_ProcessWorkData:Error: Microsoft.Ssdqs.Infra.Exceptions.EntryPointException: The Knowledge Base does not exist [Id : 1030017]. at Microsoft.Ssdqs.Proxy.Database.DBAccessClient.Exec() at Microsoft.Ssdqs.Proxy.EntryPoint.KnowledgebaseManagementEntryPointClient.DQProjectGetById(Int64 id) at Microsoft.Ssdqs.Component.DataCorrection.Logic.DataCorrectionComponent.PostExecute() at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPostExecute(IDTSManagedComponentWrapper100 wrapper)
DFT_ProcessWorkData:Error: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.Ssdqs.Component.DataCorrection.Logic.DataCorrectionComponent.ProcessChunk(ReadOnlyCollection`1 fieldMappings, ReadOnlyCollection`1 records, CorrectedRecordsStatusStatistics& correctedRecordsTotalStatusStatistics) at Microsoft.Ssdqs.Component.DataCorrection.Logic.DataCorrectionComponent.ProcessInput(Int32 inputID, PipelineBuffer buffer) at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostProcessInput(IDTSManagedComponentWrapper100 wrapper, Int32 inputID, IDTSBuffer100 pDTSBuffer, IntPtr bufferWirePacket)
DFT_ProcessWorkData:Error: An unexpected error occurred in the DQS infrastructure. System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.Ssdqs.Component.DataCorrection.Logic.DataCorrectionComponent.ProcessChunk(ReadOnlyCollection`1 fieldMappings, ReadOnlyCollection`1 records, CorrectedRecordsStatusStatistics& correctedRecordsTotalStatusStatistics)[/size]

When the packages were run on our development server they ran fine. The same packages were then deployed to our RTE environment. DQS was also installed on our RTE environment. SQL jobs were then scheduled to run the packages. The jobs failed, giving the above error. I ran the package from Visual Studio and the package ran successfully. This led me to believe it was a rights issue. I therefore changed the credential and proxy account which ran the SQL jobs to ensure that they had the necessary rights. I tried executing the package again from SQL Server Agent but still got the same error. We uninstalled DQS on our RTE environment and reinstalled it again, but the problem still remains. Has anybody else experienced this issue, or any idea what may be causing the error?

[T-SQL] delete top 100



delete top 100

Posted: 17 Jul 2013 05:56 AM PDT

Hi, my query is giving me an error:

delete top(1000) from table a
join table b on a.id = b.id
and b.date < getdate()

error: incorrect syntax near a. I want to delete the top 1000 rows from table a.
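The DELETE TOP syntax with a join names the target's alias right after DELETE rather than using "delete ... from"; a sketch with the table names generalized:

```sql
-- Delete up to 1000 matching rows from TableA. Which 1000 is
-- undefined without an ORDER BY; TOP here just caps the count.
DELETE TOP (1000) a
FROM dbo.TableA AS a
JOIN dbo.TableB AS b
    ON a.id = b.id
WHERE b.[date] < GETDATE();
```

When the deleted rows must follow a specific order, a common alternative is deleting through a CTE that selects `TOP (1000) ... ORDER BY ...`.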

SP Date parameter default to GETDATE()?

Posted: 17 Jul 2013 10:34 AM PDT

I want my procedure to include an optional date parameter that defaults to the current date. Why does the following code show a syntax error?

[code="sql"]CREATE PROCEDURE sp_MyProcedure
    @Record_ID Varchar(12),
    @OtherKey_ID Int,
    @DateParameter DATE = GETDATE(),
    @Comments VARCHAR(MAX) = NULL
AS ....[/code]
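Parameter defaults must be constants, and GETDATE() is a function call, hence the syntax error. The usual workaround is to default to NULL and substitute inside the body; a sketch:

```sql
CREATE PROCEDURE dbo.MyProcedure
    @Record_ID     varchar(12),
    @OtherKey_ID   int,
    @DateParameter date = NULL,    -- constant default
    @Comments      varchar(max) = NULL
AS
BEGIN
    -- Substitute the "real" default at run time.
    SET @DateParameter = ISNULL(@DateParameter, GETDATE());
    -- ... rest of the procedure
END
```

As a side note, the `sp_` prefix is reserved for system procedures and makes SQL Server check master first, so a different prefix (or none) is generally preferred.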

SPs inside BEGIN/ROLLBACK TRAN

Posted: 17 Jul 2013 10:20 AM PDT

When testing stored procedures, can you run something like

[code="sql"]BEGIN TRAN
EXEC sp_deleteSomeRecords "Varchar Parameter", 999, NULL
-- COMMIT TRAN
-- ROLLBACK TRAN[/code]

... to protect the data in case something goes wrong, when the procedure itself contains BEGIN/COMMIT/ROLLBACK TRAN statements and a TRY/CATCH block?
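Broadly yes, with one caveat worth sketching: nested BEGIN TRAN only increments @@TRANCOUNT and a nested COMMIT only decrements it, but a ROLLBACK anywhere undoes everything back to the outermost BEGIN TRAN and sets @@TRANCOUNT to 0. So after the call it pays to check @@TRANCOUNT before issuing your own ROLLBACK:

```sql
BEGIN TRAN;   -- outer "test harness" transaction, @@TRANCOUNT = 1

EXEC dbo.sp_deleteSomeRecords N'Varchar Parameter', 999, NULL;
-- If the proc hit its own ROLLBACK, the outer transaction is
-- already gone and @@TRANCOUNT is back to 0 here.

SELECT @@TRANCOUNT AS open_trans;  -- 0 means the proc rolled everything back

IF @@TRANCOUNT > 0
    ROLLBACK TRAN;  -- undo the test changes
```

A procedure that rolls back inside a caller's transaction also raises error 266 (transaction count mismatch), which is informational in this testing scenario.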

Need help with Select Where clause using parameter equal blank or in a list

Posted: 17 Jul 2013 03:06 AM PDT

I have a stored procedure that passes in several parameters that can be blank or a list of selected filter values. For each parameter, I split the values and insert them into their own temp tables. This all works fine. When I get to my SELECT statement, I want to select all records if the parameter that was passed is blank, or filter the select if the parameter was not blank. Here is an example of what I am doing:

create procedure XYZ
    @param1 varchar(50), -- passing in 2 values
    @param2 varchar(50), -- passing in 2 values
    @param3 varchar(50)  -- passing in a blank string
as
create table #Param1Values (Param1Value int)
insert #Param1Values select * from SplitToInt(@param1)
--Repeat above for other two parameters
select * from ABC
where (@param1 = '' or (colA in (select Param1Value from #Param1Values))
  and (@param2 = '' or (colB in (select Param2Value from #Param2Values))
  and (@param3 = '' or (colC in (select Param3Value from #Param3Values))

This all works; however, I found the timing of the query fluctuates immensely when I alter the comparison on the second parameter. The query takes about 1:20 to run when coded as above. If I take out the "@param2 = '' or", the query runs in 11 seconds. I don't see this kind of change if I take out the same portion for the first parameter; that only brings it down another 2 seconds. I have considered populating the temp table with all possible values if the parameter is blank. The difference between that and just removing the whole second line (to simulate an "All" filter) is just 10 seconds. I also tried putting an index on the temp table, but that actually took longer. I do know that there are a lot more distinct values in colB than there are in the other two. Also, the primary table of the query does not have any indexes set up for any of the columns in the query. I haven't tried setting those up yet to see what kind of performance improvement I might get. Any ideas as to why there is such a big impact with this one column? Is there a better approach that I should be taking? Thanks.
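This is the classic "catch-all query" problem: one cached plan has to serve every combination of blank and populated parameters, so the plan is optimized for none of them. One commonly suggested mitigation is OPTION (RECOMPILE), which lets the optimizer build a plan for the actual parameter values on each execution; a sketch using the post's names:

```sql
SELECT *
FROM dbo.ABC
WHERE (@param1 = '' OR colA IN (SELECT Param1Value FROM #Param1Values))
  AND (@param2 = '' OR colB IN (SELECT Param2Value FROM #Param2Values))
  AND (@param3 = '' OR colC IN (SELECT Param3Value FROM #Param3Values))
OPTION (RECOMPILE);  -- plan compiled per execution, blank branches pruned
```

The trade-off is a compile on every run; the alternative is dynamic SQL that only includes the predicates for non-blank parameters.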

Index and Table Size

Posted: 17 Jul 2013 01:36 AM PDT

Hello everyone, happy hump day to all. I am working on gathering some stats for each table in a database. I am gathering row counts, which I already have. But my question is: where can I find the data for the size of the table and the size of each index per table? I like the way one can see the index space, data space and row count for each particular table by selecting "Storage" under the properties of the table. Where may I find this information, so that I can select that data into another table for stats reporting? Thank you in advance for your assistance, suggestions and comments. Andrew SQLDBA
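The numbers behind that "Storage" page come from the catalog views; a sketch that returns row counts plus reserved/used space per table and index, ready to be INSERTed into a reporting table:

```sql
-- 8 KB per page; reserved_page_count covers data + index pages.
SELECT  s.name AS schema_name,
        t.name AS table_name,
        i.name AS index_name,      -- NULL for a heap
        SUM(ps.row_count)                      AS row_count,
        SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
        SUM(ps.used_page_count)     * 8 / 1024 AS used_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id
                     AND i.index_id  = ps.index_id
JOIN sys.tables  AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name, i.name
ORDER BY reserved_mb DESC;
```

Note the row_count is repeated for every index on a table; the heap or clustered index (index_id 0 or 1) row is the table's true row count.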

Query processor ran out of internal resources and could not produce a query plan

Posted: 17 Jul 2013 02:21 AM PDT

Hello, I'm having the following error when running a query:

Msg 8623, Level 16, State 1, Line 1
The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

Query (please see attachments for the full query):

WITH SAWITH0 AS (
    select T73739."CONTACT_ID" as c1, T73739."PROSPECT_ID" as c2,
           T73739."ACCNT_ID" as c3, T73739."INTEGRATION_ID" as c4
    from "W_PARTY_PER_D" T73739
    where ( T73739."INTEGRATION_ID" in (29547 Integration_ids ) ))
select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3,
       '0' as c4, '' as c5, '0' as c6, 1 as c7, 1 as c9, SAWITH0.c4 as c10
from SAWITH0

I managed to avoid the error by reducing the number of IN items to 22000. I have attached the estimated query plan. I also managed to avoid the error by removing the fixed values from the CTE select statement, as shown below:

select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3, SAWITH0.c4 as c10
from SAWITH0

I have also attached that query plan, as well as the query plan for the select statement with the IN list:

select T73739."CONTACT_ID" as c1, T73739."PROSPECT_ID" as c2, T73739."ACCNT_ID" as c3, T73739."INTEGRATION_ID" as c4
from "W_PARTY_PER_D" T73739
where ( T73739."INTEGRATION_ID" in (29547 Integration_ids ) )

Why am I getting this behaviour? Is there some kind of limit at work here? I know I haven't exceeded the IN limit. All related queries and estimated plans are attached. Kailash

[SQL Server 2008 issues] Indexes with INCLUDE columns



Indexes with INCLUDE columns

Posted: 17 Jul 2013 05:14 PM PDT

Good day. INCLUDE columns sound wonderful, but I would like to know: how many columns should be allowed as INCLUDE columns? I ran a missing index report once and it suggested up to 10 columns be added to the INCLUDE column list. I did not accept the advice, as it did not look right at the time. Does it not cause unnecessary overhead on very busy OLTP systems? I have had instances where we were severely impacted by INCLUDE columns when we only added 4 columns to the INCLUDE list. Thanks for your feedback in advance. Lian

How to get exact fractional figures in SQL Server, as in Excel

Posted: 17 Jul 2013 04:54 PM PDT

Hi, in Excel 29/30 gives me 0.966666667, that is 0.97. But in SQL it gives 0. I tried the following code, but it still gives 0:

Select Convert(Numeric(12,2),29/30)

Please suggest how to get the answer as an exact fraction in SQL, that is 0.97. Thanks in advance!
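The catch is that `29/30` is integer division, which evaluates to 0 before the CONVERT ever runs. Making at least one operand a decimal fixes it:

```sql
-- Literal case: 29.0 is a decimal, so the division keeps the fraction.
SELECT CONVERT(numeric(12,2), 29.0 / 30);                        -- 0.97

-- Column case (column names hypothetical): cast one side first.
-- SELECT CONVERT(numeric(12,2), CAST(numerator AS numeric(12,4)) / denominator)
-- FROM dbo.SomeTable;
```

The same rule applies anywhere two integer columns are divided: cast before dividing, not after.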

create proc

Posted: 16 Jul 2013 09:52 PM PDT

create procedure insyenkcek
    @phone nvarchar(20), @SertiNom nvarchar(50), @ASA nvarchar(100),
    @telUnvan nvarchar(100), @kateqoriya nvarchar(100), @SHesab nvarchar(20),
    @Alam nvarchar(60), @IDCDMAalam nvarchar(30), @TelTarix nvarchar(50),
    @MuqBasTar nvarchar(50), @MuqSonTar nvarchar(50), @Milliyet nvarchar(60),
    @Cinsi nvarchar(50), @DoYer nvarchar(100), @DoTarix nvarchar(50),
    @SexVes nvarchar(60), @Elaqe nvarchar(50), @Odeme nvarchar(30),
    @Mebleg nvarchar(20), @BudceA nvarchar(60), @qeyd, @SenTar nvarchar(50),
    @MINIATS nvarchar(50), @XusKart nvarchar(20), @Eskiz nvarchar(50),
    @NarN nvarchar(30), @NarBTar nvarchar(30), @NarSTar nvarchar(50),
    @men nvarchar(100)
as
insert natiq.YENCEK (phone, SertiNom, ASA, telUnvan, IDKat, SHesab, Alam, IDCDMAalam, TelTarix, MuqBasTar, MuqSonTar, Milliyet, Cinsi, DoYer, DoTarix, SexVes, Elaqe, Odeme, Mebleg, BudceA, qeyd, SenTar, MINIATS, XusKart, Eskiz, NarN, NarBTar, NarSTar, men)
select convert(int,@phone), select convert(decimal(18, 0),@SertiNom), select @ASA, select @telUnvan, (select Kod from natiq.kateqor where Name=@kateqoriya), select convert(int,@SHesab), (select Kod from natiq.alamat where Name=@Alam), (select Kod from natiq.CDM where Name=@IDCMAalam), select @TelTarix, select @MuqBasTar, select @MuqSonTar, (select Kod from natiq.milli where Name=@Milliyet), (select kod from cins where Name=@Cinsi), select @DoYer, select @DoTarix, select @SexVes, select @Elaqe, (select Kod from natiq.ode where Name=@Odeme), select @Mebleg, (select Kod from natiq.budc where Name=@BudceA), select @qeyd, select @SenTar, (select Kod from natiq.minats where Name=@MINIATS), select @XusKart, select @Eskiz, select @NarN, select @NarBTar, select @NarSTar, select @men

I'm getting these errors:

Msg 170, Level 15, State 1, Procedure insyenkcek, Line 5: Incorrect syntax near ','.
Msg 156, Level 15, State 1, Procedure insyenkcek, Lines 19-23: Incorrect syntax near the keyword 'select'. (repeated many times)
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 20: Must declare the variable '@IDCMAalam'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 23: Must declare the variable '@qeyd'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 24: Must declare the variable '@SenTar'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 24: Must declare the variable '@MINIATS'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 24: Must declare the variable '@XusKart'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 24: Must declare the variable '@Eskiz'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 24: Must declare the variable '@NarN'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 25: Must declare the variable '@NarBTar'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 25: Must declare the variable '@NarSTar'.
Msg 137, Level 15, State 2, Procedure insyenkcek, Line 25: Must declare the variable '@men'.
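The errors above trace back to two issues in the posted procedure: `@qeyd` has no data type (Msg 170 near the ','), and the INSERT needs a single SELECT whose value list mixes expressions and parenthesized scalar subqueries, with no repeated `select` keywords (every Msg 156). A trimmed sketch of the corrected pattern, shortened to three of the columns:

```sql
CREATE PROCEDURE dbo.insyenkcek_sketch
    @phone      nvarchar(20),
    @kateqoriya nvarchar(100),
    @qeyd       nvarchar(200)   -- the original declared @qeyd with no type
AS
INSERT INTO natiq.YENCEK (phone, IDKat, qeyd)
SELECT CONVERT(int, @phone),                                  -- plain expression
       (SELECT Kod FROM natiq.kateqor
        WHERE Name = @kateqoriya),                            -- scalar subquery
       @qeyd;                                                 -- plain parameter
```

The same shape extends to the full 29-column list; note the mismatch the errors also hint at, where the parameter is declared `@IDCDMAalam` but referenced as `@IDCMAalam`.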

SQL Netbackup Job failures

Posted: 17 Jul 2013 08:41 AM PDT

Hi all, I have an instance with 2 databases that is backed up by the NetBackup MS SQL client directly to tape. However, one of the databases fails when the backup runs on schedule, but when I back it up manually through the NetBackup client, it works. I was wondering if it's a permission issue, so I checked the account NetBackup is running under, and it is Local System. But the other DB backup runs fine, so I am not sure it's a permission issue since it uses the same account. Someone asked me to check the Autosys scripts; I am not very familiar with them. If anyone has experience with NetBackup, please advise.

SQL 2008 - Connecting to Integration services

Posted: 17 Jul 2013 05:49 AM PDT

From my development machine with SQL 2008 installed, do I have to do anything to connect to Integration Services on a SQL 2008 server? I do have sysadmin access, but I still can't connect to the IS of that server; I'm getting the error: Connect to SSIS Service on machine "ABC" failed: Access is denied. I don't want to log in to the server box and open IS to view my packages; I should be able to connect and view them from my dev machine.

using between on string columns

Posted: 17 Jul 2013 12:14 PM PDT

This is puzzling. I've run across a case where using BETWEEN on string columns returns rows I'm not expecting. The following code will demonstrate the issue:

[code]declare @t table(Start varchar(10), [End] varchar(10))

insert @t
values ('1500','1599')
      ,('2400','2799')
      ,('1','599')
      ,('2800','2999')
      ,('700','799')
      ,('A1','A3')
      ,('B4','B9')

declare @val varchar(10)
set @val = '2609'

select *
from @t
where @val between Start and [End][/code]

I only expect to get 1 row back, but instead 2 rows are returned. Here's what is coming back:

Start End
2400  2799
1     599

I can't figure out why the (1, 599) row is coming back. Does anybody have any ideas why this is happening? Thanks!
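The cause is that varchar comparison is character-by-character: '2609' sorts after '1' (because '2' > '1') and before '599' (because '2' < '5'), so the BETWEEN is true. A sketch of a numeric comparison that guards the non-numeric rows with CASE, so CAST never sees 'A1' or 'B4':

```sql
DECLARE @t table (Start varchar(10), [End] varchar(10));
INSERT @t VALUES ('1500','1599'),('2400','2799'),('1','599'),
                 ('2800','2999'),('700','799'),('A1','A3'),('B4','B9');
DECLARE @val varchar(10) = '2609';

-- CASE yields NULL for non-numeric bounds; BETWEEN against NULL is
-- unknown, so those rows are simply excluded.
SELECT *
FROM @t
WHERE CAST(@val AS int) BETWEEN
        CASE WHEN Start NOT LIKE '%[^0-9]%' THEN CAST(Start AS int) END
    AND CASE WHEN [End] NOT LIKE '%[^0-9]%' THEN CAST([End] AS int) END;
-- Only the ('2400','2799') row qualifies for @val = '2609'.
```

If the alphanumeric ranges ('A1'-'A3' etc.) also need to match, the column really holds two different domains and probably wants splitting into a prefix and a numeric part.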

Sending mail from On Prem server to Off prem exchange

Posted: 12 Jul 2013 05:20 AM PDT

I'm having some trouble getting my SQL mail set up using an off-premise Exchange with Office 365. I can get the server to send through Google's SMTP, but can't get it to work with Office 365. Has anyone gotten this type of setup to work? I've found through searching that I may have to set up an SMTP server locally to route to the online Exchange. Below is a link to a description. [url=http://blogs.technet.com/b/meamcs/archive/2013/02/25/how-to-configure-sql-database-mail-so-send-emails-using-office-365-exchange-online-a-walkthrough.aspx][/url]

Tell me if I'm crazy - Recursive CTE + View (Materialized, indexed?)

Posted: 17 Jul 2013 02:09 PM PDT

First off, I'm a .NET developer, and not a SQL wonk. Please be gentle, and thanks for your help. I have a lattice (a multi-parent hierarchy) of objects representing sets of people sliced along some axis. Currently these sets exist as documents in RavenDB. Example:

Set {
  Id: 'some-guid',
  Name: 'RegionName',
  Dimension: 'Location',
  ImmediateParents: ['containing-set-id', 'containing-set-id'],
  ImmediateChildren: ['contained-set-id', 'contained-set-id'],
  ContainedBy: ['denormalized', 'list', 'of', 'all', 'sets', 'that', 'contain', 'me']
}

Person {
  ...
  ImmediateParents: ['sets', 'i', 'am', 'assigned', 'to', 'directly'],
  ContainedBy: ['denormalized', 'list', 'of', 'all', 'sets', 'that', 'contain', 'me']
}

The bulk of our queries are either direct reads from one of those lists, or a "give me all the people contained by this set" query. Obviously our current implementation makes those queries really fast, but keeping all this updated is a delicate dance. It occurred to me that having a normalized representation in SQL, and then building a view on top of it to denormalize the "contained by" lists, might get me off the dance floor, and that if I could materialize that view and index it, I could get roughly the same performance profile I have now, with the workload heavily weighted toward write time. Then tonight I learned of recursive CTEs and the whole thing gelled for me. But I'm betting on some pretty big assumptions about features of views that I don't understand. Am I sniffing up a good tree? Thanks for any insight.
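The containment query itself maps naturally onto a recursive CTE; a sketch, assuming a hypothetical edge table `dbo.SetMembership(ParentId, ChildId)` holding the immediate parent/child links:

```sql
DECLARE @RootSetId int = 1;   -- hypothetical root set

WITH Contained AS
(
    -- anchor: direct children of the root
    SELECT ChildId, 1 AS depth
    FROM dbo.SetMembership
    WHERE ParentId = @RootSetId

    UNION ALL

    -- recurse: children of anything already found
    SELECT m.ChildId, c.depth + 1
    FROM dbo.SetMembership AS m
    JOIN Contained AS c ON m.ParentId = c.ChildId
)
SELECT DISTINCT ChildId
FROM Contained;
```

One caveat on the "materialize and index the view" assumption: SQL Server indexed views have strict restrictions (no CTEs, no self-joins, among others), so the denormalized closure is typically persisted as a real table maintained at write time (triggers or batch refresh) rather than as an indexed view.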

Selecting records in table only when ALL related records in other table satisfy multiple conditions

Posted: 15 Jul 2013 03:23 AM PDT

Hello and thank you for your time. I feel like this should be simple but I can't seem to work it out... maybe I have been working on it so long that I need another perspective. I have a Lab_result_obr_p table that contains labs that have been resulted. The PK is LabNum (the lab number) and the FK is OrderNum (the order number), so in this table one order # can have multiple lab #s.

Table: Lab_result_obr_p
LabNum    OrderNum
A29E02FE  D2C6DDA8
4A563D24  D2C6DDA8
0F53BC60  DC01E4EB
GS43A689  DC01E4EB
F943C7034 DF2654G7

The other table is Lab_rsults_obx, and it contains the lab components and individual component results. In this table, each lab # (uique_obr_Number) from above will have multiple lab components and results.

Table: Lab_rsults_obx
LabNum    LabC      Flag  Value
A29E02FE  WBC       N     3.5
A29E02FE  RBC       N     260
4A563D24  LMP:      N     50
4A563D24  RH TYPE   N     DNR
0F53BC60  BACTERIA  N     TNP
GS43A689  MCV       N     30
GS43A689  MCH       N     40
F943C7034 RH TYPE   Y     Negative

I need to select all ngn_order_num from Lab_result_obr_p where all components of all labs have an Abnormal_Flag of N and do not have a value of TNP. So if an order has two labs, I need all the components for both labs to have an N for Abnormal_Flag and to not have a value of TNP for the order number to be selected. So for the data above my result would look like:

ngn_order_num
D2C6DDA8
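This is a relational-division problem: keep an order only when no offending component exists anywhere under it, which NOT EXISTS expresses directly. A sketch using the table and column names as they appear in the post (approximated where the post varies):

```sql
-- An order qualifies only if no lab component beneath it is
-- flagged other than 'N' or carries the value 'TNP'.
SELECT DISTINCT p.OrderNum
FROM dbo.Lab_result_obr_p AS p
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.Lab_result_obr_p AS p2
    JOIN dbo.Lab_rsults_obx   AS x ON x.LabNum = p2.LabNum
    WHERE p2.OrderNum = p.OrderNum
      AND (x.Flag <> 'N' OR x.Value = 'TNP')
);
```

For the sample data this keeps only D2C6DDA8: DC01E4EB fails on the TNP bacteria result and DF2654G7 fails on the Y flag.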

Log shipping: New Index Not carried over

Posted: 17 Jul 2013 02:45 AM PDT

I have a few databases log shipping from ServerA to ServerB, where they are available as standby/read-only. Yesterday I dropped one index on ServerA.DatabaseA and created a brand new index on ServerA.DatabaseA. The restore takes place at midnight, and according to my history log and the errorlog, all the logs were applied to ServerB.DatabaseA. Yet I do not see the newly created index, and the old index is still on ServerB.DatabaseA. Please advise.

sys.dm_db_index_physical_stats - Doubt in the Results

Posted: 17 Jul 2013 07:45 AM PDT

For a table with 3 indexes (1 clustered and 2 nonclustered), the index physical stats DMV returned 53 rows, with the clustered index repeated 4 times with varying fragmentation percentages, and similarly for the other 2 indexes. Why multiple rows for one index?
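sys.dm_db_index_physical_stats returns one row per index, per partition, per allocation unit (IN_ROW_DATA, LOB_DATA, ROW_OVERFLOW_DATA), and in DETAILED mode also per index level, which multiplies the row count. Filtering to the leaf in-row rows collapses it back to roughly one row per index per partition; a sketch (table name hypothetical):

```sql
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.partition_number,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'),
                                    NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id
                     AND i.index_id  = ips.index_id
WHERE ips.alloc_unit_type_desc = 'IN_ROW_DATA'  -- skip LOB/overflow rows
  AND ips.index_level = 0;                      -- leaf level only
```

The varying fragmentation percentages across the repeated rows are expected: each allocation unit and level is measured independently.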

How to get the month wise row of record if transaction present or not

Posted: 14 Jul 2013 03:06 AM PDT

Hi, is it possible to write a single query to get the results below? I need to get month-wise records from the transaction table. If records for a month are not present, then the previous month's set of common column values (Company, Customer ID, Account No, Type) should be created for the current month, with the previous month's amount as the opening balance, the current amount as 0, and the closing balance equal to the opening balance. Thanks

Sending email to recipients based on result set

Posted: 17 Jul 2013 04:34 AM PDT

I am trying to find an article or guide for creating an SSIS package (best method?) with a For Each Loop container. I query a table, returning 5 columns; the result has the recipients' email addresses and names. I need to send the same email message to the entire result set. Using SQL Server 2008 R2 and Database Mail. Do I place the send email task within the loop and somehow override the fields I want to override? Changing the body of the email appears doable, as I would insert that on each pass, but changing the recipient seems to be a different story. Thanks :crazy:
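If SSIS isn't mandatory, the same loop can be done entirely in T-SQL with a cursor over the result set and msdb's sp_send_dbmail. A sketch, where the recipients table, profile name, and message text are all assumptions:

```sql
DECLARE @addr nvarchar(256), @name nvarchar(128), @body nvarchar(max);

DECLARE recips CURSOR LOCAL FAST_FORWARD FOR
    SELECT EmailAddress, FullName
    FROM dbo.Recipients;              -- hypothetical source table

OPEN recips;
FETCH NEXT FROM recips INTO @addr, @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- sp_send_dbmail parameters must be variables/constants,
    -- so build the body before the call.
    SET @body = N'Dear ' + @name + N', ... same message for everyone ...';

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = N'DefaultProfile',   -- assumed Database Mail profile
         @recipients   = @addr,
         @subject      = N'Notification',
         @body         = @body;

    FETCH NEXT FROM recips INTO @addr, @name;
END
CLOSE recips;
DEALLOCATE recips;
```

In SSIS the equivalent is a Foreach ADO enumerator mapping each row's columns to variables, with the Send Mail task's To/Body set via expressions on those variables.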

Database Mirror Login SID for ##MS_PolicyEventProcessingLogin##

Posted: 16 Jul 2013 10:51 PM PDT

In William Thier's excellent SSC article "Configuring Database Mirroring" ( http://www.sqlservercentral.com/articles/Database+Mirroring/72009/ ) he mentions: "[i]If the mirrored database contains non domain users, SQL_LOGINs should be created on the mirror server with the same sid as the corresponding login on the principal server[/i]". We have a login on both our PRINCIPAL and MIRROR, created by the SQL Server 2008 R2 Enterprise installation, called [b]##MS_PolicyEventProcessingLogin##[/b], with different SIDs. Does this login need to be synced on the MIRROR with the same SID?
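For what it's worth, the SIDs in question can be compared directly on each server; a small sketch to run on both the principal and the mirror:

```sql
-- Run on each server and compare the sid values.
SELECT name, sid, type_desc
FROM sys.server_principals
WHERE name = N'##MS_PolicyEventProcessingLogin##';
```

The article's SID-matching advice is aimed at logins that are mapped to users inside the mirrored database; whether this particular system login has such a mapping in your database is worth checking with `sys.database_principals` before syncing anything.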

using the default trace

Posted: 17 Jul 2013 02:19 AM PDT

Hi, late in the afternoon I had a support ticket raised for 'the app'. It was running slow from around 9:30am until 11:00am. I checked the machine's processor time using SQL Monitor, and sure enough the processor was pegged at 80-90% usage for the time period where 'the app' was running slow. I have used the back-in-time feature of SQL Monitor, but all I can see query-wise is the SQL Monitor queries. Can you use the default trace to look at what may have caused this peak in CPU, and if so, how do I go about it? Thanks, PJ :-)

Join of 2 subqueries

Posted: 17 Jul 2013 12:05 AM PDT

Hi all, I've been trying to do a join of 2 subqueries, but haven't been having much luck and keep getting errors. Can anyone please advise on what I'm doing wrong here? Thanks!

Declare @SweepId as int = 10160, @RunType as varchar = 'Initial'

Select *
from
(Select distinct ClaimId, LineNum, 0 as EncounterLineNum, EncounterType, InternalICN, PreviousDpwICN,
        0 as ClaimFrequencyCd, EncounterPeriod, 2 as StatusCd, BypassReason,
        null as EncounterFileNm, @SweepId as SweepId
 from IntEncTracking.EncounterList
 where bypassflag = 1) as q1
join
(Select ClaimId, 0 as LineNum, 0 as EncounterLineNum, EncounterType, InternalICN, PreviousDpwICN,
        max(ClaimFreqCd) as ClaimFreqCd, max(EncounterPeriod) as EncounterPeriod,
        case when exists (select 'x' from IntEncTracking.EncounterList el1
                          where el1.claimid = claimid and BypassFlag = 0) then 1 else 2 end stscd,
        case when @RunType = 'Initial' then 100 else 300 end as [StatusReasonCd],
        null as EncounterFileNm, @SweepId as SweepId
 from IntEncTracking.EncounterList el
 where BypassFlag = 0) as q2
on q1.ClaimId = q2.ClaimId
and q1.LineNum = q2.LineNum
and q1.EncounterLineNum = q2.EncounterLineNum
and q1.EncounterType = q2.EncounterType
and q1.InternalICN = q2.InternalICN
group by q1.ClaimId, q1.EncounterType, q1.InternalICN, q1.PreviousDpwICN
order by q2.ClaimId, q2.LineNum, q2.EncounterLineNum, q2.EncounterType, q2.InternalICN

capture currently logged in users from sql

Posted: 17 Jul 2013 12:55 AM PDT

I want to capture the users logged in to my database from 9 to 5 and then get the average per day. Can I please have some guidance?
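One approach is to snapshot the session count on a schedule (e.g. an Agent job every 15 minutes between 9 and 5) into a logging table, then average per day. A sketch, where the logging table is an assumption:

```sql
-- One-time setup: a small logging table.
CREATE TABLE dbo.SessionLog
(
    captured_at   datetime NOT NULL,
    user_sessions int      NOT NULL
);

-- Scheduled step: record the current user-session count.
INSERT INTO dbo.SessionLog (captured_at, user_sessions)
SELECT GETDATE(), COUNT(*)
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;   -- excludes system sessions

-- Reporting: average sessions per day across the 9-5 window.
SELECT CAST(captured_at AS date) AS [day],
       AVG(user_sessions)        AS avg_sessions
FROM dbo.SessionLog
WHERE DATEPART(hour, captured_at) BETWEEN 9 AND 16
GROUP BY CAST(captured_at AS date);
```

To restrict the count to one particular database on SQL 2008, join the snapshot query to sys.sysprocesses on spid and filter on its dbid column.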

parsing data in the middle of a field??

Posted: 17 Jul 2013 12:01 AM PDT

Hi, I have a field with a data string similar to: ....@@T_700_ = "text goes here"@@T_301_ = "06/15/2013"@@T_069_ =..... I need to pick up what is between @@T_700_ = " and "@@T_301_, so in the above I end up with [b]text goes here[/b]. Thanks, Joe
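A CHARINDEX/SUBSTRING sketch of this extraction, using the markers from the post:

```sql
DECLARE @s varchar(max) =
    '....@@T_700_ = "text goes here"@@T_301_ = "06/15/2013"@@T_069_ =.....';
DECLARE @from varchar(20) = '@@T_700_ = "',
        @to   varchar(20) = '"@@T_301_';

-- Start just past the opening marker; length runs up to (not including)
-- the closing marker, so the trailing quote is excluded.
SELECT SUBSTRING(@s,
                 CHARINDEX(@from, @s) + LEN(@from),
                 CHARINDEX(@to, @s)
                   - (CHARINDEX(@from, @s) + LEN(@from))) AS extracted;
-- Returns: text goes here
```

Worth guarding in real data: if either marker is missing, CHARINDEX returns 0 and the SUBSTRING arguments go wrong, so wrap the call in a CASE that checks both positions first.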

How to take backup of Single table and how to restore?

Posted: 13 Jul 2013 03:44 PM PDT

Hi... How do I take a backup of a single table, and how do I restore it? Is there any query for this like the database backup? shivakumar...
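SQL Server has no native single-table backup; two common stand-ins are a copy inside the database and a native-format export. Names below are placeholders:

```sql
-- 1) Copy the table inside the database. This copies the data and basic
--    column definitions only: no indexes, keys, constraints, or triggers.
SELECT *
INTO dbo.MyTable_backup
FROM dbo.MyTable;

-- 2) Export/import the data with bcp from a command prompt
--    (-n native format, -T trusted connection):
-- bcp MyDb.dbo.MyTable out C:\backup\MyTable.dat -n -S MyServer -T
-- bcp MyDb.dbo.MyTable in  C:\backup\MyTable.dat -n -S MyServer -T
```

To preserve the full table definition as well, script the table from SSMS (or via Generate Scripts) alongside the data export.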

Autogrowth of database with its file name

Posted: 16 Jul 2013 11:09 PM PDT

I use the query below to get the auto-growth events that occurred for the databases on a SQL Server instance.

[code="sql"]DECLARE @trcfilename VARCHAR(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1
SELECT COUNT(*) as no_of_autogrowths,
       sum(duration/(1000*60)) as time_in_min,
       Filename
FROM ::fn_trace_gettable(@trcfilename, default)
WHERE (EventClass = 92 OR EventClass = 93)
GROUP BY Filename[/code]

It outputs the number of auto-growths, the time taken for auto-growth, and the logical name of the file. But I want the physical name of the file (the mdf and ldf file name) instead of the logical name. I don't know whether I can get the physical name from sys.traces; please help me with an alternate way to do it.
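One way is to join the trace's DatabaseID and logical FileName to sys.master_files, which carries the physical path; a sketch extending the posted query:

```sql
DECLARE @trcfilename varchar(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1;

SELECT COUNT(*)                       AS no_of_autogrowths,
       SUM(t.duration / (1000 * 60))  AS time_in_min,
       mf.name                        AS logical_name,
       mf.physical_name               -- full .mdf/.ldf path
FROM ::fn_trace_gettable(@trcfilename, default) AS t
JOIN sys.master_files AS mf
    ON  mf.database_id = t.DatabaseID
    AND mf.name        = t.FileName      -- logical names match
WHERE t.EventClass IN (92, 93)           -- data/log file auto grow
GROUP BY mf.name, mf.physical_name;
```

If a file has been moved or renamed since the growth events were logged, the join reflects the current catalog entry, not the path at the time of the event.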

How to create database from .mdf file only

Posted: 10 Jan 2012 04:55 AM PST

Hi, I have installed SQL Server 2012 and downloaded the sample database from the link below, which has AdventureWorks2008R2_Data.mdf for download: http://msftdbprodsamples.codeplex.com/releases/view/55330 Now, how do I restore the AdventureWorks database from the AdventureWorks2008R2_Data.mdf file? Thanks
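An .mdf on its own is attached rather than restored; with no log file present, SQL Server can rebuild one at attach time. A sketch, where the file path is a placeholder for wherever the downloaded file sits:

```sql
-- Attach the data file and let SQL Server rebuild the missing log.
-- Requires that the database was cleanly shut down when detached.
CREATE DATABASE AdventureWorks2008R2
ON (FILENAME = N'C:\Data\AdventureWorks2008R2_Data.mdf')
FOR ATTACH_REBUILD_LOG;
```

The service account needs read/write permission on the folder holding the .mdf, or the attach fails with an access-denied error.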
