Monday, June 24, 2013

[how to] How can I speed up PgAdmin III Queries?



How can I speed up PgAdmin III Queries?

Posted: 24 Jun 2013 01:55 PM PDT

I've been noticing that PgAdmin III isn't as responsive as I expected it to be.

For instance a query to select * ... from a table containing 20,397 rows takes 19.531 seconds in PgAdmin III and 7.383 seconds in DbVisualizer 9.0.7. I initially thought this was a coincidence, but it has been consistent across all queries run.

As such, it appears that PgAdmin III takes approximately twice as long as DbVisualizer to query my database. So how can I speed up PgAdmin III queries?

I am using Heroku Postgres 9.1.9 on the Crane package.

No predicate warning

Posted: 24 Jun 2013 02:00 PM PDT

I am getting a "No predicate" warning on this query. Can someone help?

SELECT DISTINCT TOP 100
       T0.nsid, T0.id, T0.version, T0.name, T1.currency, T0.disbursmentamount,
       T0.effectivedate, T0.enddate, T0.contractname, T0.scheduledbillingdate,
       T0.dueamount, T0.paidamount, T0.cashpaidamount, T0.paidamount,
       T0.writeoffamount, T0.canceled, T0.creationdate, T0.creationtime,
       T0.mustbepaidat, T0.isrebill, T1.name, T0.status,
       T2.sendtoprint, T2.isprinted
FROM   (awfbillingstatementitem T0
        INNER JOIN awfbusinessaccount T1
                ON ( T0.attachedtoclientaccount_nsid = T1.nsid
                     AND T0.attachedtoclientaccount_id = T1.id ))
       LEFT OUTER JOIN awfbillingstatementitem T2
                    ON ( T2.nsid = T0.nsid
                         AND T2.id = T0.id
                         AND T2.h_iskilled = 0
                         AND T2.h_clid = 218169570 ),
       awfcontract T3
       LEFT OUTER JOIN awfdescversionedbydate T5
                    ON ( T3.versionedpolicyowner_nsid = T5.nsid
                         AND T3.versionedpolicyowner_id = T5.id
                         AND -T3.versionedpolicyowner_version = T5.version
                         AND T5.h_iscurvers = 1
                         AND T5.h_iskilled = 0
                         AND T5.h_clid = 234821058 )
       LEFT OUTER JOIN (awfpolicyownerversion T6
                        LEFT OUTER JOIN awfdescversionedbydate_versions T8
                                     ON ( T8.nsid = T6.nsid
                                          AND T8.id = T6.id
                                          AND T8.version = -T6.version ))
                    ON ( T5.nsid = T8.lonsid
                         AND T5.id = T8.loid
                         AND T5.versions_version = T8.loversion
                         AND T6.h_iscurvers = 1
                         AND T6.h_iskilled = 0 ),
       awfperson T4
WHERE  ( ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 )
         AND ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 )
         AND ( T0.canceled = 0 )
         AND ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 )
         AND ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 ) AND ( 0 = 0 )
         AND ( T0.contract_nsid = T3.nsid
               AND T0.contract_id = T3.id )
         AND ( 0 = 0 ) AND ( 1 = 1 )
         AND ( ( T0.client_nsid = T4.nsid AND T0.client_id = T4.id )
               OR ( T1.thirdparty_nsid = T4.nsid AND T1.thirdparty_id = T4.id ) )
         AND ( 0 = 0 )
         AND ( ( T3.subscriber_nsid = T4.nsid AND T3.subscriber_id = T4.id )
               OR ( T3.policyowner_nsid = T4.nsid AND T3.policyowner_id = T4.id )
               OR ( T6.policyowner_nsid = T4.nsid AND T6.policyowner_id = T4.id ) )
         AND ( 0 = 0 ) )
       AND ( T3.h_iscurvers = 1 )
       AND ( T4.h_iscurvers = 1 )
       AND ( T0.h_iskilled = 0 )
       AND ( T3.h_iskilled = 0 )
       AND ( T4.h_iskilled = 0 )
       AND ( T4.h_clid = 234817316
             OR T4.h_clid = 234820248
             OR T4.h_clid = 234816906
             OR T4.h_clid = 234817143
             OR T4.h_clid = 234821693
             OR T4.h_clid = 218169345 )
ORDER  BY T0.creationdate DESC,
          T0.creationtime DESC,
          T0.effectivedate DESC

How can I write windowing query which sums a column to create discrete buckets?

Posted: 24 Jun 2013 05:50 PM PDT

I have a table which includes a column of decimal values, such as this:

id value size
-- ----- ----
 1   100  .02
 2    99  .38
 3    98  .13
 4    97  .35
 5    96  .15
 6    95  .57
 7    94  .25
 8    93  .15

What I need to accomplish is a little difficult to describe, so please bear with me. What I am trying to do is create an aggregate value of the size column which increments by 1 each time the preceding rows sum up to 1, when in descending order according to value. The result would look something like this:

id value size bucket
-- ----- ---- ------
 1   100  .02      1
 2    99  .38      1
 3    98  .13      1
 4    97  .35      1
 5    96  .15      2
 6    95  .57      2
 7    94  .25      2
 8    93  .15      3

My naive first attempt was to keep a running SUM and then CEILING that value, but it doesn't handle the case where a record's size ends up contributing to the totals of two separate buckets. The example below might clarify this:

id value size crude_sum crude_bucket distinct_sum bucket
-- ----- ---- --------- ------------ ------------ ------
 1   100  .02       .02            1          .02      1
 2    99  .38       .40            1          .40      1
 3    98  .13       .53            1          .53      1
 4    97  .35       .88            1          .88      1
 5    96  .15      1.03            2          .15      2
 6    95  .57      1.60            2          .72      2
 7    94  .25      1.85            2          .97      2
 8    93  .15      2.00            2          .15      3

As you can see, if I were to simply use CEILING on crude_sum record #8 would be assigned to bucket 2. This is caused by the size of records #5 and #8 being split across two buckets. Instead, the ideal solution is to reset the sum each time it reaches 1, which then increments the bucket column and begins a new SUM operation starting at the size value of the current record. Because the order of the records is important to this operation, I've included the value column, which is intended to be sorted in descending order.

My initial attempts have involved making multiple passes over the data, once to perform the SUM operation, once more to CEILING that, etc. Here is an example of what I did to create the crude_sum column:

SELECT
    id,
    value,
    size,
    (SELECT TOP 1 SUM(size) FROM table t2 WHERE t2.value <= t1.value) AS crude_sum
FROM
    table t1

Which was used in an UPDATE operation to insert the value into a table to work with later.
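A running sum that resets cannot be expressed with CEILING alone, but it can be done in a single statement with a recursive CTE that carries the total forward row by row and resets it whenever adding the next size would push it past 1. This is a sketch assuming SQL Server and a table named t with the columns shown above:

```sql
WITH ordered AS (
    SELECT id, value, size,
           ROW_NUMBER() OVER (ORDER BY value DESC) AS rn
    FROM t
),
buckets AS (
    -- anchor: the first row starts bucket 1
    SELECT rn, id, value, size,
           size AS running_sum,
           1    AS bucket
    FROM ordered WHERE rn = 1
    UNION ALL
    -- reset the sum and bump the bucket when the total would exceed 1
    SELECT o.rn, o.id, o.value, o.size,
           CASE WHEN b.running_sum + o.size > 1
                THEN o.size ELSE b.running_sum + o.size END,
           CASE WHEN b.running_sum + o.size > 1
                THEN b.bucket + 1 ELSE b.bucket END
    FROM buckets b
    JOIN ordered o ON o.rn = b.rn + 1
)
SELECT id, value, size, bucket
FROM buckets
ORDER BY rn
OPTION (MAXRECURSION 0);
```

Tracing it against the sample data reproduces the desired bucket column (rows 1-4 in bucket 1, rows 5-7 in bucket 2, row 8 in bucket 3); the boundary rule (strictly greater than 1 triggers the reset) is an assumption you may want to adjust.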

How to profile a particularly slow or inefficient query

Posted: 24 Jun 2013 02:57 PM PDT

I'm currently writing queries for my database but this is the first time I've dealt with large data sets of any sort, and I'm now finding that I need to improve the performance of my queries.

Scenario

I have a series of tables for purchases and customers as described below

users (user_id, forename, surname) 20k+ records

customers (customer_id, user_id, etc etc) 20k+ records

purchases (purchase_id, item_id, customer_id, price, timestamp, store_id) 150k+ records

deliveries (purchase_id, delivered_time etc) 150k+ records

store (store_id, name etc) <50 records

The query I am currently trying to write should retrieve each customer's name, the date of their last order, when that order was delivered, and their total number of orders, for all customers at a particular store whose most recent order was in the last 6 months. Here is what I'm currently using.

SELECT `customers`.`customer_id`, `forename`, `surname`, `delivered_time`,
       MAX(`purchases`.`timestamp`) AS lastPurchase,
       COUNT(purchases.purchase_id) AS totalPurchases
FROM (`purchases`)
JOIN `deliveries` ON `purchases`.`purchase_id` = `deliveries`.`purchase_id`
JOIN `customers` ON `customers`.`customer_id` = `purchases`.`customer_id`
JOIN `users` ON `users`.`user_id` = `customers`.`user_id`
WHERE `store_id` = '1'
GROUP BY `customer_id`
HAVING lastPurchase >= '1372086055'

However, this is pretty slow and takes 1-2 seconds each time, and I can only assume this time would increase as the amount of data increases.

In an attempt to profile the query I have used EXPLAIN which seems to indicate it is searching through around 20k records in the deliveries table. What should my next steps be to optimize this query? How can I go about reducing the execution time?
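A common first step when EXPLAIN shows large row scans is to make sure the filter and join columns are covered by indexes. This is a sketch only; the index names and column choices are assumptions based on the query above, so verify against the real schema and rerun EXPLAIN afterwards:

```sql
-- let the store filter, grouping key, and HAVING column be
-- satisfied from one composite index on purchases
ALTER TABLE purchases
  ADD INDEX idx_store_customer_ts (store_id, customer_id, `timestamp`);

-- the join to deliveries also needs an index on its join column
-- (unnecessary if purchase_id is already the primary key there)
ALTER TABLE deliveries
  ADD INDEX idx_purchase (purchase_id);
```

If EXPLAIN still reports tens of thousands of examined rows, the next step is usually to pre-filter purchases in a derived table before joining the other tables.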

How do I rebuild single full text catalog (or why is "rebuild" full text catalog option disabled)

Posted: 24 Jun 2013 11:50 AM PDT

In SSMS, I have a Full Text Catalog defined. If I select the "Full Text Catalogs" folder, there is an option "Rebuild All" and it is enabled. But if I select an individual FTC, the "Rebuild" option (in the context menu) is disabled.

Question: Why is it disabled, and how do I enable it?

TempInsertStateItemShort procedure on ASPState is taking more than 1 second to execute

Posted: 24 Jun 2013 09:31 AM PDT

I'm working on a .NET website that runs load-balanced across more than one server, and because of that I need to keep my session state in SQL Server.

In the last 2 weeks I've noticed that the procedure TempInsertStateItemShort is taking more than 1 second to run.

Basically this procedure inserts into tempdb.dbo.ASPStateTempSessions:

CREATE PROCEDURE dbo.TempInsertStateItemShort
    @id         tSessionId,
    @itemShort  tSessionItemShort,
    @timeout    int
AS
    DECLARE @now AS datetime
    DECLARE @nowLocal AS datetime

    SET @now = GETUTCDATE()
    SET @nowLocal = GETDATE()

    INSERT [tempdb].dbo.ASPStateTempSessions
        (SessionId, SessionItemShort, Timeout, Expires,
         Locked, LockDate, LockDateLocal, LockCookie)
    VALUES
        (@id, @itemShort, @timeout, DATEADD(n, @timeout, @now),
         0, @now, @nowLocal, 1)

    RETURN 0

tempdb.dbo.ASPStateTempSessions table has 800k rows and the PK index is about 98% fragmented.

Could the fragmentation be the reason an insert takes more than 1 second? Has anyone experienced this? How can I improve performance on this table?

Thanks in advance!!!!
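If fragmentation is the suspect, measuring it and then rebuilding the index is a quick test. This is a sketch against the table named in the question; check sys.indexes for the actual index names before acting:

```sql
-- measure fragmentation per index on the session table
SELECT i.name, s.index_id, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID('tempdb'),
         OBJECT_ID('tempdb.dbo.ASPStateTempSessions'),
         NULL, NULL, 'LIMITED') AS s
JOIN tempdb.sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- rebuild everything on the table (locks briefly; schedule off-peak)
ALTER INDEX ALL ON tempdb.dbo.ASPStateTempSessions REBUILD;
```

With 800k rows in a table that should only hold live sessions, it is also worth confirming that the ASPState cleanup job (DeleteExpiredSessions) is running, since an insert into a bloated table suffers regardless of fragmentation.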

How to enter multiple checkbox list items into database for eventual reporting [closed]

Posted: 24 Jun 2013 12:03 PM PDT

(I tried posting this on SO but it went unanswered. I'll just post these theory-type questions here from now on.)

I'm in the process of creating the first of a few CheckBoxList for a manually created form submittal. I've been looking over multiple ways to not only create said CheckBoxList but to enter it into the database. However, the challenge I'm facing (mainly because I haven't encountered this scenario before and I'd like to learn this) is that I not only need to worry about entering the items correctly in the database but I will eventually need to produce a report or make a printable form from these entries.

Let's say I have an order form for a specific type of barbecue grill and I will need to send this form out to distribution centers across the nation. The distribution centers will need to pull said barbecues if they are highlighted on the form.

Here's what the CheckBoxList for the distribution centers will look like:

  All
  Dallas
  Miami
  Los Angeles
  Seattle
  New York
  Chicago
  Phoenix
  Montreal

If the specific city (or all the cities) are checked, then the distribution center will pull the barbecue grill for shipment.

The added part is that I want to:

  1. be able to create a grid view from this database for reporting to note which distribution center got orders for barbecues and

  2. be able to create reports to tell what distribution center sent out barbecue orders in a given month (among other reporting).

What I'm looking for is a theoretical approach to designing this form while keeping in mind the grid view and reports. Do I put the CheckBoxList data in one cell in a specific column? Do I create separate columns for each distribution center? Am I even on the right track here?

Note: At this point I'm not looking for coding help, I'm just looking for some theory/logic help so I can learn this the right way the first time.

::edit::

First off, please note that the database design I have now is fully in sandbox mode. Nothing is finalized. Everything (again, this is in sandbox mode) is in one single table. Also, I'm in no way looking for coding help. I'm looking for the right theoretical/logical approach to this puzzle so I can go in and play with the coding myself. I learn better that way. If I need coding help, I'll come back here for further assistance.

Here's what I'm working with right now.

In my aspx page I have a checkboxlist programmed with all the distribution centers entered as a listitem as well as an option for 'ALL' (of the distribution centers).

I also created a dedicated column in this table that holds all the information in the listitem and programmed a sqldataconnection to this table to play with the programmability of leveraging the database for this purpose.

When it comes to writing the selections to the database, I originally created a column for each destination city, including the 'All' option. I was toying around with just putting the selections into one single column, but with some of the information I've been reading today about database normalization, the former option seems to be better than the latter. Is this correct practice for situations such as this, especially if I need to think about reporting?
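The usual normalized answer to a multi-select checkbox is neither one-column-per-city nor a delimited string, but a junction (many-to-many) table: one row per checked center per order. A sketch with assumed table and column names:

```sql
CREATE TABLE DistributionCenters (
    CenterId   INT IDENTITY PRIMARY KEY,
    CenterName VARCHAR(50) NOT NULL      -- Dallas, Miami, Los Angeles, ...
);

CREATE TABLE Orders (
    OrderId   INT IDENTITY PRIMARY KEY,
    OrderDate DATETIME NOT NULL
);

-- one row per checked center; checking "All" simply inserts
-- a row for every center rather than being stored as its own value
CREATE TABLE OrderCenters (
    OrderId  INT NOT NULL REFERENCES Orders(OrderId),
    CenterId INT NOT NULL REFERENCES DistributionCenters(CenterId),
    PRIMARY KEY (OrderId, CenterId)
);
```

This makes both reporting goals a plain JOIN plus GROUP BY (e.g. orders per center per month), whereas a column per city forces a schema change every time a center is added.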

export (mysqldump) databases from MySQL version 4.0?

Posted: 24 Jun 2013 09:27 AM PDT

Goal: export the schema and data from a MySQL version 4.0.16 DB.

Background:

I was asked to migrate a database from MySQL version 4.0.16. The database was installed on a Windows XP machine, and I just could not find mysqldump tool on that machine.

I then putty into a linux machine that has mysqldump. I ran the command, but got an error:

-bash-4.1$ mysqldump --databases db_2_dump --host='myhost' --port='6012' -u <user> -p > db-export.sql
Enter password:
mysqldump: Error: 'You have an error in your SQL syntax.  Check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILE' when trying to dump tablespaces

I looked up mysql reference. INFORMATION_SCHEMA is something new to version 5 and above.

This means the mysqldump on the linux box is incompatible with the production MySQL server which is on v4.0.16.

I tried to download Mysql 4.0 server and install it on a windows machine so that I can get hold of a compatible version, but searching on Mysql website found nothing older than 5.0.

(I also tried Mysql workbench to connect to this DB. Failed. "Unknown character set utf8")

So how can I export the schema and data from this legacy MySQL DB?

Can Someone Decrypt this? [closed]

Posted: 24 Jun 2013 09:11 AM PDT

config = {}

config["db_host"]   = "192.168.1.1"
config["db_user"]   = "ref"
config["db_passwd"] = "jVNEVfArZAzQqv5j"
config["db_name"]   = "brandb"
config["pc_name"]   = "ref11"

It's from config.py file. Thank you very much.

Also, can you explain how you got the db_passwd?

Total of select by all current session and by date

Posted: 24 Jun 2013 08:07 AM PDT

In MySQL :

This query allows us to have the total number of "select" on the server (query 1) :

mysql> SELECT VARIABLE_VALUE FROM information_schema.global_status WHERE variable_name like 'com_select';  

and this query allows us to know the number of "select" for the current session :

mysql> show status like 'Com_select';  

or

mysql> SHOW SESSION STATUS LIKE 'Com_select';  

Please, how can I get the number of SELECTs for all current sessions (a total), and a query like the first (query 1) but broken down by date? Thanks.
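MySQL does not keep a per-date history of Com_select: the global counter already aggregates all sessions since startup, so per-day numbers require snapshotting it yourself. A sketch (table and event names are made up for illustration) that records the counter once a day so consecutive rows can be subtracted:

```sql
CREATE TABLE select_stats (
    captured_at DATETIME NOT NULL,
    com_select  BIGINT   NOT NULL
);

-- requires the event scheduler: SET GLOBAL event_scheduler = ON;
CREATE EVENT snapshot_com_select
ON SCHEDULE EVERY 1 DAY
DO
  INSERT INTO select_stats (captured_at, com_select)
  SELECT NOW(), VARIABLE_VALUE
  FROM information_schema.global_status
  WHERE VARIABLE_NAME = 'COM_SELECT';
```

The difference between two snapshots is the number of SELECTs issued between them (resetting to zero when the server restarts, which you can detect by the counter decreasing).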

Select data one by one from table and data need to repeat custom number of times [closed]

Posted: 24 Jun 2013 09:00 AM PDT

I have table data like

data1
data2
data3
data4

and I need to select these rows like this:

data1
data1
data1
data1
data2
data2
data2
data3
data3

How can I do this in SQL Server 2005?
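On SQL Server 2005 the standard trick is a numbers (tally) table joined on the desired repeat count. This sketch assumes the repeat count is stored alongside each value (the question does not say where the counts come from, so the table and column names here are illustrative):

```sql
CREATE TABLE t (d VARCHAR(10), repeat_count INT);
INSERT INTO t VALUES ('data1', 4);
INSERT INTO t VALUES ('data2', 3);
INSERT INTO t VALUES ('data3', 2);

-- a small numbers table built with a recursive CTE (2005-compatible)
WITH nums AS (
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM nums WHERE n < 100
)
SELECT t.d
FROM t
JOIN nums ON nums.n <= t.repeat_count
ORDER BY t.d, nums.n;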

How can I send HTTP requests from SQL Server 2012 to a non-.Net application? [duplicate]

Posted: 24 Jun 2013 08:33 AM PDT

This question is an exact duplicate of:

I know only basic concepts (DDL, DML queries) of SQL Server 2012. I explain the workflows of both my application and the database below.

Workflow of the application:

I build web application in Java. Whenever it receives an HTTP request, it fetches data from the database, does some computations on the data and the computed data will be stored in another database.

Before explaining the database workflow, I will explain what a database modification is from my point of view: any operation such as creating new tables, modifying data, deleting data in tables, etc. counts as an effect on the database.

Workflow of the database:

I have SQL Server 2012. Whenever the database is modified, it will send an HTTP request to the application. The Application runs on an Apache Tomcat server. Once the application receives the HTTP request, it continues that individual workflow.

If you understand my question, we are not sending HTTP requests from a browser (i.e. end-users can't access this application), we are sending HTTP request from SQL Server.

I hope you understand what my question is.

I read some tutorials for this, but I didn't get anything.

Can anyone suggest the correct way of doing this?
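For completeness, SQL Server can issue an HTTP request from T-SQL via OLE Automation, though firing one from inside a trigger is fragile (it blocks the transaction and fails if Tomcat is down); queuing the change and letting an external poller call the URL is generally safer. A minimal sketch, with the URL and host being placeholders:

```sql
-- requires: sp_configure 'Ole Automation Procedures', 1; RECONFIGURE;
DECLARE @obj INT, @status INT;

EXEC sp_OACreate 'MSXML2.ServerXMLHTTP', @obj OUT;
EXEC sp_OAMethod @obj, 'open', NULL,
     'GET', 'http://tomcat-host:8080/app/notify', 'false';  -- placeholder URL
EXEC sp_OAMethod @obj, 'send';
EXEC sp_OAGetProperty @obj, 'status', @status OUT;           -- HTTP status code
EXEC sp_OADestroy @obj;
```

Wrapping this in a stored procedure called from a DML trigger technically satisfies "on modification, send a request", but be aware of the transactional cost noted above.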

How to do Tungsten replication troubleshooting in MySQL? [closed]

Posted: 24 Jun 2013 11:18 AM PDT

We have set up master-to-slave replication in MySQL using Tungsten Replicator, and the setup completed successfully. But I don't know how to troubleshoot replication issues.

I want to learn about Tungsten replication troubleshooting. How can I learn it? Could anybody suggest resources? I have Googled a lot but didn't find any proper guides. Please help me.

SQL Server Agent Jobs and Availability Groups

Posted: 24 Jun 2013 11:42 AM PDT

I'm looking for best practice in dealing with scheduled SQL Server Agent jobs in SQL Server 2012 availability groups. Maybe I missed something, however at the current state I feel that SQL Server Agent is not really integrated with this great SQL2012 feature.

How can I make a scheduled SQL agent job aware of a node switch? For example I have a job running on the primary node which loads data each hour. Now if the primary goes down, how can I activate the job on the secondary which now becomes primary?

If I schedule the job always on the secondary it fails because then the secondary is read-only.
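A common workaround on SQL Server 2012 (which has no built-in job failover for availability groups) is to deploy the job on every replica and make step 1 check whether the local replica currently holds the primary role, exiting quietly otherwise. A sketch; the database name is a placeholder:

```sql
IF NOT EXISTS (
    SELECT 1
    FROM sys.dm_hadr_availability_replica_states AS ars
    JOIN sys.availability_databases_cluster AS adc
      ON ars.group_id = adc.group_id
    WHERE ars.is_local = 1
      AND ars.role_desc = 'PRIMARY'
      AND adc.database_name = 'YourDatabase'   -- assumed name
)
BEGIN
    PRINT 'This replica is not primary; skipping.';
    RETURN;
END

-- ...the actual hourly load logic follows here...
```

Since the job is scheduled identically everywhere, whichever node becomes primary runs the real work after a failover, and the secondaries log a harmless skip.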

copy only tables with data from one database to another database [closed]

Posted: 24 Jun 2013 09:43 AM PDT

I have two databases: dbOne (version 10.50.1600, located on the office server) and dbTwo (version 10.0.1600, located on my local server).
I want to copy dbOne's tables with data to dbTwo.
Is there any way or script to do it? I don't want to upgrade my local server version!

How to determine that the chain of log degraded

Posted: 24 Jun 2013 09:03 AM PDT

SQL Server 2005.

I have a DB with the full recovery model. If I execute on my database

ALTER DATABASE test SET RECOVERY SIMPLE
ALTER DATABASE test SET RECOVERY FULL

or

BACKUP LOG test WITH TRUNCATE_ONLY

then when I try to run a log backup

BACKUP LOG test TO  DISK = 'backupfile.bak'  

I receive the error message

BACKUP LOG cannot be performed because there is no current database backup.

Question:
How can I check that a log backup is not possible (i.e. that the log chain is broken), without actually running a log backup?
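One way to check this on SQL Server 2005, without attempting a backup, is to inspect sys.database_recovery_status: a NULL last_log_backup_lsn means there is no unbroken log chain, so a log backup would fail exactly as shown above. A sketch:

```sql
SELECT DB_NAME(database_id) AS db_name,
       last_log_backup_lsn
FROM sys.database_recovery_status
WHERE database_id = DB_ID('test');
-- last_log_backup_lsn IS NULL  =>  the log chain is broken;
-- take a full (or differential) backup before the next log backup
```

Switching to SIMPLE recovery or using TRUNCATE_ONLY both reset this LSN, which is why the subsequent BACKUP LOG complains.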

Full text search on multiple tables in PostgreSQL

Posted: 24 Jun 2013 12:52 PM PDT

I have several tables that have related data. I want to have a search input box on the dashboard of the system that will search for anything (client, invoice, delivery note, ...).

Is it possible to have a FTS implementation in PostgreSQL that will get the item searched for in more than one table and get the results?

For example, if the user typed 45, the query should look in the clients table and return the clients with id 45 or name has 45, also look in the invoices table and return the invoices with number 45, and also look into the delivery notes table and return the delivery notes with number 45.

Or is there a better way of solving this problem without using full text search?
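A straightforward pattern is a UNION ALL of per-table searches, each branch tagged with its source entity so the dashboard can render mixed results. This is a sketch with assumed table and column names:

```sql
SELECT 'client' AS source, c.id, c.name AS label
FROM clients c
WHERE to_tsvector('simple', c.name || ' ' || c.id::text)
      @@ plainto_tsquery('simple', '45')
UNION ALL
SELECT 'invoice', i.id, i.number::text
FROM invoices i
WHERE to_tsvector('simple', i.number::text)
      @@ plainto_tsquery('simple', '45')
UNION ALL
SELECT 'delivery_note', d.id, d.number::text
FROM delivery_notes d
WHERE to_tsvector('simple', d.number::text)
      @@ plainto_tsquery('simple', '45');
```

Each branch can be backed by its own expression GIN index; an alternative design is a single "search" table that every entity writes a (source, id, tsvector) row into, which keeps the dashboard query to one indexed scan.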

WITH ROLLUP WHERE NULL = x

Posted: 24 Jun 2013 12:37 PM PDT

I tried to find this question somewhere else on here and found a couple that were similar, but all the answers seemed pretty convoluted, which surprised me because I imagined SQL would have a much easier way to do this.

Essentially, I'm using a GROUP BY to group two values on top of each other. The coding is simple and works well, where I get these results using WITH ROLLUP at the end:

Type    Person     Count
Type A  Person 1      10
Type A  Person 2      91
Type A  Person 3      13
Type A  Person 4      10
Type A  Person 5       2
Type A  NULL         126
Type B  Person 6      16
Type B  Person 7      63
Type B  Person 8      15
Type B  Person 9      22
Type B  Person 10     16
Type B  Person 11      3
Type B  Person 12     20
Type B  Person 13     44
Type B  NULL         198
NULL    NULL         360

Is there an easy way to say: if NULL is in the "Person" column, display "Subtotal", and if NULL is in the "Type" column, display "Grand Total"?

And if that's weirdly complicated, I'd be totally fine with just calling all of them "Total."

Thanks!
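If this is SQL Server (WITH ROLLUP exists in both T-SQL and MySQL, but the GROUPING() function only reached MySQL in 8.0), GROUPING() is the clean answer, since it distinguishes rollup-generated NULLs from NULLs that are real data. A sketch with an assumed table name:

```sql
SELECT
    CASE WHEN GROUPING(Type) = 1 THEN 'Grand Total' ELSE Type END AS Type,
    CASE
        WHEN GROUPING(Type)   = 1 THEN 'Grand Total'
        WHEN GROUPING(Person) = 1 THEN 'Subtotal'
        ELSE Person
    END AS Person,
    COUNT(*) AS [Count]
FROM orders                      -- assumed table name
GROUP BY Type, Person WITH ROLLUP;
```

On a pre-8.0 MySQL server the fallback is wrapping the columns in IFNULL/COALESCE, with the caveat that genuinely NULL data would then also be labeled as a total.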

why would command text not execute at all?

Posted: 24 Jun 2013 05:37 PM PDT

In my report I have:

<CommandText>SELECT
    column1,
    column2,
    'poop'
FROM mytable
</CommandText>

I am using SQL Server Profiler to see exactly what statement is being sent.

I have set only two filters:

  1. DatabaseName
  2. (a second filter, shown in a screenshot that is omitted here)

Yet after running the report, no statement gets intercepted.

I suspect that, because I am a beginner at SSRS, I am missing something crucial here.

For what reason would the CommandText not be executed at all?

I did follow this question to make sure that I am using SQL Profiler correctly, and indeed I am: http://stackoverflow.com/questions/9107383/sql-server-profiler-capture-calls-to-your-databases-stored-procs-during-ssrs

Another bit of important information: although the chart shows no data (screenshot omitted), I actually do get data when I run the CommandText from SSMS!

Avoiding performance hit from GROUP BY during FULLTEXT search?

Posted: 24 Jun 2013 09:37 AM PDT

Is there any clever way to avoid the performance hit from using group by during fulltext search?

SELECT p.topic_id, MIN(p.post_id)
FROM forum_posts AS p
WHERE MATCH (p.post_text) AGAINST ('baby shoes' IN BOOLEAN MODE)
GROUP BY p.topic_id
LIMIT 20;

In this example it's fetching the lowest post_id for unique topic_ids that match the text.

With the group by to find the min, it's taking 600ms in a million row database, with about 50K rows examined.

If I remove the MIN but leave the GROUP BY, it's the same slowness, so it's the GROUP hit.

I suspect this is because it can only use one index, the fulltext ?

key: post_text | Using where; Using temporary; Using filesort

Query_time: 0.584685  Lock_time: 0.000137  Rows_sent: 20  Rows_examined: 57751
Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0

Without the GROUP BY it's 1ms so this has to be filesort speed?

(I've removed ORDER BY and everything else to isolate where the hit is)

Thanks for any insight and ideas.

(using MyISAM under mariadb if it matters)

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 24 Jun 2013 11:37 AM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of non-high IOPS RDS vs EC2 but nothing with the new high IOPS feature in RDS.

How to troubleshoot SQL Server Merge Replication Conflict?

Posted: 24 Jun 2013 08:38 AM PDT

I have SQL Server 2008 R2 Merge Replication setup.

I have 2 publications for the same database with 20+ subscribers.

The logic of our application prevents changing the same data from 2 subscribers at the same time.

Lately I have started getting conflicts for some specific tables, and almost all the time the contents of the two conflicting rows (the publisher row and the subscriber row) are identical, with no differences at all!

My question is: what are the standard steps to troubleshoot conflicts and trace down to the point that is causing them?

Open source and commercial tools to build dynamic queries of fact tables in star schema

Posted: 24 Jun 2013 03:08 PM PDT

What user interfaces or tools (commercial and/or open source) can query a star schema model (Ralph Kimball dimensional model)? So for example, if we have X fact tables, and Y dimension tables, and Z cross reference tables, we'd like to query a random fact table. If you are aware of such tools, can you provide a reference of each.

sp_startpublication_snapshot Parameter(s)

Posted: 24 Jun 2013 06:37 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf a new name (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help doing) Generating the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall... On this DB server, I have a 'shell' db that they will be running the SP from, that has a history table so I can keep track of who created/deleted databases using my SP's... The only parameter for sp_startpublication_snapshot is @publication... I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e.: the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog]

Any ideas about how to accomplish this?

Thank you, Wes
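Since replication system procedures run in the context of the current database, one approach (a sketch; names are taken from the question) is to qualify the call with a three-part name so it executes in the publishing database even though the session is in WC_QACatalog:

```sql
-- run while connected to WC_QACatalog, but execute in the publishing DB
EXEC [WC48_Database1].dbo.sp_startpublication_snapshot
     @publication = N'upb_Inspection_PrimaryArticles';

-- or, if the database name arrives as a parameter of your wrapper SP:
DECLARE @db SYSNAME = N'WC48_Database1';
DECLARE @proc NVARCHAR(400) =
    QUOTENAME(@db) + N'.dbo.sp_startpublication_snapshot';
EXEC @proc @publication = N'upb_Inspection_PrimaryArticles';
```

The second form relies on EXEC accepting a module name held in a variable, which keeps the wrapper generic across the databases your SP restores.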

Passing path from batch file to mysql [closed]

Posted: 24 Jun 2013 08:36 AM PDT

My batch file will contain some thing like this :

set MUSER=root
set MPASS=xprd123
set MHOST=172.33.49.55
set DB=DBName1
set MBIN=C:\Program Files\MySQL\MySQL Server 5.5\bin
set scriptpath=D:\STATS_COLLECTOR

"%MBIN%\mysql.exe" -u"%MUSER%" -p"%MPASS%" -h "%MHOST%" "%DB%" -e "set @2:='%scriptpath%'; source DBQuery1.sql;"

Content of DBQuery1.sql will be :

SELECT 'INNODB_OS_LOG_WRITTEN', 'QUESTIONS'
UNION
SELECT
    MAX(IF(variable_name = 'INNODB_OS_LOG_WRITTEN', VARIABLE_VALUE, NULL)) AS INNODB_OS_LOG_WRITTEN,
    MAX(IF(variable_name = 'QUESTIONS', VARIABLE_VALUE, NULL)) AS QUESTIONS
FROM information_schema.GLOBAL_STATUS
INTO OUTFILE '@2/stats.csv'
    FIELDS TERMINATED BY ','
    ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

I am not sure this is the right way to pass a path to the SELECT query inside DBQuery1.sql.

This code is not working.

I looked at other posts on passing path to batch file, but whatever I tried, I am not able to solve this issue.

When I execute the batch file, I am not sure if the path is getting passed into the sql file and if yes, why @2 is not getting the path passed.

Could anybody kindly help on this.
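The likely culprit is that @2 inside the single-quoted filename is never expanded: INTO OUTFILE takes a string literal, not an expression, so the server tries to write to a directory literally named "@2". One workaround (a sketch, assuming your server permits SELECT ... INTO OUTFILE inside a prepared statement) is to build the statement text with CONCAT inside DBQuery1.sql:

```sql
SET @sql = CONCAT(
  "SELECT 'INNODB_OS_LOG_WRITTEN','QUESTIONS' UNION ",
  "SELECT MAX(IF(variable_name='INNODB_OS_LOG_WRITTEN',VARIABLE_VALUE,NULL)), ",
  "       MAX(IF(variable_name='QUESTIONS',VARIABLE_VALUE,NULL)) ",
  "FROM information_schema.GLOBAL_STATUS ",
  "INTO OUTFILE '", @2, "/stats.csv' ",
  "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\\n'");
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```

Note the batch file must pass the path with forward slashes (or doubled backslashes), since a bare Windows path like D:\STATS_COLLECTOR will have its backslash treated as an escape inside the SQL string.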

SQL Server 2012 Standard Edition - multiple instances and memory utilization

Posted: 24 Jun 2013 09:43 AM PDT

If we have multiple instances of a SQL Server 2012 Standard Edition (which has a 64 GB memory limit) on one server which has 192 GBs of RAM, do both instances only have access to the first 64 GBs of memory, or can they access different portions of memory, so that they can each have their own 64 GB "chunk".

This is in consideration for an Active/Active cluster if both nodes fail over to a single node.
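The Standard Edition limit applies per instance, so each instance manages its own buffer pool and the two together can use more than 64 GB of the 192 GB. To keep co-located instances from competing after a failover, the usual practice is to cap each one explicitly with max server memory (the value below is just an example split):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- cap this instance at 64 GB (example value; tune per workload)
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;
```

Leave enough headroom for the OS and any other services when choosing the caps for all instances that might land on one node.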

Flashback feature in Oracle 11g?

Posted: 24 Jun 2013 11:17 AM PDT

I am using Oracle 11gR1. As a part of a project, I have to execute multiple update statements by disabling a few constraints (triggers and check constraints) which could result in a corrupted database.

If the database gets corrupted, I have to re-import it entirely, which is an additional task that is time consuming.

I have very little knowledge about the Oracle Flashback feature. Will Flashback help me to avoid these unnecessary imports? Can I set some stable point and switch back to there if anything goes wrong?

Can someone please give me brief overview on this feature?

If it fits my requirements, I will try to learn.
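The Flashback feature that matches "set a stable point and switch back" is a guaranteed restore point combined with Flashback Database. A sketch (it requires ARCHIVELOG mode, and the flashback itself must be run with the database mounted, not open):

```sql
-- before the risky updates
CREATE RESTORE POINT before_migration GUARANTEE FLASHBACK DATABASE;

-- if things go wrong (from SQL*Plus or RMAN as SYSDBA)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_migration;
ALTER DATABASE OPEN RESETLOGS;

-- when the migration succeeds, release the space held by the restore point
DROP RESTORE POINT before_migration;
```

This rewinds the whole database to the marked point in minutes instead of hours of re-import; the trade-off is flashback log space in the fast recovery area while the guaranteed restore point exists.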

PostgreSQL 9.1 Hot Backup Error: the database system is starting up

Posted: 24 Jun 2013 08:24 PM PDT

I have been working on a hot backup for Postgres 9.1 for awhile and have run into a consistent issue. After restarting Postgres on the slave server the pgstartup log file and the daily log file under the pg_log directory reads with no errors. However, when I try to enter into the database using the psql command, I get the error:

  FATAL: the database system is starting up.  

The recovery.conf file also does not get renamed to recovery.done. I have extensively researched this error and consistently find the same response: the database was not cleanly shut down before I tried to restart Postgres. The only ways I have restarted Postgres are through the service postgresql-9.1 restart or /etc/init.d/postgresql-9.1 restart commands. After I receive this error, I kill all processes and again try to restart the database, and still receive the same error. I am at a loss as to where to go from here and how to fix this issue. Below is the exact process I have followed to perform the hot backup.

Master Server Configurations:

pg_hba.conf, added the line:

  host replication postgres IPAddressOfSlaveServer trust  

postgresql.conf:

  wal_level = hot_standby
  max_wal_senders = 5
  wal_keep_segments = 32
  listen_addresses = '*'
  port = 5432

Slave Server Configurations:

postgresql.conf:

  hot_standby = on  

recovery.conf:

  standby_mode = 'on'
  primary_conninfo = 'host=IPAddressOfMasterServer port=5432 user=postgres'
  restore_command = 'cp /var/lib/pgsql/9.1/data/pg_xlog/%f "%p"'

After configuring both servers

I change to the postgres user on the master server and run the commands:

  psql -c "SELECT pg_start_backup('label', true);"
  rsync -a -v -e ssh /var/lib/pgsql/9.1/data slave:/var/lib/pgsql/9.1/data \
      --exclude postmaster.pid
  psql -c "SELECT pg_stop_backup();"

After syncing the database with the slave server

I restart the slave server and the startup does not fail. The pgstartup.log reads:

  Success. You can now start the database server using:

      /usr/pgsql-9.1/bin/postgres -D /var/lib/pgsql/9.1/data
  or
      /usr/pgsql-9.1/bin/pg_ctl -D /var/lib/pgsql/9.1/data -l logfile start

The current day's log file, postgresql-Thu.log, reads:

  LOG: shutting down
  LOG: database system is shut down
  LOG: database system was shut down in recovery at 2012-4-10
  LOG: entering standby mode
  LOG: restored log file "logFileName" from archive
  LOG: consistent recovery state reached at 0/BF0000B0
  LOG: redo starts at 0/BF000020
  LOG: restored log file "logFileName" from archive
  LOG: unexpected pageaddr 0/85000000 in log file 0, segment 192, offset 0
  LOG: unexpected pageaddr 0/85000000 in log file 0, segment 192, offset 0
  LOG: streaming replication successfully connected to primary

I researched "unexpected pageaddr" and, from the Postgres archives, my understanding is that it is quite normal and is one of the expected ways of detecting end-of-WAL.

Any advice would be greatly appreciated.
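One way to narrow this down (a diagnostic sketch, not part of the original post): confirm whether the standby ever reaches a connectable state, and whether the master sees the replication connection at all:

```sql
-- On the standby, once connections are accepted this returns true.
-- While psql reports "the database system is starting up", it cannot
-- connect at all, which usually means hot_standby is not taking effect
-- or recovery has not yet reached a consistent point.
SELECT pg_is_in_recovery();

-- On the master (9.1+), check that the standby's WAL receiver is attached
-- and how far behind it is:
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;
```

If `pg_stat_replication` shows a connected standby but psql on the slave still refuses connections, double-check that the `hot_standby = on` line actually landed in the standby's postgresql.conf (and not only on the master, since rsync copies the master's config over).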

[SQL Server] How to make hierarchical structure



How to make hierarchical structure

Posted: 24 Jun 2013 12:57 AM PDT

I have 5 tables, and my query returns data from them.

Table1 = tldDocuments {Docid (pk, autonumber), Title (text), Doc (text)}

  Docid  Docno
  1      doc-001
  2      doc-005

Table2 = tblTransmittalNo {Transid (pk, autonumber), Tansmittalno (text)}

  Transid  Transmittalno
  1        tt-0002

Table3 = tblTransmittals {Docid (fk to tblDocuments, number), Transid (fk to tblTransmittalNo, number)}

  Docid  Transid
  1      1
  2      1

Table4 = tblTranstoCon {TransmittalToConId (pk, autonumber), TransmittaltoCon (text)}

  TransmittalToConId  TransmittaltoCon
  1                   CON-0003
  2                   CON-0005
  3                   CON-0007

Table5 = tblDocTrantoCon {Docid (number, pk, fk to tblTransmittals), Transid (number, pk, fk to tblTransmittals), TransToCon (number, pk, fk to tblTranstoCon)}

  Docid  Transid  TransmittalToConId
  1      1        1
  2      1        2
  1      1        3

Table5 has a composite foreign key (Docid, Transid) to Table3 because the same (Docid, Transid) pair can map to different TransmittalToConId values. I designed these tables two years ago and I am not sure the design is completely correct; data has already been entered, so if the table design itself has problems, please help with that too.

Now I would like to build a hierarchical structure from the data returned by the query below, adding ParentID and Position fields.

Query:

  SELECT tldDocuments.Docno, tblTransmittalNo.Tansmittalno, tblTranstoCon.TransmittaltoCon
  FROM tblTranstoCon
  INNER JOIN ((tblTransmittalNo
      INNER JOIN (tldDocuments
          INNER JOIN tblTransmittals ON tldDocuments.Docid = tblTransmittals.Docid)
      ON tblTransmittalNo.Transid = tblTransmittals.Transid)
      INNER JOIN tblDocTrantoCon
          ON (tblTransmittals.Transid = tblDocTrantoCon.Transid)
         AND (tblTransmittals.Docid = tblDocTrantoCon.DocID))
  ON tblTranstoCon.TransmittalToConId = tblDocTrantoCon.TransToCon;

Query result:

  DocNo    TransmittalNo  TransmittaltoCon
  Doc-001  tt-0002        con-0003
  Doc-001  tt-0002        con-0007
  Doc-005  tt-0002        con-0005

Desired hierarchy:

  ID  PARENT  POSITION  OPTION
  1   0       0         DOC-0001
  2   1       0         TT-0002
  3   0       1         DOC-0005
  4   2       1         CON-0007
  5   3       0         TT-0002
  6   5       0         CON-0003

Data from two different databases

Posted: 24 Jun 2013 03:08 AM PDT

I'm new to the forum, so I hope this is the right place to ask this question. I have a production database and an archive database. I want to create a way to pull a combined report with data from both. The databases are identical and on the same SQL instance, and I'm looking for the best solution for my needs. Can I create a view that pulls from views already created in both databases? Would SSAS or SSIS be more helpful for this? Ultimately I need to run a report that pulls information from both databases into a single report, where it looks like it comes from one data source. Thanks! Clint
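Since both databases sit on the same instance, a plain view using three-part names is often enough for this kind of combined report. A sketch (the database, schema, table, and column names here are hypothetical):

```sql
-- In a reporting database (or either of the two), combine the
-- identically-structured tables and tag each row with its origin.
CREATE VIEW dbo.vw_CombinedOrders
AS
SELECT 'Production' AS SourceDb, o.OrderID, o.OrderDate, o.Amount
FROM   ProductionDb.dbo.Orders AS o
UNION ALL
SELECT 'Archive', a.OrderID, a.OrderDate, a.Amount
FROM   ArchiveDb.dbo.Orders AS a;
```

The same pattern works on top of the views already created in each database; SSIS or SSAS only become necessary if the data has to be consolidated on a schedule or served from a cube.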

Is it POSSIBLE to automatic change the language in reports?

Posted: 23 Jun 2013 10:18 PM PDT

Hello SQLServerCentral gurus, I have a quick question about choosing languages. I have a number of reports that were created in Dutch. I want users to have the option, when opening a report, to switch it to English, and once chosen it should stay in English automatically. Is that possible in Visual Studio 2010? Greetz, ME. Be love, be hate, but still SQL!

Problem in SQL Server - Service Account

Posted: 23 Jun 2013 04:19 PM PDT

Hi all,

Recently I have been getting the below error in all my SQL Server error logs; it occurs repeatedly. I have googled the issue but so far have not found a clear solution. Can anyone please help me fix it? Thanks.

Error log:

  2013-06-24 12:30:41.49 Logon  Login failed for user 'DOMAIN\SERVICEACCOUNT'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]
  2013-06-24 12:31:41.59 Logon  Error: 18456, Severity: 14, State: 38.

Regards,
Amar
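For context, error 18456 state 38 means the login authenticated successfully but the database it explicitly requested could not be opened (dropped, offline, or access denied). A diagnostic sketch (the login name is a placeholder matching the log above):

```sql
-- Check which database the failing login uses by default ...
SELECT name, default_database_name
FROM   sys.server_principals
WHERE  name = N'DOMAIN\SERVICEACCOUNT';

-- ... and whether every database on the instance is actually online.
SELECT name, state_desc
FROM   sys.databases;
```

If the default (or explicitly requested) database turns out to be missing or offline, repointing the login or the connection string is usually the fix.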

ERD in Visio

Posted: 23 Jun 2013 03:20 PM PDT

I am preparing an ERD in MS Visio 2010 Premium edition. After creating the entities, I found that I am unable to resize them: I can move them but cannot resize them. I tried developer mode and the snap-to-sheet setting as suggested in other forums, but it is not helping. Any suggestions, please?

[Articles] When Is Work, Work?



When Is Work, Work?

Posted: 23 Jun 2013 11:00 PM PDT

Where is the dividing line between work time and personal time? As more and more employees get new phones, such as the BlackBerry and iPhone, is the line becoming too blurred? Are there legal implications?


[MS SQL Server] what are the high level steps to move a server from one datacenter to another datacenter



what are the high level steps to move a server from one datacenter to another datacenter

Posted: 23 Jun 2013 07:42 PM PDT

What are the high-level steps to move a server from one datacenter to another? Thanks

Error importing SharePoint 2010 list into SQL Server 2008 R2 database through SSIS

Posted: 23 Jun 2013 01:07 PM PDT

Hi,

I want to import SharePoint 2010 list data into a different SQL Server 2008 R2 database. The table structure for each list was created by exporting the lists to Excel and then using the Import and Export wizard. I created an SSIS package using the [url=http://sqlsrvintegrationsrv.codeplex.com/releases/view/17652]SharePoint SSIS Adapter[/url]. When I execute the package from BIDS to test it, I get the following errors through the SendMail task:

  Error code: -1073450910
  Error description: System.Runtime.InteropServices.COMException (0xC0047020): Exception from HRESULT: 0xC0047020
    at Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSBuffer100.AddRow(IntPtr ppRowStart)
    at Microsoft.SqlServer.Dts.Pipeline.PipelineBuffer.AddRow()
    at Microsoft.Samples.SqlServer.SSIS.SharePointListAdapters.SharePointListSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
    at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)

  Error code: -1073450952
  Error description: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "SharePoint List Source - SP Sample" (2806) returned error code 0xC0047020. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.

  Error code: -1073450910
  Error description: Microsoft.Samples.SqlServer.SSIS.SharePointUtility.SharePointUnhandledException: Unspecified SharePoint Error. A possible reason might be you are trying to retrieve too many items at a time (Batch size) ---> System.ServiceModel.FaultException: Exception of type 'Microsoft.SharePoint.SoapServer.SoapServerException' was thrown.
  Server stack trace:
    at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
    at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
    at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
  Exception rethrown at [0]:
    at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
    at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
    at Microsoft.Samples.SqlServer.SSIS.SharePointUtility.ListsService.ListsSoap.GetListItems(GetListItemsRequest request)
    at Microsoft.Samples.SqlServer.SSIS.SharePointUtility.ListsService.ListsSoapClient.ListsService_ListsSoap_GetListItems(GetListItemsRequest request)
    at Microsoft.Samples.SqlServer.SSIS.SharePointUtility.ListsService.ListsSoapClient.GetListItems(String listName, String viewName, XElement query, XElement viewFields, String rowLimit, XElement queryOptions, String webID)
    at Microsoft.Samples.SqlServer.SSIS.SharePointUtility.Adapter.ListsAdapter.GetSharePointListItems(String listName, String viewId, XElement queryXml, XElement viewXml, Int16 pagingSize, XElement queryOptionsXml)
  --- End of inner exception stack trace ---
    at Microsoft.Samples.SqlServer.SSIS.SharePointUtility.Adapter.ListsAdapter.GetSharePointListItems(String listName, String viewId, XElement queryXml, XElement viewXml, Int16 pagingSize, XElement queryOptionsXml)
    at Microsoft.Samples.SqlServer.SSIS.SharePointUtility.Adapter.ListsAdapter.GetSharePointListItemData(String listName, String viewId, IEnumerable`1 fieldNames, XElement query, Boolean isRecursive, Int16 pagingSize)
    at Microsoft.Samples.SqlServer.SSIS.SharePointUtility.ListServiceUtility.GetListItemData(Uri sharepointUri, NetworkCredential credentials, String listName, String viewName, IEnumerable`1 fieldNames, XElement query, Boolean isRecursive, Int16 pagingSize)
    at Microsoft.Samples.SqlServer.SSIS.SharePointListAdapters.SharePointListSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
    at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)

Package settings:

  BatchSize = 1000 (some of the lists had 5000 records)
  DelayValidation = True
  Run64BitRuntime = False

Please point out what I am doing wrong.

Thanks in advance,
Puja

SSIS FTP Package Fails as a Job but Runs Manually

Posted: 23 Jun 2013 05:45 AM PDT

I wrote an SSIS package that pulls files from a client's FTP site. When run manually through "Execute Package Utility" -> "Execute", it runs to completion. If I try to run it in a query window:

  exec xp_cmdshell 'dtexec /DTS "\MSDB\Website_FTP_Import" /SERVER MySQLServerName /CHECKPOINTING OFF /REPORTING V'

I get the following error saying the password is not allowed, even though the password is in the package and it works when run the other way:

  Microsoft (R) SQL Server Execute Package Utility
  Version 10.0.2520.0 for 32-bit
  Copyright (C) Microsoft Corp 1984-2005. All rights reserved.

  Started: 14:25:49
  Error: 2013-06-23 14:25:49.97  Code: 0xC0016016  Source:
    Description: Failed to decrypt protected XML node "DTS:Property" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available.
  End Error
  Error: 2013-06-23 14:25:54.08  Code: 0xC001602A  Source: Website_FTP_Import Connection manager "FTP Connection Manager"
    Description: An error occurred in the requested FTP operation. Detailed error description: The password was not allowed.
  End Error
  Error: 2013-06-23 14:25:54.08  Code: 0xC002918F  Source: FTP Task - RoundTable FTP Task
    Description: Unable to connect to FTP server using "FTP Connection Manager".
  End Error
  Warning: 2013-06-23 14:25:54.08  Code: 0x80019002  Source: Website_FTP_Import
    Description: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
  End Warning
  DTExec: The package execution returned DTSER_FAILURE (1).
  Started: 14:25:49
  Finished: 14:25:54
  Elapsed: 4.203 seconds

[SQL 2012] Intermittent Failures with new 2012 SSIS Job



Intermittent Failures with new 2012 SSIS Job

Posted: 24 Jun 2013 01:58 AM PDT

Our team has several jobs in production using the old package deployment model. They have been running successfully and we haven't had any issues. However, we recently developed a new job that uses the project deployment model in SSIS 2012. It's a quick job that typically takes less than a minute. We set it up to run at 6:00 and 6:30, and it never fails consecutively. Since the beginning of the month, it has failed at:

- 6/22 at 6:30
- 6/18 at 6:30
- 6/16 at 6:00
- 6/13 at 6:00
- 6/11 at 6:00
- 6/10 at 6:00

On every other day and time not listed, it ran successfully. The job calls a few SSIS packages and processes a cube. The logs for the failures give almost no information: all they say when it fails is "ended unexpectedly." The failures occur at different stages in the job and can happen after 6, 8, or 10 seconds; it appears to be completely random. I know this is kind of a shot in the dark, but has anyone experienced similar problems with the new project deployment model? Any tips or ideas for ways to resolve the failures?

how make online index

Posted: 23 Jun 2013 11:02 PM PDT

Hi, I want to rebuild indexes online on all databases on the server.
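As a sketch of what an online rebuild looks like for a single index (Enterprise Edition only; the table and index names are placeholders — a full script would loop over sys.databases and sys.indexes, skipping indexes with LOB columns, which cannot be rebuilt online before SQL Server 2012):

```sql
-- Rebuild one index online; readers and writers are not blocked
-- for the duration of the rebuild (apart from brief locks at the
-- start and end of the operation).
ALTER INDEX IX_Orders_OrderDate
ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- Or rebuild every index on one table online:
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (ONLINE = ON);
```

A server-wide version typically wraps this in a cursor over `sys.databases` with dynamic SQL, or uses an existing maintenance script rather than hand-rolling the loop.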

Tablix width changing when expanding field

Posted: 23 Jun 2013 10:26 PM PDT

I have a tablix with the middle column's text hidden until the left column is expanded. All the text fits, but when I click the expand "+", the whole tablix width expands, causing other items on the report to move with it. Strangely, it never happens for the first row, but the problem occurs for any other row. There is no reason for it to widen, as no text is longer than the fields. I have tried two different browsers and get the same problem; however, it does not occur in the "Preview" view in BIDS/Data Tools.

sql 2012 mirroring using sql xpress as witness

Posted: 23 Jun 2013 09:19 AM PDT

Hi, I'm trying to set up mirroring between two SQL Server 2012 instances and use SQL Server 2012 Express as the witness. The principal and mirror instances do not see the Express instance (which is a VM, by the way), but the witness sees the principal and mirror, which looks like a compatibility issue. Can I use Express as a witness when setting up SQL Server 2012 mirroring? All instances are in the same domain.
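Express can serve as a mirroring witness, but it needs its own mirroring endpoint and the partners must be able to reach it over TCP. A sketch of the endpoint setup (the port, database, and host names are illustrative):

```sql
-- On the Express witness instance: create and start a mirroring
-- endpoint in the WITNESS role.
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = WITNESS);

-- On the principal, once the partners are configured, point at the witness:
ALTER DATABASE MyMirroredDb
    SET WITNESS = 'TCP://witness-host.domain.local:5022';
```

If the partners "do not see" the witness, the usual suspects are a firewall blocking the endpoint port on the VM, a missing endpoint on the Express side, or the service accounts lacking CONNECT permission on each other's endpoints.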

[T-SQL] help with datediff



help with datediff

Posted: 23 Jun 2013 11:20 PM PDT

Hello there. I have 2 time fields:

- start date
- end date

The format of these columns is hhmmss. I want to calculate the duration (end date - start date). Could I do this with DATEDIFF or any other function? Note that a situation like this can happen:

- start date: 235641
- end date: 001612

If you subtract (001612 - 235641), you get a negative number; in that case, I think you need to add 24 hours. Thank you.
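One way to handle the midnight wraparound without DATEDIFF (a sketch, assuming the columns hold 6-character hhmmss strings; the variable names are placeholders): convert each value to seconds past midnight and take the difference modulo 24 hours, so a negative difference automatically wraps:

```sql
-- Convert hhmmss strings to seconds-past-midnight, then take the
-- difference modulo 86400 so an end time past midnight still yields
-- a positive duration.
DECLARE @start char(6) = '235641',
        @end   char(6) = '001612';

SELECT ( ( CAST(LEFT(@end, 2)       AS int) * 3600
         + CAST(SUBSTRING(@end, 3, 2)   AS int) * 60
         + CAST(RIGHT(@end, 2)      AS int) )
       - ( CAST(LEFT(@start, 2)     AS int) * 3600
         + CAST(SUBSTRING(@start, 3, 2) AS int) * 60
         + CAST(RIGHT(@start, 2)    AS int) )
       + 86400 ) % 86400 AS DurationSeconds;
-- 23:56:41 -> 00:16:12 gives 1171 seconds (19 minutes 31 seconds)
```

This only works if durations are guaranteed to be under 24 hours; anything longer needs actual date columns.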

Delete Operation on table with 60 Million records

Posted: 23 Jun 2013 05:41 PM PDT

Hi,

[b]Environment: SQL Server 2008 Enterprise Edition with sufficient hardware.[/b]

I have a table with more than 60 million records on which I am performing a delete operation using a stored proc. I am deleting the records in batches using a TOP clause, joining to a #temp table (with no indexes). The main table has 1 clustered index and 3 non-clustered indexes. The issue is that this block alone takes 95% of the stored proc's total execution time. Below is the query I am using to delete records:

  WHILE 1 = 1
  BEGIN
      DELETE TOP (10000) t
      FROM EXPORT.SE_VARCS_DBF t        -- main table with > 60 million rows
      INNER JOIN #PatchIdentifiers p    -- temp table
              ON t.M_SCNTYPE = p.M_SCNTYPE
             AND t.M_FAMILY0 = p.M_FAMILY0
             AND t.M_MLABEL0 = p.M_MLABEL0
             AND t.M_SLABEL0 = p.M_SLABEL0
      WHERE t.ContainerID = @TargetContainerID

      IF @@ROWCOUNT = 0 BREAK
  END

Could you please help me out with some advice?

Thanks,
Arjun.
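One common first step for this pattern (a sketch, not a guaranteed fix): index the #temp table on the four join columns so each batch can seek into it instead of scanning it repeatedly:

```sql
-- A clustered index on the temp table covering the join columns lets
-- each DELETE batch seek into #PatchIdentifiers rather than scanning
-- it once per 10,000-row batch.
CREATE CLUSTERED INDEX IX_PatchIdentifiers
ON #PatchIdentifiers (M_SCNTYPE, M_FAMILY0, M_MLABEL0, M_SLABEL0);
```

Whether this helps depends on the plan; checking the actual execution plan for the DELETE (and whether maintenance of the three non-clustered indexes dominates the cost) is the natural next step.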

GetDate() 5 Years ago without time formated 'YYYY-MM-DD'

Posted: 23 Jun 2013 03:40 AM PDT

I'm trying to restrict a query to go back five years. I have the following query, which is not in the format that I need:

  SELECT convert(varchar(11), DateAdd(yy, -5, GetDate()))

Returns:

  Jun 23 2008

What I need is the following:

  '2008-06-23'

Thank you.
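For reference, CONVERT's third argument selects the output format; style 120 (ODBC canonical) produces yyyy-mm-dd hh:mi:ss, so truncating to 10 characters yields the desired shape:

```sql
-- Style 120 = 'yyyy-mm-dd hh:mi:ss'; varchar(10) keeps only the date part.
SELECT CONVERT(varchar(10), DATEADD(yy, -5, GETDATE()), 120);
-- e.g. run on 2013-06-23 this returns '2008-06-23'
```

For the actual restriction, comparing the raw date column against `DATEADD(yy, -5, GETDATE())` directly (rather than a formatted string) keeps the predicate sargable.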

Need help with the code

Posted: 23 Jun 2013 02:39 PM PDT

Hi,

When I am running the below stored procedure through a job, I get this error message:

  Executed as user: NT AUTHORITY\SYSTEM. Conversion failed when converting the varchar value '2314K' to data type int. [SQLSTATE 22018] (Error 245). The step failed.

Can you please help me fix the above error?

Stored procedure code:

  USE [Nice]
  GO
  SET ANSI_NULLS ON
  GO
  SET QUOTED_IDENTIFIER OFF
  GO
  CREATE PROCEDURE [dbo].[SP_Test1]
  AS
  BEGIN
      BULK INSERT Table_Test
      FROM 'C:\Test\Test.csv'
      WITH
      (
          FIELDTERMINATOR = '","',
          ROWTERMINATOR = '\"'
      )

      INSERT INTO [dbo].[Table_Test1]
          ([emp_code], [first_name], [last_name], [middle_name], [initials],
           [pay_code], [pay_rate], [paid_by], [birth_date], [gender], [ssn],
           [street_addr1], [street_addr2], [city], [state], [zip],
           [phone1], [phone2], [hire_date], [sen_date], [dept_date],
           [term_date], [status], [wrk_status], [hrs_week], [homedept_id],
           [Test_Code], [user_name], [Testing_Code2_1], [Testing_Code2_2],
           [Testing_Code2_3], [Testing_Code2_4], [Testing_Code2_5])
      SELECT LEFT([col3], 6), [col5], [col4], NULL, NULL, NULL, 0, [col21],
             NULL, 'M', RIGHT([col26], 4), NULL, NULL, NULL, NULL, NULL,
             [col10], [col11], [col6], [col7], NULL, [col8],
             CASE [col22]
                 WHEN 'B' THEN 'U'
                 WHEN 'U' THEN 'U'
                 WHEN 'T' THEN 'C'
                 ELSE 'C'
             END,
             [col9], NULL, -1, [col16],
             LEFT([col25], LEN([col25]) - LEN('NSTES')),
             NULL, [col20], [col12], [col14], LEFT([col15], 30)
      FROM [dbo].[Table_Test]

      UPDATE Table_Test1 SET Test_Code = 1001 WHERE Test_Code = 1000
  END

Thank you,
