Monday, June 24, 2013

[how to] How can I speed up PgAdmin III Queries?

How can I speed up PgAdmin III Queries?

Posted: 24 Jun 2013 01:55 PM PDT

I've been noticing that PgAdmin III isn't as responsive as I expected it to be.

For instance, a select * ... query on a table containing 20,397 rows takes 19.531 seconds in PgAdmin III but only 7.383 seconds in DbVisualizer 9.0.7. I initially thought this was a coincidence, but the gap has been consistent across every query I have run.

It appears that PgAdmin III consistently takes roughly two to three times as long as DbVisualizer to run the same query. So how can I speed up PgAdmin III queries?

I am using Heroku Postgres 9.1.9 on the Crane package.
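One way to narrow this down (a sketch, not from the original post; your_table is a placeholder): run the query with EXPLAIN ANALYZE in both tools. If the server-side execution time it reports is similar in each, the difference is client-side fetching and grid rendering rather than query execution.

    -- Reports server-side planning and execution time, excluding client rendering.
    -- your_table is a placeholder for the 20,397-row table from the question.
    EXPLAIN ANALYZE SELECT * FROM your_table;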

No predicate warning

Posted: 24 Jun 2013 02:00 PM PDT

I am getting a "No predicate" warning on this query. Can someone help?

SELECT DISTINCT TOP 100
       T0.nsid, T0.id, T0.version, T0.name, T1.currency,
       T0.disbursmentamount, T0.effectivedate, T0.enddate,
       T0.contractname, T0.scheduledbillingdate, T0.dueamount,
       T0.paidamount, T0.cashpaidamount, T0.paidamount,
       T0.writeoffamount, T0.canceled, T0.creationdate,
       T0.creationtime, T0.mustbepaidat, T0.isrebill,
       T1.name, T0.status, T2.sendtoprint, T2.isprinted
FROM   (awfbillingstatementitem T0
        INNER JOIN awfbusinessaccount T1
                ON ((T0.attachedtoclientaccount_nsid = T1.nsid)
                    AND (T0.attachedtoclientaccount_id = T1.id)))
       LEFT OUTER JOIN awfbillingstatementitem T2
                    ON (((T2.nsid = T0.nsid)
                         AND (T2.id = T0.id))
                        AND (T2.h_iskilled = 0)
                        AND (T2.h_clid = 218169570)),
       awfcontract T3
       LEFT OUTER JOIN awfdescversionedbydate T5
                    ON (((T3.versionedpolicyowner_nsid = T5.nsid)
                         AND (T3.versionedpolicyowner_id = T5.id)
                         AND (-T3.versionedpolicyowner_version = T5.version))
                        AND (T5.h_iscurvers = 1)
                        AND (T5.h_iskilled = 0)
                        AND (T5.h_clid = 234821058))
       LEFT OUTER JOIN (awfpolicyownerversion T6
                        LEFT OUTER JOIN awfdescversionedbydate_versions T8
                                     ON (T8.nsid = T6.nsid
                                         AND T8.id = T6.id
                                         AND T8.version = -T6.version))
                    ON (T5.nsid = T8.lonsid
                        AND T5.id = T8.loid
                        AND T5.versions_version = T8.loversion
                        AND (T6.h_iscurvers = 1)
                        AND (T6.h_iskilled = 0)),
       awfperson T4
WHERE  ((0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND (0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND ((T0.canceled = 0))
        AND (0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND (0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND (((T0.contract_nsid = T3.nsid)
              AND (T0.contract_id = T3.id)))
        AND (0 = 0)
        AND (1 = 1)
        AND ((((T0.client_nsid = T4.nsid)
               AND (T0.client_id = T4.id)))
             OR (((T1.thirdparty_nsid = T4.nsid)
                  AND (T1.thirdparty_id = T4.id))))
        AND (0 = 0)
        AND ((((T3.subscriber_nsid = T4.nsid)
               AND (T3.subscriber_id = T4.id)))
             OR (((T3.policyowner_nsid = T4.nsid)
                  AND (T3.policyowner_id = T4.id)))
             OR (((T6.policyowner_nsid = T4.nsid)
                  AND (T6.policyowner_id = T4.id))))
        AND (0 = 0))
       AND (T3.h_iscurvers = 1)
       AND (T4.h_iscurvers = 1)
       AND (T0.h_iskilled = 0)
       AND (T3.h_iskilled = 0)
       AND (T4.h_iskilled = 0)
       AND ((T4.h_clid = 234817316)
            OR (T4.h_clid = 234820248)
            OR (T4.h_clid = 234816906)
            OR (T4.h_clid = 234817143)
            OR (T4.h_clid = 234821693)
            OR (T4.h_clid = 218169345))
ORDER  BY T0.creationdate DESC,
          T0.creationtime DESC,
          T0.effectivedate DESC

How can I write windowing query which sums a column to create discrete buckets?

Posted: 24 Jun 2013 05:50 PM PDT

I have a table which includes a column of decimal values, such as this:

id value size
-- ----- ----
 1   100  .02
 2    99  .38
 3    98  .13
 4    97  .35
 5    96  .15
 6    95  .57
 7    94  .25
 8    93  .15

What I need to accomplish is a little difficult to describe, so please bear with me. I am trying to create a bucket number from the size column that increments by 1 each time the running sum of the preceding rows reaches 1, with rows taken in descending order of value. The result would look something like this:

id value size bucket
-- ----- ---- ------
 1   100  .02      1
 2    99  .38      1
 3    98  .13      1
 4    97  .35      1
 5    96  .15      2
 6    95  .57      2
 7    94  .25      2
 8    93  .15      3

My naive first attempt was to keep a running SUM and then apply CEILING to that value; however, that doesn't handle the case where some records' sizes end up contributing to the totals of two separate buckets. The example below should clarify this:

id value size crude_sum crude_bucket distinct_sum bucket
-- ----- ---- --------- ------------ ------------ ------
 1   100  .02       .02            1          .02      1
 2    99  .38       .40            1          .40      1
 3    98  .13       .53            1          .53      1
 4    97  .35       .88            1          .88      1
 5    96  .15      1.03            2          .15      2
 6    95  .57      1.60            2          .72      2
 7    94  .25      1.85            2          .97      2
 8    93  .15      2.00            2          .15      3

As you can see, if I were to simply use CEILING on crude_sum, record #8 would be assigned to bucket 2. This is caused by the sizes of records #5 and #8 being split across two buckets. Instead, the ideal solution is to reset the sum each time it reaches 1, incrementing the bucket column and beginning a new SUM starting at the size value of the current record. Because row order matters for this operation, I've included the value column, which is intended to be sorted in descending order.

My initial attempts have involved making multiple passes over the data, once to perform the SUM operation, once more to CEILING that, etc. Here is an example of what I did to create the crude_sum column:

SELECT
    id,
    value,
    size,
    (SELECT TOP 1 SUM(size) FROM table t2 WHERE t2.value <= t1.value) AS crude_sum
FROM table t1

I then used that in an UPDATE operation to store the value in a table for further processing.
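One possible approach (a sketch, not part of the original question): walk the rows in value order with a recursive CTE, carrying the running sum forward and resetting it whenever adding the next row's size would reach 1. It assumes T-SQL and a table named t with the columns shown above.

    -- Number the rows, then recurse row by row, resetting the sum at 1.
    WITH ordered AS (
        SELECT id, value, size,
               ROW_NUMBER() OVER (ORDER BY value DESC) AS rn
        FROM t
    ),
    buckets AS (
        -- anchor: the first row starts bucket 1
        SELECT rn, id, value, size,
               size AS running_sum,
               1 AS bucket
        FROM ordered
        WHERE rn = 1
        UNION ALL
        -- recursive step: reset when the sum would reach 1, else accumulate
        SELECT o.rn, o.id, o.value, o.size,
               CASE WHEN b.running_sum + o.size >= 1
                    THEN o.size
                    ELSE b.running_sum + o.size END,
               CASE WHEN b.running_sum + o.size >= 1
                    THEN b.bucket + 1
                    ELSE b.bucket END
        FROM buckets b
        JOIN ordered o ON o.rn = b.rn + 1
    )
    SELECT id, value, size, running_sum, bucket
    FROM buckets
    ORDER BY rn
    OPTION (MAXRECURSION 0);

Against the sample data this yields bucket 2 starting at row 5 and bucket 3 at row 8, matching the distinct_sum column above; it is single-threaded row-by-row recursion, so test it for performance on large tables.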

How to profile a particularly slow or inefficient query

Posted: 24 Jun 2013 02:57 PM PDT

I'm currently writing queries for my database, but this is the first time I've dealt with large data sets of any sort, and I'm now finding that I need to improve the performance of my queries.

Scenario

I have a series of tables for purchases and customers as described below

users (user_id, forename, surname) 20k+ records

customers (customer_id, user_id, etc etc) 20k+ records

purchases (purchase_id, item_id, customer_id, price, timestamp, store_id) 150k+ records

deliveries (purchase_id, delivered_time etc) 150k+ records

store (store_id, name etc) <50 records

The query should retrieve, for every customer at a particular store whose most recent order was in the last 6 months: the customer's name, the date of their last order, when that order was delivered, and their total number of orders. Here is what I'm currently using.

SELECT `customers`.`customer_id`, `forename`, `surname`, `delivered_time`,
       MAX(`purchases`.`timestamp`) AS lastPurchase,
       COUNT(purchases.purchase_id) AS totalPurchases
FROM (`purchases`)
JOIN `deliveries` ON `purchases`.`purchase_id` = `deliveries`.`purchase_id`
JOIN `customers` ON `customers`.`customer_id` = `purchases`.`customer_id`
JOIN `users` ON `users`.`user_id` = `customers`.`user_id`
WHERE `store_id` = '1'
GROUP BY `customer_id`
HAVING lastPurchase >= '1372086055'

However, this is pretty slow and takes 1-2 seconds each time, and I can only assume this time would increase as the amount of data increases.

In an attempt to profile the query I have used EXPLAIN, which seems to indicate it is scanning around 20k records in the deliveries table. What should my next steps be to optimize this query? How can I reduce the execution time?
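A reasonable first step (a sketch; the existing indexes are unknown, so the index names and column choices below are assumptions): make sure the filter and join columns are indexed, then re-run EXPLAIN.

    -- Hypothetical composite indexes on the filter/join columns used above.
    ALTER TABLE purchases  ADD INDEX idx_store_cust_time (store_id, customer_id, `timestamp`);
    ALTER TABLE deliveries ADD INDEX idx_purchase_id (purchase_id);

With those in place, the EXPLAIN output should show far fewer examined rows for deliveries; if not, the plan itself will point at the next table to cover.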

How do I rebuild a single full text catalog (or why is the "Rebuild" full text catalog option disabled)

Posted: 24 Jun 2013 11:50 AM PDT

In SSMS I have a Full Text Catalog defined. If I select the "Full Text Catalogs" folder, there is a "Rebuild All" option and it is enabled. But if I select an individual catalog, the "Rebuild" option in its context menu is disabled.

Question: Why is it disabled, and how do I enable it?
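For what it's worth, the rebuild can also be issued in T-SQL regardless of the GUI state (a sketch; MyCatalog is a placeholder for the real catalog name):

    -- Rebuilds the named full-text catalog.
    ALTER FULLTEXT CATALOG MyCatalog REBUILD;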

TempInsertStateItemShort procedure on ASPState is taking more than 1 second to execute

Posted: 24 Jun 2013 09:31 AM PDT

I'm working on a .NET website that runs load-balanced across more than one server, so I need to keep my session state in SQL Server.

In the last 2 weeks I've noticed that the procedure TempInsertStateItemShort is taking more than 1 second to run.

Basically, this procedure inserts into tempdb.dbo.ASPStateTempSessions:

CREATE PROCEDURE dbo.TempInsertStateItemShort
    @id        tSessionId,
    @itemShort tSessionItemShort,
    @timeout   int
AS
    DECLARE @now AS datetime
    DECLARE @nowLocal AS datetime

    SET @now = GETUTCDATE()
    SET @nowLocal = GETDATE()

    INSERT [tempdb].dbo.ASPStateTempSessions
        (SessionId,
         SessionItemShort,
         Timeout,
         Expires,
         Locked,
         LockDate,
         LockDateLocal,
         LockCookie)
    VALUES
        (@id,
         @itemShort,
         @timeout,
         DATEADD(n, @timeout, @now),
         0,
         @now,
         @nowLocal,
         1)

    RETURN 0

The tempdb.dbo.ASPStateTempSessions table has 800k rows, and the PK index is about 98% fragmented.

Could the fragmentation be the reason the insert takes more than 1 second? Has anyone experienced this? How can I improve performance on this table?
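For reference, defragmenting the index would look something like this (a sketch, not from the original post; PK_ASPStateTempSessions is a placeholder for the actual PK index name, and rebuilding a busy session-state table has locking implications worth testing first):

    -- Rebuild the primary key index on the session table.
    ALTER INDEX PK_ASPStateTempSessions
        ON tempdb.dbo.ASPStateTempSessions REBUILD;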

Thanks in advance!!!!

How to enter multiple checkbox list items into database for eventual reporting [closed]

Posted: 24 Jun 2013 12:03 PM PDT

(I tried posting this on SO but it went unanswered. I'll just post these theory-type questions here from now on.)

I'm in the process of creating the first of a few CheckBoxLists for a manually created form submittal. I've been looking at multiple ways not only to create the CheckBoxList but also to enter its values into the database. The challenge I'm facing (mainly because I haven't encountered this scenario before and I'd like to learn it) is that I not only need to enter the items correctly in the database, but I will eventually need to produce a report or a printable form from these entries.

Let's say I have an order form for a specific type of barbecue grill that I need to send out to distribution centers across the nation. The distribution centers will pull the barbecues that are highlighted on the form.

Here's what the CheckBoxList for the distribution centers will look like:

All
Dallas
Miami
Los Angeles
Seattle
New York
Chicago
Phoenix
Montreal

If a specific city (or all of the cities) is checked, the distribution center will pull the barbecue grill for shipment.

The added part is that I want to:

  1. be able to create a grid view from this database for reporting to note which distribution center got orders for barbecues and

  2. be able to create reports to tell what distribution center sent out barbecue orders in a given month (among other reporting).

What I'm looking for is a theoretical approach to designing this form while keeping in mind the grid view and reports. Do I put the CheckBoxList data in one cell in a specific column? Do I create separate columns for each distribution center? Am I even on the right track here?

Note: At this point I'm not looking for coding help, I'm just looking for some theory/logic help so I can learn this the right way the first time.

::edit::

First off, please note that the database design I have now is fully in sandbox mode; nothing is finalized. Everything (again, this is in sandbox mode) is in one single table. Also, I'm in no way looking for coding help; I'm looking for the right theoretical/logical approach to this puzzle so I can go in and play with the coding myself. I learn better that way. If I need coding help, I'll come back here for further assistance.

Here's what I'm working with right now.

In my aspx page I have a checkboxlist programmed with all the distribution centers entered as a listitem as well as an option for 'ALL' (of the distribution centers).

I also created a dedicated column in this table that holds all the information in the list items, and programmed a SqlDataConnection to this table to experiment with leveraging the database for this purpose.

When it comes to writing the selections to the database, I originally created a column for each destination city, including the 'All' option. I toyed with putting all the selections into one single column, but from what I've been reading today about database normalization, the former option seems better than the latter. Is this correct practice for situations such as this, especially if I need to think about reporting?
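One conventional normalized layout (a sketch; all table and column names are assumptions, not from the original post): keep distribution centers in their own table and record each checked city as a row in a junction table, rather than one column per city or a delimited list. Reports then become simple joins and GROUP BYs.

    -- Hypothetical schema: one row per order/center combination.
    CREATE TABLE distribution_center (
        center_id INT PRIMARY KEY,
        city      VARCHAR(50) NOT NULL
    );

    CREATE TABLE order_form (
        order_id   INT PRIMARY KEY,
        grill_type VARCHAR(50) NOT NULL,
        created_at DATETIME NOT NULL
    );

    -- 'All' is represented by inserting one row per center, not a special column.
    CREATE TABLE order_form_center (
        order_id  INT NOT NULL REFERENCES order_form (order_id),
        center_id INT NOT NULL REFERENCES distribution_center (center_id),
        PRIMARY KEY (order_id, center_id)
    );

    -- Example report: orders sent out per center in a given month.
    SELECT dc.city, COUNT(*) AS orders_sent
    FROM order_form_center ofc
    JOIN distribution_center dc ON dc.center_id = ofc.center_id
    JOIN order_form o ON o.order_id = ofc.order_id
    WHERE o.created_at >= '2013-06-01' AND o.created_at < '2013-07-01'
    GROUP BY dc.city;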

export (mysqldump) databases from MySQL version 4.0?

Posted: 24 Jun 2013 09:27 AM PDT

Goal: export the schema and data from a MySQL version 4.0.16 DB.

Background:

I was asked to migrate a database from MySQL version 4.0.16. The database was installed on a Windows XP machine, and I just could not find the mysqldump tool on that machine.

I then used PuTTY to get onto a Linux machine that has mysqldump. I ran the command, but got an error:

-bash-4.1$ mysqldump --databases db_2_dump --host='myhost' --port='6012' -u <user> -p > db-export.sql
Enter password:
mysqldump: Error: 'You have an error in your SQL syntax.  Check the manual that corresponds
to your MySQL server version for the right syntax to use near
'SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILE' when trying to dump tablespaces

I looked it up in the MySQL reference manual: INFORMATION_SCHEMA is new in version 5.0 and above.

This means the mysqldump on the Linux box is incompatible with the production MySQL server, which is on 4.0.16.

I tried to download the MySQL 4.0 server and install it on a Windows machine to get hold of a compatible version, but the MySQL website offers nothing older than 5.0.

(I also tried MySQL Workbench to connect to this DB. It failed with "Unknown character set utf8".)

So how can I export the schema and data from this legacy MySQL DB?
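One thing worth trying (an assumption on my part, not from the original post): newer mysqldump clients have a --no-tablespaces option that suppresses the tablespace query that is failing here. If the rest of the dump logic is compatible with the old server, something like this may get the schema and data out:

    # --no-tablespaces skips the dump step that needs INFORMATION_SCHEMA
    mysqldump --no-tablespaces --databases db_2_dump \
              --host='myhost' --port='6012' -u <user> -p > db-export.sql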

Can Someone Decrypt this? [closed]

Posted: 24 Jun 2013 09:11 AM PDT

config = {}

config["db_host"]   = "192.168.1.1"
config["db_user"]   = "ref"
config["db_passwd"] = "jVNEVfArZAzQqv5j"
config["db_name"]   = "brandb"
config["pc_name"]   = "ref11"

It's from a config.py file. Thank you very much.

Also, please explain how you got the db_passwd.

Total of select by all current session and by date

Posted: 24 Jun 2013 08:07 AM PDT

In MySQL :

This query gives us the total number of SELECT statements run on the server (query 1):

mysql> SELECT VARIABLE_VALUE FROM information_schema.global_status WHERE variable_name like 'com_select';  

and this query gives us the number of SELECT statements for the current session:

mysql> show status like 'Com_select';  

or

mysql> SHOW SESSION STATUS LIKE 'Com_select';  

How can I get the number of SELECT statements for all current sessions (as a total), and something like the first query (query 1) but broken down by date? Thanks.
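Status counters are not timestamped, so a per-date breakdown has to be built by sampling. A sketch (it assumes MySQL 5.1+ with the event scheduler enabled; all names are placeholders):

    -- Snapshot the global counter periodically; per-day deltas can then be computed.
    CREATE TABLE select_samples (
        sampled_at DATETIME NOT NULL,
        com_select BIGINT   NOT NULL
    );

    CREATE EVENT sample_com_select
        ON SCHEDULE EVERY 1 HOUR
        DO INSERT INTO select_samples
           SELECT NOW(), VARIABLE_VALUE
           FROM information_schema.GLOBAL_STATUS
           WHERE VARIABLE_NAME = 'COM_SELECT';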

Select data from a table with each row repeated a custom number of times [closed]

Posted: 24 Jun 2013 09:00 AM PDT

I have table data like

data1
data2
data3
data4

and I need to select these rows like this:

data1
data1
data1
data1
data2
data2
data2
data3
data3

How can I do this in SQL Server 2005?
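A common pattern for this (a sketch; it assumes the source table is t(val) and that each row carries a repeat_count column, which is hypothetical since the question doesn't say where the counts come from): join against a numbers sequence.

    -- Build a small number sequence with a recursive CTE, then join:
    -- each source row matches once per number up to its repeat count.
    WITH numbers AS (
        SELECT 1 AS n
        UNION ALL
        SELECT n + 1 FROM numbers WHERE n < 100
    )
    SELECT t.val
    FROM t
    JOIN numbers nm ON nm.n <= t.repeat_count  -- repeat_count is assumed
    ORDER BY t.val
    OPTION (MAXRECURSION 100);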

How can I send HTTP requests from SQL Server 2012 to a non-.Net application? [duplicate]

Posted: 24 Jun 2013 08:33 AM PDT

This question is an exact duplicate of:

I know only basic concepts of SQL Server 2012 (DDL and DML queries). I explain the workflows of both my application and the database below.

Workflow of the application:

I built a web application in Java. Whenever it receives an HTTP request, it fetches data from the database, does some computations on the data, and stores the computed data in another database.

Before explaining the database workflow, I will explain what a database modification is from my point of view: any operation such as creating new tables, modifying data, or deleting data in tables; all of these are effects on the database.

Workflow of the database:

I have SQL Server 2012. Whenever the database is modified, it should send an HTTP request to the application, which runs on an Apache Tomcat server. Once the application receives the HTTP request, it continues that individual workflow.

To be clear, we are not sending HTTP requests from a browser (end-users can't access this application); we are sending HTTP requests from SQL Server.

I hope you understand what my question is.

I read some tutorials on this, but I didn't get anywhere.

Can anyone suggest the correct way of doing this?
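One frequently cited approach (a sketch, not an endorsement: OLE Automation must be explicitly enabled, calling HTTP from inside triggers stalls transactions, and the URL is a placeholder): use the OLE Automation stored procedures to issue the request.

    -- Requires: sp_configure 'Ole Automation Procedures', 1; RECONFIGURE;
    -- Error handling omitted; http://tomcat-host:8080/notify is a placeholder URL.
    DECLARE @http INT;
    EXEC sp_OACreate 'MSXML2.ServerXMLHTTP', @http OUT;
    EXEC sp_OAMethod @http, 'open', NULL, 'GET', 'http://tomcat-host:8080/notify', 'false';
    EXEC sp_OAMethod @http, 'send';
    EXEC sp_OADestroy @http;

A SQL CLR procedure making the HTTP call is the usual more robust alternative; either way, doing the call asynchronously (e.g. from a queue table polled by the application) avoids blocking the modifying transaction.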

How to troubleshoot Tungsten replication in MySQL? [closed]

Posted: 24 Jun 2013 11:18 AM PDT

We have set up master-to-slave replication in MySQL using Tungsten Replicator, and the setup works. But I don't know how to troubleshoot replication issues.

I want to learn about troubleshooting Tungsten replication problems. How can I learn this? Could anybody please point me in the right direction? I have googled a lot but didn't find any proper guides. Please help.

SQL Server Agent Jobs and Availability Groups

Posted: 24 Jun 2013 11:42 AM PDT

I'm looking for best practices for dealing with scheduled SQL Server Agent jobs in SQL Server 2012 availability groups. Maybe I missed something; however, at present I feel that SQL Server Agent is not really integrated with this great SQL Server 2012 feature.

How can I make a scheduled SQL Agent job aware of a node switch? For example, I have a job running on the primary node which loads data each hour. Now if the primary goes down, how can I activate the job on the secondary, which now becomes the primary?

If I always schedule the job on the secondary as well, it fails, because the secondary is read-only.
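A common workaround (a sketch; the database name is a placeholder): create the job on every replica, and make its first step exit quietly unless the local replica currently holds the primary role for the database.

    -- Exit the job batch unless this instance is primary for the database.
    IF NOT EXISTS (
        SELECT 1
        FROM sys.dm_hadr_availability_replica_states ars
        JOIN sys.availability_databases_cluster adc
          ON ars.group_id = adc.group_id
        WHERE ars.is_local = 1
          AND ars.role_desc = 'PRIMARY'
          AND adc.database_name = 'MyDatabase'   -- placeholder
    )
        RETURN;

    -- ...actual hourly load logic follows here...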

copy only tables with data from one database to another database [closed]

Posted: 24 Jun 2013 09:43 AM PDT

I have two databases: dbOne (version 10.50.1600, located on the office server) and dbTwo (version 10.0.1600, located on my local server).
I want to copy dbOne's tables, with data, to dbTwo.
Is there any way or script to do it? I don't want to upgrade my local server version!
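Because the target is an older version, a backup/restore won't go downlevel. Two sketches (all names are placeholders): the SSMS "Generate Scripts" wizard with "Schema and data" selected, or, if the servers are linked, per-table copies such as:

    -- Copies the data (and a loose structure, without constraints or indexes)
    -- across a linked server. OfficeServer and TableName are placeholders.
    SELECT *
    INTO dbTwo.dbo.TableName
    FROM [OfficeServer].dbOne.dbo.TableName;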

How to determine that the log chain is broken

Posted: 24 Jun 2013 09:03 AM PDT

SQL Server 2005.

I have a DB with the full recovery model. If I execute on my database

ALTER DATABASE test SET RECOVERY SIMPLE
ALTER DATABASE test SET RECOVERY FULL

or

BACKUP LOG test WITH TRUNCATE_ONLY

then if I try to run a log backup

BACKUP LOG test TO DISK = 'backupfile.bak'

I receive this error message:

BACKUP LOG cannot be performed because there is no current database backup.

Question:
How can I detect that the log chain is broken and a log backup is not possible, without actually running BACKUP LOG?
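One way (a sketch; I believe this works on SQL Server 2005): check last_log_backup_lsn in sys.database_recovery_status. When it is NULL, there is no log-backup chain to continue, so BACKUP LOG will fail until a new full (or differential) backup establishes one.

    -- NULL last_log_backup_lsn => no current log chain for the database.
    SELECT DB_NAME(database_id) AS database_name,
           last_log_backup_lsn
    FROM sys.database_recovery_status
    WHERE database_id = DB_ID('test');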

Full text search on multiple tables in PostgreSQL

Posted: 24 Jun 2013 12:52 PM PDT

I have several tables that have related data. I want to have a search input box on the dashboard of the system that will search for anything (client, invoice, delivery note, ...).

Is it possible to have an FTS implementation in PostgreSQL that will search for the term in more than one table and return the combined results?

For example, if the user typed 45, the query should look in the clients table and return clients whose id is 45 or whose name contains 45, look in the invoices table and return invoices with number 45, and also look in the delivery notes table and return delivery notes with number 45.

Or is there a better way of solving this problem without using full text search?
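One pattern (a sketch; the table and column names are guesses): expose a single searchable rowset with a UNION ALL view and run the text-search predicate against it. A plain view keeps it simple; a materialized table with a GIN index on the tsvector would be the faster variant.

    -- Hypothetical unified search surface over three tables.
    CREATE VIEW search_items AS
    SELECT 'client' AS item_type, id, name::text AS content FROM clients
    UNION ALL
    SELECT 'invoice', id, number::text FROM invoices
    UNION ALL
    SELECT 'delivery_note', id, number::text FROM delivery_notes;

    SELECT item_type, id
    FROM search_items
    WHERE to_tsvector('simple', content) @@ plainto_tsquery('simple', '45');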

WITH ROLLUP WHERE NULL = x

Posted: 24 Jun 2013 12:37 PM PDT

I tried to find this question elsewhere on here and found a couple that were similar, but all the answers seemed pretty convoluted, which surprised me because I imagined SQL would have a much easier way to do this.

Essentially, I'm using a GROUP BY to group two values on top of each other. The code is simple and works well; I get these results using WITH ROLLUP at the end:

Type    Person     Count
Type A  Person 1   10
Type A  Person 2   91
Type A  Person 3   13
Type A  Person 4   10
Type A  Person 5   2
Type A  NULL       126
Type B  Person 6   16
Type B  Person 7   63
Type B  Person 8   15
Type B  Person 9   22
Type B  Person 10  16
Type B  Person 11  3
Type B  Person 12  20
Type B  Person 13  44
Type B  NULL       198
NULL    NULL       360

Is there an easy way to say: if NULL appears in the "Person" column, display "Subtotal" instead, and if NULL appears in the "Type" column, display "Grand Total"?

And if that's weirdly complicated, I'd be totally fine with just calling all of them "Total."
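If the DBMS supports the GROUPING() function (SQL Server does; MySQL only from 8.0), the usual trick is a sketch like this, since testing the column itself for NULL can't distinguish rollup rows from genuinely NULL data (t and COUNT(*) stand in for the real table and aggregate):

    -- GROUPING(col) = 1 marks rows generated by ROLLUP for that column.
    SELECT
        CASE WHEN GROUPING(Type) = 1 THEN 'Grand Total' ELSE Type END AS Type,
        CASE WHEN GROUPING(Type) = 1 THEN ''
             WHEN GROUPING(Person) = 1 THEN 'Subtotal'
             ELSE Person END AS Person,
        COUNT(*) AS [Count]
    FROM t
    GROUP BY Type, Person WITH ROLLUP;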

Thanks!

Why would command text not execute at all?

Posted: 24 Jun 2013 05:37 PM PDT

In my report I have:

<CommandText>
    SELECT
        column1,
        column2,
        'poop'
    FROM mytable
</CommandText>

I am using SQL Server Profiler to see exactly what statement is being sent.

I have set only two filters:

  1. databaseName
  2. [screenshot of the second filter]

Yet after running the report, no statement gets intercepted.

I suspect that, because I am a beginner at SSRS, I am missing something crucial here.

For what reason would the CommandText not be executed at all?

I did follow this question to make sure that I am using SQL Profiler correctly, and indeed I am: http://stackoverflow.com/questions/9107383/sql-server-profiler-capture-calls-to-your-databases-stored-procs-during-ssrs

Another bit of important information: although the chart shows no data:

[screenshot of the empty chart]

I actually do get data when I run the CommandText from SSMS!

Avoiding performance hit from GROUP BY during FULLTEXT search?

Posted: 24 Jun 2013 09:37 AM PDT

Is there any clever way to avoid the performance hit from using GROUP BY during a fulltext search?

SELECT p.topic_id, MIN(p.post_id)
FROM forum_posts AS p
WHERE MATCH (p.post_text) AGAINST ('baby shoes' IN BOOLEAN MODE)
GROUP BY p.topic_id
LIMIT 20;

In this example it's fetching the lowest post_id for each unique topic_id that matches the text.

With the GROUP BY to find the MIN, it takes 600 ms on a million-row database, with about 50K rows examined.

If I remove the MIN but leave the GROUP BY, it's just as slow, so the GROUP BY is the hit.

I suspect this is because it can only use one index: the fulltext one?

key: post_text | Using where; Using temporary; Using filesort

Query_time: 0.584685  Lock_time: 0.000137  Rows_sent: 20  Rows_examined: 57751
Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0

Without the GROUP BY it's 1 ms, so this has to be filesort speed?

(I've removed ORDER BY and everything else to isolate where the hit is)

Thanks for any insight and ideas.

(Using MyISAM under MariaDB, if it matters.)

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 24 Jun 2013 11:37 AM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of comparisons of non-high-IOPS RDS vs EC2, but nothing with the new high-IOPS feature in RDS.

How to troubleshoot SQL Server Merge Replication Conflict?

Posted: 24 Jun 2013 08:38 AM PDT

I have SQL Server 2008 R2 Merge Replication setup.

I have 2 publications for the same database with 20+ subscribers.

The logic of our application prevents the same data from being changed by 2 subscribers at the same time.

Lately I started getting conflicts for some specific tables, and almost all the time the contents of the 2 conflicting rows (the publisher row and the subscriber row) are identical, with no differences at all!

My question is: what are the standard steps to troubleshoot conflicts and trace down to the point that is causing them?

Open source and commercial tools to build dynamic queries of fact tables in star schema

Posted: 24 Jun 2013 03:08 PM PDT

What user interfaces or tools (commercial and/or open source) can query a star schema model (Ralph Kimball dimensional model)? For example, if we have X fact tables, Y dimension tables, and Z cross-reference tables, we'd like to query an arbitrary fact table. If you are aware of such tools, please provide a reference for each.

sp_startpublication_snapshot Parameter(s)

Posted: 24 Jun 2013 06:37 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf a new name (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help doing) Generating the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall: on this DB server I have a 'shell' DB that users run the SP from; it has a history table so I can keep track of who created/deleted databases using my SPs. The only parameter for sp_startpublication_snapshot is @publication. I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e.: the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog]

Any ideas about how to accomplish this?
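One sketch that should work from another database: wrap the call in dynamic SQL so it executes in the publisher database's context (database and publication names taken from above):

    -- Switch context inside dynamic SQL, then start the snapshot agent.
    EXEC ('USE [WC48_Database1];
           EXEC sp_startpublication_snapshot
                @publication = N''upb_Inspection_PrimaryArticles'';');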

Thank you, Wes

Passing path from batch file to mysql [closed]

Posted: 24 Jun 2013 08:36 AM PDT

My batch file will contain something like this:

set MUSER=root
set MPASS=xprd123
set MHOST=172.33.49.55
set DB=DBName1
set MBIN=C:\Program Files\MySQL\MySQL Server 5.5\bin
set scriptpath=D:\STATS_COLLECTOR

"%MBIN%\mysql.exe" -u"%MUSER%" -p"%MPASS%" -h "%MHOST%" "%DB%" -e "set @2:='%scriptpath%'; source DBQuery1.sql;"

The content of DBQuery1.sql will be:

SELECT 'INNODB_OS_LOG_WRITTEN', 'QUESTIONS'
UNION
SELECT
    MAX(IF(variable_name = 'INNODB_OS_LOG_WRITTEN', VARIABLE_VALUE, NULL)) AS INNODB_OS_LOG_WRITTEN,
    MAX(IF(variable_name = 'QUESTIONS', VARIABLE_VALUE, NULL)) AS QUESTIONS
FROM information_schema.GLOBAL_STATUS
INTO OUTFILE '@2/stats.csv'
    FIELDS TERMINATED BY ','
    ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

I am not sure this is the right way to pass a path to the SELECT query inside DBQuery1.sql.

This code is not working.

I looked at other posts on passing paths to batch files, but whatever I tried, I could not solve the issue.

When I execute the batch file, I am not sure whether the path is getting passed into the SQL file, and if it is, why @2 is not receiving it.

Could anybody kindly help with this?
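Part of the problem, as far as I can tell: inside a string literal, @2 is never expanded, so INTO OUTFILE '@2/stats.csv' is taken literally. A sketch of a workaround (assuming the variable does arrive from the batch file): build the statement with CONCAT() and run it as a prepared statement.

    -- OUTFILE requires a literal path, so assemble the SQL text first.
    SET @sql = CONCAT(
        "SELECT 'INNODB_OS_LOG_WRITTEN','QUESTIONS' UNION ",
        "SELECT MAX(IF(variable_name='INNODB_OS_LOG_WRITTEN',VARIABLE_VALUE,NULL)), ",
        "       MAX(IF(variable_name='QUESTIONS',VARIABLE_VALUE,NULL)) ",
        "FROM information_schema.GLOBAL_STATUS ",
        "INTO OUTFILE '", @2, "/stats.csv' ",
        "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\\n'");
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;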

SQL Server 2012 Standard Edition - multiple instances and memory utilization

Posted: 24 Jun 2013 09:43 AM PDT

If we have multiple instances of SQL Server 2012 Standard Edition (which has a 64 GB memory limit) on one server with 192 GB of RAM, are both instances confined to the same 64 GB of memory, or can each address a different portion, so that each gets its own 64 GB "chunk"?

This is in consideration for an Active/Active cluster if both nodes fail over to a single node.
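For what it's worth, the Standard Edition cap applies per instance (each instance manages its own buffer pool), so two instances can each use their own 64 GB. A sketch of capping each instance explicitly, which is the usual way to keep them from competing after a failover:

    -- Run against each instance; 65536 MB = 64 GB is an example value.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 65536;
    RECONFIGURE;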

Flashback feature in Oracle 11g?

Posted: 24 Jun 2013 11:17 AM PDT

I am using Oracle 11gR1. As part of a project, I have to execute multiple update statements after disabling a few constraints (triggers and check constraints), which could leave the database in a corrupted state.

If the database gets corrupted, I have to re-import it entirely, which is a time-consuming additional task.

I have very little knowledge of the Oracle Flashback feature. Will Flashback help me avoid these unnecessary imports? Can I set some stable point and switch back to it if anything goes wrong?

Can someone please give me a brief overview of this feature?

If it fits my requirements, I will learn it properly.
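The matching feature here is a guaranteed restore point combined with Flashback Database (a sketch; it needs SYSDBA privileges, a configured flash recovery area, and the restore point name is a placeholder; the SHUTDOWN/STARTUP lines are SQL*Plus commands):

    -- Set a known-good point before the risky updates.
    CREATE RESTORE POINT before_bulk_update GUARANTEE FLASHBACK DATABASE;

    -- ...run the updates with constraints disabled...

    -- If the data ends up corrupted, rewind instead of re-importing:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_bulk_update;
    ALTER DATABASE OPEN RESETLOGS;

For smaller mistakes, Flashback Table or Flashback Query can undo changes without bouncing the instance.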

PostgreSQL 9.1 Hot Backup Error: the database system is starting up

Posted: 24 Jun 2013 08:24 PM PDT

I have been working on a hot backup for Postgres 9.1 for a while and have run into a consistent issue. After restarting Postgres on the slave server, the pgstartup log file and the daily log file under the pg_log directory show no errors. However, when I try to connect to the database using the psql command, I get the error:

  FATAL: the database system is starting up.  

The recovery.conf file also never changes to recovery.done. I have researched this error extensively and consistently find the same answer: the database was not cleanly shut down before I tried to restart Postgres. The only ways I have restarted Postgres are the service postgresql-9.1 restart and /etc/init.d/postgresql-9.1 restart commands. After I receive this error, I kill all processes and try to restart the database again, and still receive the same error. I am at a loss as to where to go from here and how to fix this issue. Below is the exact process I followed for the hot backup.

Master Server Configurations:

pg_hba.conf, added the line:

  host replication postgres IPAddressOfSlaveServer trust  

postgresql.conf:

  wal_level = hot_standby
  max_wal_senders = 5
  wal_keep_segments = 32
  listen_addresses = '*'
  port = 5432

Slave Server Configurations:

postgresql.conf:

  hot_standby = on  

recovery.conf:

  standby_mode = on
  primary_conninfo = 'host=IPAddressOfMasterServer port=5432 user=postgres'
  restore_command = 'cp /var/lib/pgsql/9.1/data/pg_xlog/%f "%p"'

After configuring both servers

I change to the postgres user on the master server and run the commands:

  psql -c "Select pg_start_backup('label',true);";  rsync -a -v -e ssh /var/lib/pgsql/9.1/data slave:/var/lib/pgsql/9.1/data \          --exclude postmaster.pid  pgsql -c "select pg_stop_backup();";  

After syncing the database to the slave server

I restart the slave server and the startup does not fail. The pgstartup.log reads:

  Success. You can now start the database server using:
      /usr/pgsql-9.1/bin/postgres -D /var/lib/pgsql/9.1/data
  or
      /usr/pgsql-9.1/bin/pg_ctl -D /var/lib/pgsql/9.1/data -l logfile start

The current day's log file, postgresql-Thu.log, reads:

  Log: shutting down
  Log: database system is shut down
  Log: database system was shut down in recovery at 2012-4-10
  Log: entering standby mode
  Log: restored log file "logFileName" from archive
  Log: consistent recovery state reached at 0/BF0000B0
  Log: redo starts at 0/BF000020
  Log: restored log file "logFileName" from archive
  Log: unexpected pageaddr 0/85000000 in log file 0, segment 192, offset 0
  Log: unexpected pageaddr 0/85000000 in log file 0, segment 192, offset 0
  Log: streaming replication successfully connected to primary

I researched "unexpected pageaddr", and from the postgres mailing list archives my understanding is that it is quite normal: it is one of the expected ways to detect end-of-WAL.

Any advice would be greatly appreciated.
