Friday, September 20, 2013

[how to] postgresql replication - pg_stat_replication is showing empty columns

postgresql replication - pg_stat_replication is showing empty columns

Posted: 20 Sep 2013 08:16 PM PDT

I have a PostgreSQL 9.2 streaming replication setup. It appears that the slave is getting the updates from the master and is in sync. I've verified it by looking at the pg_xlog directory and the process list.

  $ ps aux | grep 'postgres.*rec'
  postgres 26349  2.3 42.9 38814656 18604176 ?   Ss   Sep20  24:06 postgres: startup process   recovering 000000010000026E00000073
  postgres 26372  4.9  0.1 38959108 78880 ?      Ss   Sep20  51:27 postgres: wal receiver process   streaming 26E/731E05F0

And the startup logs on the slave also look alright.

  2013-09-21 03:02:38 UTC LOG:  database system was shut down in recovery at 2013-09-21 03:02:32 UTC
  2013-09-21 03:02:38 UTC LOG:  incomplete startup packet
  2013-09-21 03:02:38 UTC FATAL:  the database system is starting up
  2013-09-21 03:02:38 UTC LOG:  entering standby mode
  2013-09-21 03:02:38 UTC LOG:  redo starts at 26E/71723BB8
  2013-09-21 03:02:39 UTC FATAL:  the database system is starting up
  2013-09-21 03:02:39 UTC LOG:  consistent recovery state reached at 26E/75059C90
  2013-09-21 03:02:39 UTC LOG:  invalid xlog switch record at 26E/75059E98
  2013-09-21 03:02:39 UTC LOG:  database system is ready to accept read only connections
  2013-09-21 03:02:39 UTC LOG:  streaming replication successfully connected to primary

What worries me is the pg_stat_replication view on the master. It shows that there is a client connected, but doesn't show anything more.

  archive=> select * from pg_stat_replication;
   pid | usesysid |  usename   | application_name | client_addr | client_hostname | client_port | backend_start | state | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
  -----+----------+------------+------------------+-------------+-----------------+-------------+---------------+-------+---------------+----------------+----------------+-----------------+---------------+------------
   999 |    16384 | replicator | walreceiver      |             |                 |             |               |       |               |                |                |                 |               |
  (1 row)

Is this the expected behavior? I remember seeing values for client_addr, sent_location, replay_location etc. when I did a test run some time back. Is there anything that I'm missing?
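Editor's note, offered as a hedged guess rather than a confirmed diagnosis: pg_stat_replication blanks out the detail columns for non-superusers, and the archive=> prompt suggests a non-superuser session. A quick check:

  -- Compare the output as a superuser with the output as the current role;
  -- if a superuser session sees the missing columns, this is a privilege issue.
  SELECT current_user, usesuper FROM pg_user WHERE usename = current_user;
  SELECT * FROM pg_stat_replication;  -- run this once from a superuser session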

How to allow SqlAgent Job to access all databases (e.g. sys.databases)

Posted: 20 Sep 2013 05:09 PM PDT

After creating a SQL Agent Job using SSMS with a single step that starts by calling:

select * from Master.sys.databases;  

Instead of the job listing all the DBs, it only lists 3:

  • The current one (as was specified in the Agent)
  • tempdb
  • master

From within the job it reports:

  original_login() = Domain\MACHINE$  -- assuming this is the agent account
  suser_sname() = Domain\DomainSqlAdminAccountName

Each of these exhibits the same behavior from within the job:

  SELECT * FROM master..sysdatabases;
  EXEC sp_databases;
  EXEC sp_helpdb;
  EXEC sp_msForEachDB 'PRINT ''?'''

The account 'Domain\DomainSqlAdminAccountName' has the server role 'sysadmin' and has a user associated with it in most user databases and master, but not model, msdb, or tempdb.

The SQL Agent account also has the server role 'sysadmin' but does not have a user mapped in any database.

The job's "Owner" is 'Domain\DomainSqlAdminAccountName'; I also tried the Agent account and my own account.

From the SSMS dialog "Edit Step" -> "General tab", "Run as" is blank (it offers no choices). But from the Advanced tab I am able to specify the 'Domain\DomainSqlAdminAccountName' account.

Side experiment that seems related (sketched as T-SQL after the list):

  1. Logged in as sysadmin,
  2. use SomeUserDatabase
  3. execute as login = 'Domain\DomainSqlAdminAccountName', can see all.
  4. revert;
  5. Then execute as user = 'Domain\DomainSqlAdminAccountName', can only see 3.
  6. revert;
  7. can see all
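A minimal T-SQL transcription of that experiment (an editorial sketch; names are as given above):

  USE SomeUserDatabase;
  GO
  EXECUTE AS LOGIN = 'Domain\DomainSqlAdminAccountName';
  SELECT name FROM sys.databases;   -- all databases visible
  REVERT;
  GO
  EXECUTE AS USER = 'Domain\DomainSqlAdminAccountName';
  SELECT name FROM sys.databases;   -- only master, tempdb and the current DB
  REVERT;

The difference is consistent with EXECUTE AS USER being sandboxed to database scope, so server-level metadata visibility (VIEW ANY DATABASE) is lost under user impersonation.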

For 'SomeUserDatabase' the account has the role memberships db_owner, db_securityadmin, db_ddladmin, db_datawriter, db_datareader, and db_backupoperator (and the associated login is sysadmin).

How does one allow access to all databases via a SQL Agent Job? In the end this procedure will actually be calling into each of the (~30) DBs.

Random Problem with Variable

Posted: 20 Sep 2013 06:43 PM PDT

I managed to fix my old error, so I'm going to update this post with the new error. Basically, I have to manually add an entry in phpMyAdmin with my user's ID in order for it to update using this code, because I set it to update whichever row has the same id as the user's.

Any idea how I could do it differently so there is some sort of default created automatically? I tried doing it at registration and that works out well, but I am going to have up to 10 different subjects, so how would I do this in a more efficient way?

  <?php
  include 'connect.php';
  include 'main.php';
  $id = $_SESSION['id'];
  $result3 = mysql_query("SELECT * FROM html WHERE id='$id'") or die("MYSQL Query Failed : " . mysql_error());
  while ($row3 = mysql_fetch_array($result3)) {
      $lastpage = $row3['lastpage'];
  }
  $page = "page1.php";
  $sql = "UPDATE html SET id='$id', lastpage='$page' WHERE id='$id'";
  mysql_query($sql) or die("MYSQL Query Failed : " . mysql_error());
  echo $lastpage;
  ?>

Another thing I'm trying to figure out: can I check the value stored under 'lastpage' like I did, and if it is page2 or above, not update the values at all? Basically, it should only update the values the first time you visit the page on your account. Get it?
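Editor's sketch, not from the original post: MySQL's INSERT ... ON DUPLICATE KEY UPDATE can cover both asks at once, assuming id is the primary key of html and page names sort lexically (page1.php < page2.php; this breaks at page10):

  -- Create the row on first visit; once lastpage has moved past page1, leave it alone.
  INSERT INTO html (id, lastpage)
  VALUES (123, 'page1.php')              -- 123 is a placeholder for $id
  ON DUPLICATE KEY UPDATE
      lastpage = IF(lastpage > 'page1.php', lastpage, VALUES(lastpage));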

Anyone got any ideas?!

Help with Cybernetics SAN Configuration

Posted: 20 Sep 2013 05:37 PM PDT

There are tons of articles out there recommending RAID/LUN/drive configurations for SQL Server, but most leave me still questioning, as they usually state "it depends". For this reason I'd like to provide some specifics and get some more direct recommendations.

I have a client who has already installed SQL Server on a Cybernetics SAN; here are the specifics:

• SQL Server 2008 R2 installed on a virtual server

• Storage:

  o Cybernetics miSAN-D8/T16 SAN iSCSI storage appliance
  o 4+1 GbE ports
  o 8 GB SAN controller cache
  o USB and eSATA ports
  o Two miniSAS devices
  o 8 drive bays, all filled with 600 GB SATA-II 7.2K drives (5.4 TB total)
  o All 8 drives set up in RAID 6
  o Three logical drives:
      1. C (156 GB) – for system and apps (IIS, web app, SQL, SSIS, CA ArcServe backup app, VIPRE antivirus)
      2. E (1.64 TB) – for data, logs and tempdb
      3. F (1.80 TB) – for backups

The SQL Server is more read intensive, with data bulk loaded nightly like a data warehouse and the web app used for looking up and reporting on the data.

Periodically the client experiences sluggishness with the web application, sometimes severe, for short periods. I've also noticed Brent Ozar's Blitz script reveals slow writes to the E drive at times.

I am wanting to suggest the client reconfigure their SAN as follows:

  1. C (156 GB, RAID 1) – for system and app files.
  2. D (1200 GB, RAID 10/5) – for SQL data files.
  3. L (444 GB, RAID 1) – for log files.
  4. T (1200 GB, RAID 5) – for tempdb data files.
  5. Back up to a different SAN or network location and archive to tape.

But at the very least I believe the client should do the following:

  1. Move backups to a different SAN or network location, in order to free up the F drive for SQL use as well as for better DR.
  2. Move log files and tempdb data files to the F drive.

Suggestions?

Should snapshot agent continue to run in transactional replication?

Posted: 20 Sep 2013 01:46 PM PDT

We have transactional replication running for a large number of publications on a SQL Server 2008 R2 two-node active/active cluster. I have noticed that the snapshot agent job runs hourly, and it looks like it does a refresh of the publications (literally, a new snapshot?). During this time we experience locking issues, with the blocker being this job. Should it be doing a refresh this often if no new articles are being added or changed?
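A hedged way to see what is scheduled, using the standard msdb job tables ('REPL-Snapshot' is the stock category name SQL Server assigns to snapshot agent jobs):

  SELECT j.name AS job_name, s.name AS schedule_name, s.enabled
  FROM msdb.dbo.sysjobs j
  JOIN msdb.dbo.syscategories c ON c.category_id = j.category_id
  LEFT JOIN msdb.dbo.sysjobschedules js ON js.job_id = j.job_id
  LEFT JOIN msdb.dbo.sysschedules s ON s.schedule_id = js.schedule_id
  WHERE c.name = 'REPL-Snapshot';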

Error: Unable to write inside TEMP environment variable path

Posted: 20 Sep 2013 02:09 PM PDT

I am installing PostgreSQL 9.2.4-1-windows.exe on Windows 7 64 with Service Pack 1.

I get the following, rather famous it seems, error:

There has been an error.

Unable to write inside TEMP environment variable path.

This occurs immediately upon launching. I see the postgresql splash, then this error occurs.

I have tried everything I have found on the web so far, including:

  1. Disabled all anti-virus software. I have also stopped everything in the startup.
  2. Checked file associations for .VBS scripts. Nothing irregular (Windows Script Host registered)
  3. Windows Script Host is Enabled.
  4. I've tried installing as Administrator and other users that have admin rights.
  5. I have created the key HKLM\SOFTWARE\PostgreSQL\Installations manually and given it full rights.
  6. I have given Everyone full access to C:\Users\[username]\AppData\Local\Temp and c:\temp

Basically, everything in the top 10 Google hits.

I am working with the 32-bit version, but the 64-bit installer fails with the same error. I'm able to install successfully on other machines with a similar config.

What else can I try?

Install log file:

  Log started 09/20/2013 at 11:55:16
  Preferred installation mode : qt
  Trying to init installer in mode qt
  Mode qt successfully initialized
  Executing C:\Users\chad\AppData\Local\Temp/postgresql_installer_67b274715d/temp_check_comspec.bat
  Script exit code: 0

Script output: "test ok"

Script stderr:

  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Data Directory. Setting variable iDataDirectory to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Base Directory. Setting variable iBaseDirectory to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Service ID. Setting variable iServiceName to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Service Account. Setting variable iServiceAccount to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Super User. Setting variable iSuperuser to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Branding. Setting variable iBranding to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Version. Setting variable brandingVer to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 Shortcuts. Setting variable iShortcut to empty value
  Could not find registry key HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.2 DisableStackBuilder. Setting variable iDisableStackBuilder to empty value
  [11:55:21] Existing base directory:
  [11:55:21] Existing data directory:
  [11:55:21] Using branding: PostgreSQL 9.2 (x86)
  [11:55:21] Using Super User: postgres and Service Account: NT AUTHORITY\NetworkService
  [11:55:21] Using Service Name: postgresql-9.2
  Executing cscript //NoLogo "C:\Users\chad\AppData\Local\Temp\postgresql_installer_67b274715d\prerun_checks.vbs"
  Script exit code: 1

Script output: Input Error: There is no script engine for file extension ".vbs".

Script stderr: Program ended with an error exit code

Error running cscript //NoLogo "C:\Users\chad\AppData\Local\Temp\postgresql_installer_67b274715d\prerun_checks.vbs" : Program ended with an error exit code
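Editor's note, offered as a hedged suggestion: the line "There is no script engine for file extension .vbs" usually means the VBScript engine registration is broken even when Windows Script Host itself is enabled. Re-registering the engine sometimes clears this:

  REM Run from an elevated command prompt.
  regsvr32 %systemroot%\system32\vbscript.dll
  REM On 64-bit Windows, the 32-bit installer uses the engine under SysWOW64:
  regsvr32 %systemroot%\SysWOW64\vbscript.dll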

filtered index null and > 0 disparity -- need explanation

Posted: 20 Sep 2013 05:56 PM PDT

I'm getting behavior in a query plan that I cannot explain. The difference is between two filtered indexes I'm testing: one uses where id is not null and the other uses where id > 0. On my actual data I get a 95% better runtime using the > 0 index. I can't see why they would be different: my keys are auto-incrementing integers starting at 1, referenced by the adjoining table on a nullable column. Below is a script that will generate structures and data analogous to my production data.

Please note the following: if you execute these scripts and then compare the two select statements, you will probably get fifty-fifty performance like I did. However, on my production data the query that utilizes the > 0 filtered index chooses an index scan instead of a seek, and this scan runs much faster. Below is a screenshot of my actual query plan comparison. Second, I have rebuilt these indexes; fragmentation is not an issue.

Questions: Why the disparity? Where does the scan vs seek come from? Wouldn't is not null and > 0 be equivalent in a join when the datatype is int identity(1,1)?

[screenshot: query plan comparison]

Schema + Data:

if exists (select * from sys.tables where name = 'sometable')
begin drop table sometable end
go

create table sometable (id int not null identity(1,1) primary key
                      , value nvarchar(50));
go

insert into sometable values ('a test value');
insert into sometable values ('a test value');
insert into sometable values ('a test value');
insert into sometable values ('a test value');
insert into sometable values ('a test value');
go

if exists (select * from sys.tables where name = 'audit')
begin drop table audit end
go

create table audit (id int not null identity(1,1) primary key
                  , sometable_id int null
                  , someothertable_id int null
                  , auditvalue nvarchar(50));
go

declare @count int = 0;
while (@count < 40000)
begin
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'a sometable audit');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'another sometable audit');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,1,'irrelevant other table audit');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'another audit for record one sometable');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'record three audit');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'another record 3 audit');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,50,'irrelevant1');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,51,'irrelevant2');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,52,'irrelevant3');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'relevant fourth record');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'back to one record');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,53,'irrelevant 4');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,54,'irrelevant fifth record');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,55,'irrelevant sixth record');
insert into audit (sometable_id,someothertable_id,auditvalue) values (null,56,'irrelevant seventh record');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'a fifth record audit');
insert into audit (sometable_id,someothertable_id,auditvalue) values (floor(rand()*5+1),null,'another fourth record audit');
set @count = (@count + 1);
end
go

--drop index audit.filter_null_audits_for_sometable
create index filter_null_audits_for_sometable on audit(sometable_id,id) include(auditvalue) where sometable_id is not null;
go
--drop index audit.filter_upzero_audits_for_sometable
create index filter_upzero_audits_for_sometable on audit(sometable_id,id) include(auditvalue) where sometable_id > 0;
go

Two Queries:

select top 50000 a.id sometableid, a.value, b.id auditid, b.auditvalue
from sometable a
join audit b with(index(filter_null_audits_for_sometable)) on a.id = b.sometable_id

select top 50000 a.id sometableid, a.value, b.id auditid, b.auditvalue
from sometable a
join audit b with(index(filter_upzero_audits_for_sometable)) on a.id = b.sometable_id and b.sometable_id > 0

Update1

I copied production data into my test tables. Instead of fifty-fifty, the results matched the included query plan and reproduced the disparity. The test scenario is structurally analogous.

Update2

select a.id sometableid, a.value, b.id auditid, b.auditvalue
from sometable a
inner merge join audit b with(index(filter_null_audits_for_sometable))
    on a.id = b.sometable_id

select a.id sometableid, a.value, b.id auditid, b.auditvalue
from sometable a
join audit b with(index(filter_upzero_audits_for_sometable))
    on a.id = b.sometable_id

These query plans will not compile. Why? I'm forced to add > 0 to the join condition to get the optimized plan.

SQL Server 2008 Convert String to Datetime question

Posted: 20 Sep 2013 10:28 AM PDT

I have a string column in a table that displays data as "CXL P/D 08/15/13".

I'm trying to convert this column to datetime, but I can't figure out how to extract only date and change the data type.

I'm trying this statement, but it doesn't work:

Cast(RIGHT(RTRIM(Trade_Date), 8) as datetime)

It fails with:

Conversion failed when converting date and/or time from character string.
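A hedged sketch (the table name is a placeholder): CONVERT with style 1 tells SQL Server the trailing 8 characters are in mm/dd/yy form, which a bare CAST has to guess at:

  SELECT CONVERT(datetime, RIGHT(RTRIM(Trade_Date), 8), 1)
  FROM dbo.Trades;  -- hypothetical table name

If some rows don't end in a date at all, they will still fail the conversion; on SQL Server 2008, a CASE WHEN ISDATE(RIGHT(RTRIM(Trade_Date), 8)) = 1 THEN ... ELSE NULL END guard can skip those rows.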

Thank you

Syntax Error - Anyone Able to Help?

Posted: 20 Sep 2013 09:23 AM PDT

I'm very bad at checking syntax and I confused myself here D:

Anybody able to help?

Function:

  <?php
  require('main.php');
  require('connect.php');
  $id = $_SESSION['id'];
  $result3 = mysql_query("SELECT * FROM photo where id='$id'");
  while ($row3 = mysql_fetch_array($result3))
      $image = $row3['filename'];
  ?>

Calling the image:

  <img src="
  <?php
  if (empty($image)) {
      echo "upload/your-photo.jpg";
  } else {
      echo "site_images/" + $id + "/" + $image;
  }
  ?>"
  alt="" width="85" height="85">

Basically, it saves the uploaded file into the database with the filename and the ID of the user; it then makes a directory inside site_images named after the ID of the user and moves the image to that folder. Then, to show the image, I made it check whether the user has ever uploaded a photo, and if they haven't, to show the default. If they have uploaded a photo, it sets the source of the image to "site_images/$id/$image", basically.

This works perfectly if I put it like this.

<img src="/site_images/<?php echo $id ?>/<?php echo $image ?>">  

Any idea what I did wrong or whats wrong with my syntax?

Edit: The image src it ends up with is a link to just /15, where 15 is the ID of the user. It doesn't include site_images, but it does include the user's ID. It also doesn't include the $image variable. Any ideas?
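Editor's note, hedged: the /15 result is exactly what PHP's + operator produces here, since + is numeric addition in PHP (the non-numeric strings coerce to 0), while string concatenation uses the . operator. A minimal corrected echo:

  <?php
  // '.' concatenates; with '+' the expression collapses to the numeric id.
  echo "site_images/" . $id . "/" . $image;
  ?>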

How can I find out which tables use reference partitioning from the Oracle data dictionary?

Posted: 20 Sep 2013 01:44 PM PDT

I'm writing a generic drop-all-objects script for our Oracle databases. It generates drop table TABLE_NAME cascade constraints purge lines for all tables, amongst other object types, by looping through user_objects.

The problem is that tables which have reference-partitioned tables dependent on them can't be dropped in this way: ORA-14656: cannot drop the parent of a reference-partitioned table.

How can I detect which tables are the parents of reference partitioned tables from the data dictionary, so I can skip them in the first loop, and drop them in a second loop?
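A hedged sketch of one dictionary route, assuming 11g's USER_PART_TABLES exposes the REF_PTN_CONSTRAINT_NAME column: walk from each reference-partitioned child's foreign key constraint to the parent it references:

  SELECT DISTINCT pc.table_name AS parent_table
  FROM user_part_tables pt
  JOIN user_constraints c  ON c.constraint_name = pt.ref_ptn_constraint_name
  JOIN user_constraints pc ON pc.constraint_name = c.r_constraint_name
  WHERE pt.partitioning_type = 'REFERENCE';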

Allow self-referential foreign keys to be null from an alter-table

Posted: 20 Sep 2013 08:10 AM PDT

I am trying to establish a hierarchical relationship between rows in an existing table. There are two fields of interest: a not-nullable primary key field id, and a second nullable field parentId.

Both these fields are pre-existing, and the normal case is that parentId will be null. This seems to be a normal use case, and in fact we have some old tables using the same pattern.

I am trying to use fluentMigrator to add a foreign key constraint after the fact on these rows:

Create.ForeignKey("C", "ParentId", "C", "CId").OnUpdate(Rule.None);  

It is failing with the error:

The ALTER TABLE statement conflicted with the FOREIGN KEY SAME TABLE constraint "FK_C_ParentId". The conflict occurred in database "R", table "dbo.C", column 'CId'.

This is not a problem with illegal values, as I did a bulk update on the test system, and set all C.ParentId = null, and am still getting the same error.
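A hedged sanity check (column names as described above) to list any ParentId values that don't resolve to an existing CId, which is what the constraint validation would be complaining about:

  SELECT c.CId, c.ParentId
  FROM dbo.C AS c
  LEFT JOIN dbo.C AS p ON p.CId = c.ParentId
  WHERE c.ParentId IS NOT NULL
    AND p.CId IS NULL;  -- rows whose ParentId points at no existing row

If this returns nothing and the ALTER still fails, it may be worth confirming the migration is running against the same database that was bulk-updated.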

Thoughts?

how to join table from different database with different Users of database?

Posted: 20 Sep 2013 12:25 PM PDT

I have two databases:

Database1:

  host: 127.0.0.1
  user: root1
  pass: root1
  dbname: db1

Table 1: Category

  ID    INT(11)          Primary Key
  Name  VARCHAR(255)

Database2:

  host: 127.0.0.2
  user: root2
  pass: root2
  dbname: db2

Table 2: Product

  ID    INT(11)          Primary Key
  Name  VARCHAR(255)
  CatID INT(11)          FK( Table1: Category: ID )

I am running a query like this:

  SELECT *
  FROM db2.product AS p
  LEFT JOIN db1.category AS c ON p.CatID = c.ID

It returns the errors:

  SELECT command denied to user 'root2'@'127.0.0.2' for table 'Product'
  SELECT command denied to user 'root1'@'127.0.0.1' for table 'Category'

Can anyone help me resolve this issue?
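Editor's note, hedged: a single MySQL connection can only join schemas on the server it is connected to, so if db1 and db2 really live on different hosts, this needs something like a FEDERATED table. If they are actually two schemas on one server, granting one account rights on both is enough. A sketch of the single-server case (account names as given above):

  GRANT SELECT ON db1.* TO 'root2'@'%';
  GRANT SELECT ON db2.* TO 'root2'@'%';
  -- then connect as root2 and run the join unchanged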

help in creating a promotion/Offer table (database schema) for shopping cart site [on hold]

Posted: 20 Sep 2013 08:33 PM PDT

I want to add promotions/ offer table to my shopping cart site that have other tables as product, category.

product table:

  • ProductId
  • Productname
  • CategoryId
  • UnitPrice
  • etc.

category table:

  • CategoryId
  • CategoryName
  • ParentCategoryId
  • etc.

Offer may be of various types e.g. 'Discount of 20% on any book purchase', 'Discount of 20% on any mobile of samsung brand', 'Buy 2 get 1 free' etc.

How can I design the promotion table and then link it with the product or category table? I want to show the list of products with the offer on each product, but don't want to apply too many joins. What would be an effective way of doing this?
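One possible shape, offered as a hedged editorial sketch (all names invented): a promotion header plus a target table that can point at either a product or a whole category, so the product listing needs at most two extra joins:

  CREATE TABLE Promotion (
      PromotionId  INT PRIMARY KEY,
      Description  VARCHAR(255),      -- e.g. 'Discount of 20% on any book'
      PromoType    VARCHAR(30),       -- e.g. 'PERCENT_OFF', 'BUY_X_GET_Y'
      DiscountPct  DECIMAL(5,2) NULL,
      StartDate    DATE,
      EndDate      DATE
  );

  CREATE TABLE PromotionTarget (
      PromotionId INT NOT NULL REFERENCES Promotion (PromotionId),
      ProductId   INT NULL,           -- set for product-level offers
      CategoryId  INT NULL            -- set for category-level offers
  );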

Database Tuning Advisor about Indexing

Posted: 20 Sep 2013 09:46 AM PDT

How does DTA make an index recommendation? What is the technology behind it? If we could use the same technique on our side, we could shorten the whole tuning-recommendation process.

  Like:
  (1) Create a trace file using SQL Profiler
  (2) Use that trace file in DTA
  (3) Then look at the recommendations
  (4) Follow the recommendations and finally see the results through the query execution plan.

So I just wanted to know whether there is any way to predict the recommended indexes ourselves.
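A hedged pointer, rather than DTA's actual internals: SQL Server also records the optimizer's own index wishes in the missing-index DMVs, which can be queried directly without a Profiler trace:

  SELECT d.statement AS table_name,
         d.equality_columns, d.inequality_columns, d.included_columns,
         s.user_seeks, s.avg_user_impact
  FROM sys.dm_db_missing_index_details d
  JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
  JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
  ORDER BY s.avg_user_impact * s.user_seeks DESC;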

FOR loop Oracle query with calculation

Posted: 20 Sep 2013 11:37 AM PDT

I have a list of distinct areas and would like to track statistics for those districts in a SQL statement. I currently have a setup that runs some summary information, but it doesn't include the whole list.

What I'd like to do is Select a distinct array of values from 1 table, then run SQL for each of those values against a few other tables and populate a third table. I've done a fair amount of Googling, and see that I can loop using "LOOP" or feed in the distinct values via a CURSOR.

Here's the SQL I'm using now, but if a dispatchgroup isn't in the current outages table, it doesn't show statistics for that dispatch group.

  select a.DISPATCHGROUP, a.WEATHER, SUM(a.NUM_CUST) as out_cust, count(a.eventnum) as events,
    (select count(*) from v_outcall_rpt b, v_current_outages_rpt c
      where b.isopen = 'T' and b.dgroup = a.dispatchgroup) as calls,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup and c.event_type_code = 'TOEL') as ISOLATED,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup
        and (c.event_type_code = 'TOTO' or c.event_type_code = 'TOTU' or c.event_type_code = 'TOVT' or c.event_type_code = 'TOPM')) as TSTATION,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup
        and (c.event_type_code = 'TOFS' or c.event_type_code = 'TOSC' or c.event_type_code = 'TORL' or c.event_type_code = 'TOLC' or c.event_type_code = 'TOSW')) as lateral,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup
        and (c.event_type_code = 'TOCR' or c.event_type_code = 'TOSB' or c.event_type_code = 'TOTL')) as mainline,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup and (c.event_type_code = 'TOEE')) as emergency,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup and (c.event_type_code = 'TOEC')) as cleanup,
    (select count(*) from v_outage_duration_rpt
      where substr(restore_dts,1,8) = (select SUBSTR(max(view_dts),1,8) from v_current_outages_rpt)
        and a.dispatchgroup = dgroup) as restored,
    (select count(*) from v_outcall_rpt
      where substr(offtime,1,8) = (select SUBSTR(max(view_dts),1,8) from v_current_outages_rpt)
        and a.dispatchgroup = dgroup) as callstoday
  from v_current_outages_rpt a
  where weather is not null
  group by a.DISPATCHGROUP, WEATHER;

When I try adding select distinct dgroup from MV_WE_AGENCY_LOOKUP as dispatchgroup, it errors out with "single-row subquery returns more than one row".

Ideally, what I'd like to happen is: first I select the distinct dgroups from the first table, then loop through the SQL and output the statistics. Here's my stab at the pseudocode (an editorial alternative sketch follows it).

  FOR EACH DGROUP IN (select distinct dgroup from MV_WE_AGENCY_LOOKUP)
  LOOP
    a.WEATHER, SUM(a.NUM_CUST) as out_cust, count(a.eventnum) as events,
    (select count(*) from v_outcall_rpt b, v_current_outages_rpt c
      where b.isopen = 'T' and b.dgroup = %dgroup) as calls,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = %dgroup and c.event_type_code = 'TOEL') as ISOLATED,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = %dgroup
        and (c.event_type_code = 'TOTO' or c.event_type_code = 'TOTU' or c.event_type_code = 'TOVT' or c.event_type_code = 'TOPM')) as TSTATION,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup
        and (c.event_type_code = 'TOFS' or c.event_type_code = 'TOSC' or c.event_type_code = 'TORL' or c.event_type_code = 'TOLC' or c.event_type_code = 'TOSW')) as lateral,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = a.dispatchgroup
        and (c.event_type_code = 'TOCR' or c.event_type_code = 'TOSB' or c.event_type_code = 'TOTL')) as mainline,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = %dgroup and (c.event_type_code = 'TOEE')) as emergency,
    (select count(*) from v_current_outages_rpt c
      where c.dispatchgroup = %dgroup and (c.event_type_code = 'TOEC')) as cleanup,
    (select count(*) from v_outage_duration_rpt
      where substr(restore_dts,1,8) = (select SUBSTR(max(view_dts),1,8) from v_current_outages_rpt)
        and %dgroup = dgroup) as restored,
    (select count(*) from v_outcall_rpt
      where substr(offtime,1,8) = (select SUBSTR(max(view_dts),1,8) from v_current_outages_rpt)
        and %dgroup = dgroup) as callstoday
    from v_current_outages_rpt a
    where weather is not null
    group by dgroup, WEATHER;
  END LOOP
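Editor's sketch, hedged: rather than a PL/SQL loop, driving the query from the distinct dgroup list with an outer join keeps groups that have no current outages (only the views/tables named above are used; the correlated subqueries would be kept as in the original):

  select g.dgroup, a.weather, sum(a.num_cust) as out_cust, count(a.eventnum) as events
  from (select distinct dgroup from mv_we_agency_lookup) g
  left join v_current_outages_rpt a
         on a.dispatchgroup = g.dgroup
        and a.weather is not null
  group by g.dgroup, a.weather;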

Location of the mdf file of the database

Posted: 20 Sep 2013 12:37 PM PDT

I have a database named Project. My problem is: where can I find the .mdf and _log.ldf files of my database, so that I can transfer the database to another user? I am using Microsoft SQL Server 2008 Management Studio.

[screenshot: database properties dialog]

My Ssms.exe is stored in this path

C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe  
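A hedged shortcut that avoids hunting through the dialog: the catalog view sys.master_files records the physical path of every database file:

  SELECT name, physical_name
  FROM sys.master_files
  WHERE database_id = DB_ID('Project');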

mongo replication not happening

Posted: 20 Sep 2013 08:22 AM PDT

I set up master/slave replication of MongoDB on EC2, but I see no replication happening. When I do "show dbs" on the master, it shows all the dbs expected.

But when I do the same on the replica, it does not show me any dbs.

Please help me troubleshoot.

  rs.config()
  {
      "_id" : "ittw",
      "version" : 1,
      "members" : [
          {
              "_id" : 0,
              "host" : "ip-10-304-48-93:27017"
          }
      ]
  }

  rs.config()
  {
      "_id" : "ittw",
      "version" : 2,
      "members" : [
          {
              "_id" : 0,
              "host" : "domU-17-31-19-16-88-5F:27017"
          },
          {
              "_id" : 1,
              "host" : "ec2-50-321-52-908.compute-1.amazonaws.com:27017"
          }
      ]
  }

  rs.status() // replica
  {
      "set" : "ittw",
      "date" : ISODate("2013-08-12T06:55:57Z"),
      "myState" : 1,
      "members" : [
          {
              "_id" : 0,
              "name" : "$ip:27017",
              "health" : 1,
              "state" : 1,
              "stateStr" : "PRIMARY",
              "uptime" : 356039,
              "optime" : Timestamp(1375934685, 1),
              "optimeDate" : ISODate("2013-08-08T04:04:45Z"),
              "self" : true
          }
      ],
      "ok" : 1
  }

  rs.status() // Master
  {
      "set" : "ittw",
      "date" : ISODate("2013-08-12T06:57:19Z"),
      "myState" : 1,
      "members" : [
          {
              "_id" : 0,
              "name" : "PRI_IP:27017",
              "health" : 1,
              "state" : 1,
              "stateStr" : "PRIMARY",
              "uptime" : 356543,
              "optime" : Timestamp(1376289725, 1),
              "optimeDate" : ISODate("2013-08-12T06:42:05Z"),
              "self" : true
          },
          {
              "_id" : 1,
              "name" : "REP_IP:27017",
              "health" : 1,
              "state" : 1,
              "stateStr" : "PRIMARY",
              "uptime" : 355869,
              "optime" : Timestamp(1375934685, 1),
              "optimeDate" : ISODate("2013-08-08T04:04:45Z"),
              "lastHeartbeat" : ISODate("2013-08-12T06:57:17Z"),
              "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
              "pingMs" : 1
          }
      ],
      "ok" : 1
  }
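An editorial observation, hedged: both rs.status() outputs report stateStr PRIMARY, and the replica only sees a one-member set, which looks as if the second mongod was never actually joined to the first node's replica set. The usual sequence, run from the mongo shell on the intended primary (hostnames are the placeholders shown above):

  rs.initiate()            // only once, on the intended primary
  rs.add("REP_IP:27017")   // add the second member to the set
  rs.status()              // the new member should progress to SECONDARY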

Filter on a window function without writing an outer SELECT statement

Posted: 20 Sep 2013 05:22 PM PDT

Since window functions cannot be included in the WHERE clause of the inner SELECT, is there another method that could be used to write this query without the outer SELECT statement? I'm using Oracle. Here is the sqlfiddle.

  SELECT MERCHANTID, WAREHOUSEID, PRODUCTCODE
  FROM (
    SELECT 0
    , WMP.MERCHANTID
    , WMP.WAREHOUSEID
    , WMP.PRODUCTCODE
    , RANK() OVER (PARTITION BY ML.MASTERMERCHANTID, WMP.PRODUCTCODE ORDER BY ML.PREFERENCEORDER ASC NULLS LAST) MERCH_RANK
    , RANK() OVER (PARTITION BY WMP.MERCHANTID, WMP.PRODUCTCODE ORDER BY WM.PREFERENCEORDER ASC NULLS LAST) WARE_RANK
    FROM MW_WAREHOUSEMERCHANTPRODUCT WMP
      LEFT OUTER JOIN MW_WAREHOUSEMERCHANT WM ON 0=0
                  AND WMP.MERCHANTID  = WM.MERCHANTID
                  AND WMP.WAREHOUSEID = WM.WAREHOUSEID
      LEFT OUTER JOIN MW_MERCHANTLINK ML ON 0=0
                  AND WMP.MERCHANTID = ML.LINKEDMERCHANTID
      LEFT OUTER JOIN MW_WAREHOUSEMERCHANTPRODUCT MASTER ON 0=0
                  AND ML.MASTERMERCHANTID = MASTER.MERCHANTID
                  AND WMP.PRODUCTCODE     = MASTER.PRODUCTCODE
    WHERE 0=0
      AND WMP.STOCKLEVEL > 0
      AND NVL(MASTER.STOCKLEVEL, 0) <= 0
  )
  WHERE 0=0
    AND MERCH_RANK = 1
    AND WARE_RANK  = 1
  ;

Errors while creating multiple mysql-5.5 instances

Posted: 20 Sep 2013 09:22 AM PDT

I have installed a 3rd MySQL instance on my testing server.

Two instances are already running without any issues.

When I installed the 3rd instance from the mysql-5.5.30 zip source, it installed successfully, but when I try to restart the 3rd instance of MySQL it says:

MySQL server PID file could not be found! [FAILED]

Starting MySQL........................................................

The server quit without updating PID file.

1st instance running on 3305

  BASEDIR: /usr/local/mysql
  Configuration File: /etc/my.cnf
  Socket: /tmp/mysql.stock

2nd instance running on 3306

  BASEDIR: /backup/mysql-cbt
  Configuration File: /backup/mysql-cbt/my.cnf
  Socket: /backup/mysql-cbt/mysql.stock

3rd instance running on 3307

  BASEDIR: /home/mysql-5/
  Configuration File: /home/mysql-5/my.cnf
  Socket: /home/mysql-5/mysql.stock

Error Log is as follows.

  130513 11:22:23 mysqld_safe Starting mysqld daemon with databases from /backup/mysql-cbt/data
  130513 11:22:23 InnoDB: The InnoDB memory heap is disabled
  130513 11:22:23 InnoDB: Mutexes and rw_locks use GCC atomic builtins
  130513 11:22:23 InnoDB: Compressed tables use zlib 1.2.3
  130513 11:22:23 InnoDB: Using Linux native AIO
  130513 11:22:23 InnoDB: Initializing buffer pool, size = 128.0M
  130513 11:22:23 InnoDB: Completed initialization of buffer pool
  130513 11:22:23 InnoDB: highest supported file format is Barracuda.
  130513 11:22:23 InnoDB: Waiting for the background threads to start
  130513 11:22:24 InnoDB: 5.5.30 started; log sequence number 1595675
  130513 11:22:24 [ERROR] /backup/mysql-cbt/bin/mysqld: unknown option '--safe-show-database'
  130513 11:22:24 [ERROR] Aborting
  130513 11:22:24 InnoDB: Starting shutdown...
  130513 11:22:25 InnoDB: Shutdown completed; log sequence number 1595675
  130513 11:22:25 [Note] /backup/mysql-cbt/bin/mysqld: Shutdown complete
  130513 11:22:25 mysqld_safe mysqld from pid file /backup/mysql-cbt/cbt-instance.pid ended

Still unable to figure out this error. How can I start the 3rd instance?
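Editor's note, hedged: the log's unknown option '--safe-show-database' points at a config file being read that still carries this long-removed option; mysqld reads /etc/my.cnf before the per-instance file, so the first instance's shared config is a likely source. Starting with an explicit defaults file sidesteps it (note that --defaults-file is only honored as the first option on the command line):

  /backup/mysql-cbt/bin/mysqld_safe --defaults-file=/backup/mysql-cbt/my.cnf &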

Installation

Here is the story from beginning. I have installed mysql via source:

  1. extracted the source mysql-5.5.30-linux2.6-x86_64.tar.gz to a directory, then recursively changed ownership of the mysql directory to user & group mysql.
  2. in the scripts directory of mysql I ran this command as root:
  [root@server /backup/mysql-cbt/scripts]# ./mysql_install_db --basedir=/backup/mysql-cbt/ --datadir=/backup/mysql-cbt/data --defaults-file=/backup/mysql-cbt/my.cnf --user=mysql
  Installing MySQL system tables...
  OK
  Filling help tables...
  OK

  To start mysqld at boot time you have to copy
  support-files/mysql.server to the right place for your system

  PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
  To do so, start the server, then issue the following commands:

  /backup/mysql-cbt//bin/mysqladmin -u root password 'new-password'
  /backup/mysql-cbt//bin/mysqladmin -u root -h 69.16.196.128 password 'new-password'

  Alternatively you can run:
  /backup/mysql-cbt//bin/mysql_secure_installation
  which will also give you the option of removing the test
  databases and anonymous user created by default.  This is
  strongly recommended for production servers.

  See the manual for more instructions.

  You can start the MySQL daemon with:
  cd /backup/mysql-cbt/ ; /backup/mysql-cbt//bin/mysqld_safe &

  You can test the MySQL daemon with mysql-test-run.pl
  cd /backup/mysql-cbt//mysql-test ; perl mysql-test-run.pl

  Please report any problems with the /backup/mysql-cbt//scripts/mysqlbug script!

When I start this instance, it gives the error about updating the PID file and exits.

Which step is missing?

My my.cnf file (/backup/mysql-cbt/my.cnf):

  [mysqld]
  federated
  basedir         = /backup/mysql-cbt
  datadir         = /backup/mysql-cbt/data
  log-error       = /backup/mysql-cbt/cbt-instance.err
  pid-file        = /backup/mysql-cbt/cbt-instance.pid
  #socket         = /tmp/mysql.sock
  socket          = /backup/mysql-cbt/mysql.sock
  port            = 3310
  user            = mysql
  tmpdir          = /backup/mysql-cbt/tmpdir

/etc/init.d/mysql_cbt

  # If you want to affect other MySQL variables, you should make your changes
  # in the /etc/my.cnf, ~/.my.cnf or other MySQL configuration files.

  # If you change base dir, you must also change datadir. These may get
  # overwritten by settings in the MySQL configuration files.

  basedir=/backup/mysql-cbt
  datadir=/backup/mysql-cbt/data

  # Default value, in seconds, afterwhich the script should timeout waiting
  # for server start.
  # Value here is overriden by value in my.cnf.
  # 0 means don't wait at all
  # Negative numbers mean to wait indefinitely
  service_startup_timeout=900

  # Lock directory for RedHat / SuSE.
  lockdir='/var/lock/subsys'
  lock_file_path="$lockdir/mysql"

  # The following variables are only set for letting mysql.server find things.

  # Set some defaults
  mysqld_pid_file_path=/backup/mysql-cbt/mysql-cbt-instance.pid
  if test -z "$basedir"
  then
    basedir=/usr/local/mysql-cbt
    bindir=/usr/local/mysql-cbt/bin
    if test -z "$datadir"
    then
      datadir=/backup/mysql-cbt/data
    fi
    sbindir=/backup/mysql-cbt/bin
    libexecdir=/backup/mysql-cbt/bin
  else
    bindir="$basedir/bin"
    if test -z "$datadir"
    then
      datadir="$basedir/data"
    fi
    sbindir="$basedir/sbin"
    libexecdir="$basedir/libexec"
  fi

How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2?

Posted: 20 Sep 2013 08:22 PM PDT

I am trying to setup DBMS_SCHEDULER in Oracle 11g to run a remote database job.

I have a remote Oracle 11g R2 database on unix and a local one on Windows.

I read that you can install the Oracle Scheduler agent from the 11g client install for machines that don't have Oracle installed, but that this is not needed for running remote jobs if Oracle is present on both machines. With the remote agent installation, you run schagent and provide parameters to register the agent with the remote machine, but I can't find any instructions on the web about how to register remote agents when both machines have Oracle installed, or about what to use as the agent name in this case.

I have added an entry to tnsnames.ora for the remote DB and can tnsping, etc.

If I run the dbms_scheduler.create_database_destination procedure, it requires an agent name, but where can I find this for the database, and how can I check that it is running, on either Unix or Windows?

Loading XML documents to Oracle 11g DB with control file

Posted: 20 Sep 2013 06:22 PM PDT

I am using the Oracle 11g XML database and trying to load XML documents into it with a control file and the sqlldr utility. All these XML files have an element that contains a date string with a timestamp (with the letter T in the middle of it). Oracle rejects this date string because of the T in it, and thus the XML file is not loaded into the DB.

I want to apply the Oracle function TO_TIMESTAMP_TZ to the date string during the data load, but I do not know how to do it; that's where I need help. If there is any other way to import the XML (with the timestamped date string), I will try that also.

Here is the date entry in XML file:

<ns3:EntryDateTime cls="U">2013-04-20T21:02:52.468-04:00</ns3:EntryDateTime>  

And here is entire code the control file:

  load data
  infile 'filelist.dat'
  append into table STXP
  xmltype(XMLDATA)
  (
    filename filler char(120),
    XMLDATA lobfile(filename) terminated by eof
  )

I believe that I can also execute the above control file with the sqlldr utility from the command line, but I'm not sure about this option. If this is possible, I guess I can ALTER SESSION (to somehow set the date format) on the command line before executing the control file.
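Editor's suggestion, hedged and unverified for the XML-decomposition path: sqlldr is a client tool, so it honors NLS environment variables rather than an ALTER SESSION issued in a separate session; exporting an ISO-8601-style timestamp mask before the load may let the T-containing strings convert as-is:

  export NLS_TIMESTAMP_TZ_FORMAT='YYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM'
  sqlldr userid=scott/tiger control=stxp.ctl   # credentials and control file name are placeholders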

The filelist.dat mentioned above contains entries for input XML file, with one line listing one XML file. The above date entry is required in each XML file. Each XML file has about 50 different elements, some required and some optional. I would greatly appreciate your help.

UPDATE: I successfully registered the schema that contains the definition for the date string, along with 100 other schemas, using a script. Since this script is very large, I am posting only 2 registration portions of it:

  DECLARE
    SCHEMAURL VARCHAR2( 100 );
    SCHEMADOC VARCHAR2( 100 );
  BEGIN
    SCHEMAURL := 'http://www.some.org/stxp/DataTypes.xsd';
    SCHEMADOC := 'DataTypes.xsd';
    DBMS_XMLSCHEMA.registerSchema(
        SCHEMAURL,
        BFILENAME( 'XSD_DIR', SCHEMADOC ),
        LOCAL => TRUE,       -- local
        GENTYPES => TRUE,    -- generate object types
        GENBEAN => FALSE,    -- no java beans
        GENTABLES => TRUE,   -- generate object tables
        OWNER => USER );

    SCHEMAURL := 'http://www.some.org/stxp/STXP.xsd';
    SCHEMADOC := 'STXP.xsd';
    DBMS_XMLSCHEMA.registerSchema(
        SCHEMAURL,
        BFILENAME( 'XSD_DIR', SCHEMADOC ),
        LOCAL => TRUE,       -- local
        GENTYPES => TRUE,    -- generate object types
        GENBEAN => FALSE,    -- no java beans
        GENTABLES => TRUE,   -- generate object tables
        OWNER => USER );
  END;
  /

The 2nd registration above is the last one in the script, and it creates the table STXP, into which I am trying to load about 800 XML files. Each XML file has a root element called stxp.

This is the relevant definition of date string:

  <xsd:simpleType name="DT" xdb:SQLType="TIMESTAMP WITH TIME ZONE">
    <xsd:restriction base="xsd:dateTime"/>
  </xsd:simpleType>

And this is how I am using the above definition:

<element name="EntryDateTime" type="oth:DT"/>  

When I make the above element optional (for testing purposes) and remove the date string entry (mentioned near the top of this question) from my XML file, the XML file is loaded successfully into the Oracle XML database. When I put this entry back into the XML file (because it is required), Oracle rejects it.

Because I let Oracle take care of populating the STXP table with data from the XML files, I am not sure if I can set a trigger to pre-process the date string from the input XML file before saving it in the database. I think there is a way to do it in the control file.

Enabling/disabling/changing Oracle auditing without a shutdown?

Posted: 20 Sep 2013 01:22 PM PDT

I have a large database that needs auditing at a very detailed level (every select, update, insert, and delete, along with the actual text of the statement) for about half the users. I know how to do this (here is a related question for anyone interested), but I also realize we cannot do this for any extended amount of time because of how quickly we would be collecting massive amounts of data. So while there is a scheduled downtime coming up during which we can implement the auditing, fine-tuning it later (as management changes what data they want) or disabling it once we have enough data would apparently require taking the database down again. While this wouldn't be too horrible if we scheduled a short downtime late at night, it would be really nice if it could be avoided altogether, but every reference I've seen so far requires the database to be brought down and back up.

So, my question (which I believe to be general enough for the purposes of this site, even though the back story is specific) is if there is a way to enable/disable/change auditing without shutting down the database.

Edit: Oracle version 11gR2. As for AUD$ vs. FGA, I'm not sure what FGA is, but AUD$ is the table which will hold the data, so I am assuming that one.
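Editor's note, hedged: in general only the AUDIT_TRAIL initialization parameter needs a restart; individual AUDIT/NOAUDIT statements (and DBMS_FGA policies, where FGA is fine-grained auditing) take effect without a bounce. A minimal sketch with placeholder names:

  -- enable detailed auditing on one table, immediately
  AUDIT SELECT, INSERT, UPDATE, DELETE ON some_schema.some_table BY ACCESS;
  -- and switch it off again later, also without a restart
  NOAUDIT SELECT, INSERT, UPDATE, DELETE ON some_schema.some_table;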

MySQL - run SELECT statement on another server without defining the table structure

Posted: 20 Sep 2013 04:22 PM PDT

In MySQL I can query information on another server using federated tables, as long as I've defined the same table structure locally.

In MS SQL Server, however, I can run any SQL statement against a linked server. Is it possible to do the same thing in MySQL?
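For reference, a hedged sketch of the FEDERATED workaround the question wants to avoid; the local definition (names invented) has to mirror the remote table column-for-column:

  CREATE TABLE remote_orders (
      id    INT NOT NULL,
      total DECIMAL(10,2),
      PRIMARY KEY (id)
  ) ENGINE=FEDERATED
    CONNECTION='mysql://user:pass@remote-host:3306/shop/orders';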

consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?

Posted: 20 Sep 2013 07:22 PM PDT

Mysql 5.5.29 Innodb- 128GB Ram - 32 cores - Raid 10 SSD.

Our server, a dedicated KVM guest on a bare-metal host, hosts our heavy read-write DB server. Everything is file-per-table. innodb_buffer_pool is 96GB with 1GB x 2 log_file_size, and it takes about 20 minutes of writes to fill up those logs at peak time.

How bad would it be if O_DIRECT (we currently run the default) were enabled during a high workload without a battery-backed write cache, and a total crash occurred on the OS or parent host, or the power were cut?

Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?


DB2 to require password each time

Posted: 20 Sep 2013 11:22 AM PDT

I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Therefore, the db2inst1 user does not require username/password authentication (it borrows them from the OS). I would like to change that, and force a username/password to be requested every time a connection is initiated.

More specifically, this is how the authentication configuration looks like:

db2 get dbm cfg|grep -i auth

  GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
  Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
  Database manager authentication        (AUTHENTICATION) = CLIENT
  Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
  Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
  Trusted client authentication          (TRUST_CLNTAUTH) = SERVER
  Bypass federated authentication            (FED_NOAUTH) = NO

db2 connect to dbName

  Database Connection Information

  Database server        = DB2/LINUXX8664 10.1.0
  SQL authorization ID   = DB2INST1
  Local database alias   = DBNAME

db2 connect to dbName user db2inst1 using password

  SQL1639N  The database server was unable to perform authentication because
  security-related database manager files on the server do not have the required
  operating system permissions.  SQLSTATE=08001

I have played with some combinations of "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.
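A hedged pointer: with AUTHENTICATION = CLIENT (as shown above), the server trusts the client/OS identity. Switching the instance to server-side authentication should force credentials on every connection; dbm cfg changes need an instance restart:

  db2 update dbm cfg using AUTHENTICATION SERVER
  db2stop
  db2start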

oracle streams apply: how to get a reason why LCR message was not applied

Posted: 20 Sep 2013 12:22 PM PDT

I've set up bidirectional oracle streams replication (11gR1) using identical scripts on both machines (DB1 and DB2). Although changes from DB1 are being applied to DB2, changes from DB2 to DB1 aren't.

I have only one rule for the capture processes, which checks the apply tag to prevent cyclic propagation, and no rules for the apply processes. LCRs from DB2 are dequeued at DB1 by the apply reader process (update LCRs are among the dequeued messages for sure, because when I issue 50 inserts at DB2, the dequeued-messages counter at DB1 increases by 50), but they aren't processed by the apply coordinator and servers:

  Apply           Apply           Apply Apply
  Name            Queue           Tag   Status
  --------------- --------------- ----- ------------------------------
  FROM_DB2_APP    FROM_DB2_APP_Q  02    ENABLED

  Apply                                Messages       Last
  Name            State                Dequeued        SCN
  --------------- ------------------ ---------- ----------
  FROM_DB2_APP    DEQUEUE MESSAGES          102    1118751

  Apply                             Trans   Trans   Trans   Trans   Trans   Trans
  Name            State             Appld  Assign    Rcvd   Ignrd  Rollbk   Error
  --------------- --------------- ------- ------- ------- ------- ------- -------
  FROM_DB2_APP    IDLE                  0       0       0       0       0       0

  Apply           Server  Messages
  Name                ID   Applied State
  --------------- ------ --------- ------------------------------
  FROM_DB2_APP         1         0 IDLE

As far as I understand, in that case LCRs can be silently ignored (without throwing an apply error) only if the SCN of the LCR is less than the instantiation SCN for the table, but the instantiation SCN is 1114348 (< 1118751):

  Source       Object       Object       Instantiation
  Database     Owner        Name                   SCN
  ------------ ------------ ------------ -------------
  DB2          DUMMYUSR     DUMMYTBL           1114348

Oracle provides means to deal with errors, but how do I check why a message was not applied if there was no error?

SQL Server 2008 Setup Error 0x80070490

Posted: 20 Sep 2013 02:22 PM PDT

I am trying to install SQL Server 2008 x64 on Windows 2008 R2 and keep getting the following error:

SQL Server Setup has encountered the following error: Element not found. (Exception from HRESULT: 0x80070490)

I have applied all required patches and there are no other instances of SQL Server on the machine.

Any clues as to what the cause might be?

Thanks.

Replicating data from Oracle to MySQL

Posted: 20 Sep 2013 03:22 PM PDT

I work with a vendor that does data analytics, and they currently receive a replication stream from some of our databases using a product called Goldengate (which is very expensive). Goldengate has been great - it replicates transactions from the Tandem-NSK source and can apply the changes into any supported database - they're using MySQL at the remote end. We're switching our billing system to Oracle, and while we could continue to use Goldengate to move these logs, I'd like to see if there's another option.

We initially chose Goldengate because nothing else could get data out of the Tandem NSK, but now that we're moving to Oracle, there may be some more native (or at least simpler) choices. I've got nothing against Goldengate - like I said, it works great - but I'm hoping that two mainstream databases are easier to replicate between than the NSK.

Are there any products or methods that would help get transactional data from an Oracle system into a MySQL database? I'm not sure if there's any way to do this kind of replication natively (I know we can do Oracle -> MSSQL using native replication, but I'm not aware of any way to target MySQL), or if anybody knows of a product that could facilitate this (and costs less than Goldengate).

Thanks for any suggestions!

MI Data Warehouse Advice

Posted: 20 Sep 2013 10:22 AM PDT

I have recently started a new job and part of my remit is to try to rescue the Management Information (MI) Data Warehouse. I use the term Data Warehouse very loosely here!

The server setup is:

  • Windows Server 2008
  • MS SQL Server 2008 R2 Standard Edition

The disks split in to 3 drives:

  • C: (OS) & System Databases

  • D: Log Files (Approx 120GB Free)

  • E: Data Files (Approx 1.3TB Free)

These are the observations I have made regarding the database:

  • The schema is exactly the same as the equivalent production database schema. So rather than extracting & transforming the data into a Data Warehouse-style schema, it is basically a copy of production that is used for reporting.
  • The tables do not have any indexes, primary keys or foreign keys.
  • Most information required for reporting is located in 5-10 core tables. These tables range from 10-15 million rows to 1.6 billion rows.
  • There are no stored procedures or views. All reports use embedded SQL (most of these reports are Excel docs.)
  • I have found some tables that contain duplicate rows that shouldn't be there.
  • There are hundreds of logins with access that no one seems to know who they are or what they need them for.

Importing data

The data is imported using batch files and OSQL. It is slow, clunky and prone to failure (it has failed 4 times and I have only been there for two and a half weeks). The logging is also poor.

So apart from all that, it is perfect...

I need to find a way to fight my way out of this mess but I am not sure how to go about it.

Ideally, I want to be able to:

  • migrate the import process to use staging tables and SSIS
  • modify the existing schema to include referential integrity
  • create indexes to improve performance
  • create the reports in SSRS instead of random Excel docs dotted around
  • lock down the access (or at least perform some sort of audit)

The main issue at the moment is performance. I have created a new filegroup on drive D: (where the log files are stored) and placed a few non-clustered indexes on it. I am being slightly cautious, as I don't want to increase the import times; these are taking too long as it is!
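For concreteness, a hedged sketch of that filegroup-plus-index approach (database, file, table and index names are all invented):

  ALTER DATABASE MIWarehouse ADD FILEGROUP FG_Indexes;
  ALTER DATABASE MIWarehouse
      ADD FILE (NAME = 'MIWarehouse_ix1',
                FILENAME = 'D:\SQLData\MIWarehouse_ix1.ndf',
                SIZE = 10GB)
      TO FILEGROUP FG_Indexes;
  CREATE NONCLUSTERED INDEX IX_BigTable_ReportDate
      ON dbo.BigTable (ReportDate)
      ON FG_Indexes;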

I wanted to partition the larger tables but partitioning is not included in Standard, it is an Enterprise feature.

I realise that this is a pretty huge task and I am not looking for a magic fix but a little guidance on how to attack this would be a great help.

EDIT:

I should also point out that there is no test or dev environment for this either...

How can I convert from Double Precision to Bigint with PostgreSQL?

Posted: 20 Sep 2013 10:32 AM PDT

I need to convert a value of Double Precision to Bigint with PostgreSQL. How can I do that?

I have tried to_bigint(myvalue), but that function doesn't exist.
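For what it's worth, a standard cast does this in PostgreSQL (table and column names are placeholders); note that casting double precision to bigint rounds to the nearest whole number:

  SELECT CAST(myvalue AS bigint) FROM mytable;
  SELECT myvalue::bigint FROM mytable;  -- PostgreSQL shorthand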
