Thursday, June 13, 2013

[how to] Filtering records not present with specific conditions in another table

Filtering records not present with specific conditions in another table

Posted: 13 Jun 2013 08:18 PM PDT

I have a table (let's call it extra_info) in MySQL that holds various extra information/labels on user accounts:

account_id / data_key
11 / 'test'
11 / 'blah'
12 / 'coupon a'
12 / 'coupon b'
15 / 'whatever'
18 / 'test'

And so on.

I am building reports on other tables but need to filter out accounts that appear in the extra_info table with the label 'test'.

So in the example above, accounts 1-10, 13-14, and 16-17 are fine because they are not in extra_info; 11 needs to be filtered out (it is labeled 'test'); 12 is fine (but it obviously must appear only once even though it has two labels); 15 is fine (not 'test'); and 18 needs to be removed.

I have done this with a subquery in the WHERE clause:

WHERE
  -- filter out test labels
  a.id NOT IN (
    SELECT ei.account_id FROM extra_info AS ei
    WHERE ei.data_key = 'test'
  )

This does its job; however, I suspect a regular join would have been faster. What's the right syntax for it?

If I just join and filter out the rows with data_key = 'test', #11 would still be there, so that is not a solution. Grouping by account_id would probably have performance implications and basically the same impact as the current code, right?

Any ideas?
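For what it's worth, the join form being asked about is the classic LEFT JOIN ... IS NULL anti-join. Here is a minimal sketch of both forms side by side, using SQLite from Python so it can be run anywhere (the accounts table and its contents are invented for the demo):

```python
import sqlite3

# In-memory tables mirroring the question: a hypothetical accounts
# table and the extra_info label table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO accounts (id) VALUES (?)",
                [(i,) for i in range(1, 19)])
cur.execute("CREATE TABLE extra_info (account_id INTEGER, data_key TEXT)")
cur.executemany("INSERT INTO extra_info VALUES (?, ?)",
                [(11, 'test'), (11, 'blah'), (12, 'coupon a'),
                 (12, 'coupon b'), (15, 'whatever'), (18, 'test')])

# The question's NOT IN subquery
not_in = cur.execute("""
    SELECT a.id FROM accounts a
    WHERE a.id NOT IN (
        SELECT ei.account_id FROM extra_info ei
        WHERE ei.data_key = 'test')
    ORDER BY a.id
""").fetchall()

# Equivalent LEFT JOIN anti-join: join only the 'test' rows and keep
# the accounts where nothing matched.
anti_join = cur.execute("""
    SELECT a.id FROM accounts a
    LEFT JOIN extra_info ei
      ON ei.account_id = a.id AND ei.data_key = 'test'
    WHERE ei.account_id IS NULL
    ORDER BY a.id
""").fetchall()

assert not_in == anti_join
print([r[0] for r in anti_join])  # 11 and 18 are gone; 12 and 15 remain
```

Note that no GROUP BY is needed in the anti-join: a surviving account by definition matched zero 'test' rows, so it produces exactly one output row.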

How to count number of consecutive events matching status of last event?

Posted: 13 Jun 2013 03:48 PM PDT

In MS-SQL Server 2008, I have a table of events in the following format:

Controller  |  ExecutionTime        |  Result
1           |  2012-09-24 09:00:00  |  0
1           |  2012-09-24 09:01:00  |  0
2           |  2012-09-24 09:02:00  |  1

I need to analyze this table to produce a result that:

  • Identifies each controller whose latest event was an error (result=1)
  • Identifies how many consecutive failures that controller has had (# events since the most recent successful event)

I've made some progress toward this task: I can identify the controllers whose most recent event was a failure (see this SQL Fiddle). But I could use some help finding an approach to counting the number of consecutive failures that led to this state.
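One way to frame the counting step: for each controller whose newest event is an error, walk its events newest-first and count until the first success. A small Python sketch of that logic (the fourth event row is invented to create a streak; in T-SQL this kind of run-counting is typically done with ROW_NUMBER() arithmetic over ExecutionTime, a gaps-and-islands technique):

```python
from collections import defaultdict

# (controller, execution_time, result); result 1 = error, 0 = success
events = [
    (1, "2012-09-24 09:00:00", 0),
    (1, "2012-09-24 09:01:00", 0),
    (2, "2012-09-24 09:02:00", 1),
    (2, "2012-09-24 09:03:00", 1),  # hypothetical extra failure
]

by_controller = defaultdict(list)
for controller, ts, result in events:
    by_controller[controller].append((ts, result))

streaks = {}
for controller, rows in by_controller.items():
    rows.sort(reverse=True)          # newest event first
    if rows[0][1] != 1:
        continue                     # latest event was a success: skip
    count = 0
    for _, result in rows:
        if result != 1:
            break                    # stop at the most recent success
        count += 1
    streaks[controller] = count

print(streaks)  # controller 2 has 2 consecutive failures
```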

Thanks! John

Is backing a website by a SQL Server 2012 OLAP cube considered reasonable?

Posted: 13 Jun 2013 04:09 PM PDT

I have been tasked with architecting a solution for a large retail chain. They want to allow each of their 1.2 million customers to log on to a website to see the distribution of recent purchases (current month, previous month, year-to-date) across about 50 categories. Data will be updated once every day.

I am thinking of putting up a SQL Server 2012 based OLAP cube and letting the website query this cube directly, leveraging features like proactive caching. However, being a developer at heart, I have next to no experience with the Analysis Services parts of SQL Server, so I am quite concerned about the performance of this solution.

Does connecting a website directly to an OLAP cube sound like a feasible solution? Do such systems react to load from many concurrent users roughly the way SQL Server does, making this reasonable, or do they behave completely differently?

I don't expect users to check their status very often, and I will of course be using caching on the web server, etc.

Oracle schema migration to new database with zero downtime

Posted: 13 Jun 2013 01:17 PM PDT

I have a problem: I have two production schemas in one database, each serving a different application.

I need to migrate (move) one schema to an entirely new database that will be dedicated to that application, with zero downtime, so export/import cannot be used.

I have a license even for Active Data Guard. But if I set up Data Guard from the current database to the new one, should I switch over, fail over, or do something else, so that the current primary continues in its primary role and only the new database changes its role from standby to a fully operational primary? Then we simply point the application at the new database and start it. Thanks for your help.

MySQL import LOCK doesn't prevent PRIMARY key issues

Posted: 13 Jun 2013 03:44 PM PDT

I am using Django and only want to back up the MySQL data (not its structure; Django will do that, and a structure dump would otherwise mess up Django-South's migration checks).

Some info:

  • Almost all of my tables are intertwined with each other by foreign key bindings.
  • I currently have over 100 tables that hold all my data.

What I'm trying to achieve is some mechanism with mysqldump to dump my data, like so:

imac2011:Desktop allendar$ /Applications/MAMP/Library/bin/mysqldump -u root -p my_database --no-create-info > my_database_data.sql  

...and restore it like so:

imac2011:Desktop allendar$ /Applications/MAMP/Library/bin/mysql -u root -p my_database < my_database_data.sql  

...but then with the ability to store the following in the dump (and thus restore it that way on import):

When doing mysqldump

  1. Store the PRIMARY key counters

When importing my_database_data.sql

  1. LOCK the tables fully (even ignoring PRIMARY key counter checks)
  2. INSERT all data (incl. primary keys)
  3. Set the PRIMARY key counter back to what was saved
  4. UNLOCK the tables again

Other issue

My first thought was to just ignore the PRIMARY keys somehow and let them be re-incremented on insertion. The problem is that I can't do this because of their foreign key bindings, which would then break.
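Steps 1-4 above can be approximated by wrapping the data dump in statements that disable and then re-enable key checks; in MySQL, inserting explicit primary key values also advances each table's AUTO_INCREMENT counter, which covers the counter restore in typical cases. A minimal sketch that just builds such a wrapped import file (the stand-in dump content is invented; nothing here talks to MySQL):

```python
from pathlib import Path

# Stand-in for the real dump (the question's my_database_data.sql),
# created here so the sketch is self-contained.
Path("my_database_data.sql").write_text("INSERT INTO my_table VALUES (1);\n")

dump = Path("my_database_data.sql").read_text()

# Wrap the data in statements that disable FK/unique checks, so rows
# can be inserted with their original primary keys in any order.
wrapped = (
    "SET FOREIGN_KEY_CHECKS=0;\n"
    "SET UNIQUE_CHECKS=0;\n"
    + dump +
    "SET UNIQUE_CHECKS=1;\n"
    "SET FOREIGN_KEY_CHECKS=1;\n"
)
Path("my_database_import.sql").write_text(wrapped)

# Restore as before:  mysql -u root -p my_database < my_database_import.sql
print(wrapped.splitlines()[0])  # SET FOREIGN_KEY_CHECKS=0;
```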

SQL restore duration

Posted: 13 Jun 2013 12:06 PM PDT

I am trying to determine the duration of a backup restore. I executed a few restore commands to restore some backups located on a network share. Here is the command and a summary:

RESTORE DATABASE [test]
FROM DISK = '\\network\test.bak'
WITH
    MOVE 'test_data' TO 'D:\test_data.mdf',
    MOVE 'test_log' TO 'E:\test_log.ldf',
    CHECKSUM, STATS;

Backup A
1st restore took about 5 min. OK; after the restore completed, I deleted the database.
2nd restore took about 1.8 min. Hmm... let's delete the database and try again.
3rd restore took about 1.5 min. Hmm...

Backup B (almost the same size as backup A, at the same network location)
1st restore took about 1.8 min. I deleted it and tried again.
2nd restore took about 1.5 min. OK...

The service account is enabled for instant file initialization.

My question is: why did the restore duration vary across attempts with the same backup file restored to the same machine? Is it purely network throughput (perhaps someone else was using the network share), or something else, such as caching or SQL Server internals?

SQL Developer: how to script output to query results grid

Posted: 13 Jun 2013 11:55 AM PDT

I am used to MS SQL Server Studio where I can DECLARE and SET variables and use them in my query, and then be presented with the results in a datagrid that I can copy and paste into Excel.

In Oracle SQL Developer, I can get simple query results returned in the 'Query Results' grid, but if I need to script something more complex (with variables), I have to use the 'Run Script' option and my results are output to the 'Script Output' window, where it's much harder to select the results for copying into Excel.

Here is example code, where the variable :v_yr is used three times:

var v_yr number;
exec :v_yr := 2014;

select job.employee_no,
       mas.last_name,
       mas.first_name,
       pay.contract_salary,
       pay.contract_days_actual,
       CASE job.contract_year WHEN :v_yr THEN cal.contract_days_actual END,
       pay.full_year_salary,
       pay.full_year_contract_days,
       CASE job.contract_year WHEN :v_yr THEN sal.annual_rate END
from hrs.employee_jobs job
LEFT OUTER JOIN hrs.employee_master mas on job.employee_no = mas.employee_no
LEFT OUTER JOIN hrs.employee_pays pay on job.employee_no = pay.employee_no AND
    job.job_seq_no = pay.job_seq_no
LEFT OUTER JOIN hrs.schedule_amounts sal on pay.schedule_id = sal.schedule_id AND
    pay.schedule_no = sal.schedule_no AND
    pay.schedule_level = sal.schedule_level AND
    pay.schedule_step = sal.schedule_step
LEFT OUTER JOIN hrs.calendar_master cal on pay.calendar = cal.calendar
where job.contract_year in (2013, :v_yr);

In my Googling so far, it seems that in order to display these results in a grid, I would need to set up this query as some kind of stored procedure and assign the results to a cursor. I don't know enough about Oracle programming to understand that answer yet.

Insert performance with Geography column

Posted: 13 Jun 2013 09:34 AM PDT

I've been tasked with inserting data into a SQL Server table with a geography column. I've found that my insert times (same data, 1.5 million rows) keep going up.

I started out with no geography column and it took 6 minutes, then I added a geography column and it took 20 minutes (again same data). Then I added a spatial index and it took 1 hour and 45 minutes.

I'm new at anything spatial, but this seems like really bad performance. Is there anything I can do to help speed this up or is this just the performance I'm going to see when dealing with SQL Spatial?

I can't get the MySQL56 service to start after I stopped it

Posted: 13 Jun 2013 10:18 AM PDT

I am having an issue on my server. I am running MySQL on Windows Server 2008 R2. I made some changes to the my.ini file, so I had to restart MySQL for the changes to take effect. When I went to Services and tried to start MySQL56, Windows gave me this error:

Windows could not start the MySQL56 service on Local Computer. Error 1067: The process terminated unexpectedly.

I tried rebooting my server, and that did not work. I reverted the changes that I made to my.ini, and that did not work.

What can I do to bring MySQL back up?

Thanks

count(PK) yields different result than left join ... is null

Posted: 13 Jun 2013 08:14 AM PDT

I have really strange behaviour here. The situation: I added rows to my table store_inventory. Before I did this, I made a backup of the table in store_inventory_old.

Now I wanted to make sure everything is okay and queried

SELECT 'old', count(product) FROM store_inventory_old
union all
SELECT 'new', count(product) FROM store_inventory

The difference between the two count(product) values I receive from this query is about 2000, which seemed too low to me.

When I do

select count(new.product) from store_inventory new
left join store_inventory_old old on new.product = old.product
where old.product is null

I get about 6000, which is okay. The column product is the primary key in both tables.

The question is: how is it possible that these two queries deliver different results?
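For what it's worth, the two queries measure different things, which by itself can explain the gap: the count difference equals rows added minus rows missing from the new table, while the anti-join counts only the rows added. A tiny SQLite illustration with invented data (not the asker's actual tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE store_inventory_old (product INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE store_inventory (product INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO store_inventory_old VALUES (?)",
                [(1,), (2,), (3,)])
# new table: product 3 dropped, products 4 and 5 added
cur.executemany("INSERT INTO store_inventory VALUES (?)",
                [(1,), (2,), (4,), (5,)])

old_count = cur.execute(
    "SELECT count(product) FROM store_inventory_old").fetchone()[0]
new_count = cur.execute(
    "SELECT count(product) FROM store_inventory").fetchone()[0]

# The anti-join from the question: rows in new that are not in old
only_in_new = cur.execute("""
    select count(new.product) from store_inventory new
    left join store_inventory_old old on new.product = old.product
    where old.product is null
""").fetchone()[0]

# 1 vs 2: the count difference nets out the row that went missing
print(new_count - old_count, only_in_new)
```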

Thanks in advance.

LDAP in SQL Server 2012

Posted: 13 Jun 2013 04:20 PM PDT

In our current system we query LDAP from SQL Server 2008 R2, but we are moving to SQL Server 2012, where we have hit a snag. According to http://msdn.microsoft.com/en-us/library/cc707782.aspx, one of the discontinued features in SQL Server 2012 is the AD helper service.

Does anyone know of a way to query the AD from SQL Server 2012?

Recommended distributor architecture for transactional replication?

Posted: 13 Jun 2013 09:59 AM PDT

We are planning to create a Transactional replication on 2 nodes of SQL Server 2008 R2, each node will be a publisher and a subscriber.

What is the best practice for setting up this scenario? Microsoft always recommends that the distributor be independent of the publisher and the subscriber, so it seems obvious that a third node should be involved. But since each node will be both publisher and subscriber at the same time, can that same third node be the distributor for both publishers? Should I create two distributors instead, or can it still be done with only two nodes, each acting as distributor as well as publisher?

SQL Server not giving back memory - staying at Max Server Memory setting

Posted: 13 Jun 2013 10:41 AM PDT

I'm the only one working on this server. I followed documentation about optimizing SQL Server memory usage after I saw that SQL Server was using all of the memory available on the server.

One of the steps I took was to cap the max server memory at 20 GB (a test server with 32 GB of memory).

After tuning, yesterday SQL Server was only using 3-4 GB, but today it has again reached the maximum I allocated.

Why is SQL Server keeping all 20 GB? What will happen when users begin to use the server if the max memory is already in use?

Shrink transaction log while transaction in progress

Posted: 13 Jun 2013 03:35 PM PDT

I have a several-GB import into my SQL Server 2005 (Standard Edition) database.

The import is separated into 4 parts, each part in its own transaction.

As you can guess, the transaction log files are quite big. Disk space is also limited but should fit the data; while the import is running, the free space on the disk is about 2 GB.

Does shrinking the files work while my transactions are in progress? Would it have any effect?

How do I remove partitioning?

Posted: 13 Jun 2013 04:28 PM PDT

There is a table my_table in MySQL 5.6 that has never had any partitions created. I'm trying to manage partitions in my_table from a procedure; when I run this code:

alter table my_table remove partitioning  

I'm getting this error:

1505 - Partition management on a not partitioned table is not possible  

But when I run the code above on MySQL 5.5, it runs fine.

If my_table has had partitions created, then the code above runs on both MySQL 5.5 and MySQL 5.6.

I guess that for this code to run, MySQL 5.6 needs some configuration change, but I can't figure out what.

Can you solve it?

Best practice for storing record metadata

Posted: 13 Jun 2013 04:26 PM PDT

What is the best practice for storing metadata about individual records in a database?

I need to store common metadata, such as the creation time and the time of last update, for many tables in my database. I found a few different solutions:

  1. Store the meta data directly in the tables.

    Pros:

    • Meta data is directly linked to records
    • No joins are required to retrieve meta data

    Cons:

    • A lot of duplicate columns are required (unless inheritance is used)
    • Meta data and business data are not separated
  2. Create a general meta data table and use soft foreign keys to link its rows to the correct tables and records.

    Pros:

    • No duplication of columns
    • Meta data is separated from business data

    Cons:

    • No direct links between meta data and data (FK's can't be used)
    • Joins require an additional condition
  3. Create individual meta data tables for each table requiring meta data.

    Pros:

    • Meta data is directly linked to records
    • Meta data is separated from business data

    Cons:

    • A lot of extra tables are required
    • A lot of duplicate columns are required (unless inheritance is used)

Are there more options, pros, or cons than the ones I mentioned here? And what is the best practice for storing this metadata?
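As a concrete illustration of option 1, here is a minimal SQLite sketch with the metadata columns stored directly in the table and a trigger keeping the last-update time current; all table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Option 1: metadata columns live directly in the business table
cur.executescript("""
CREATE TABLE customer (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- Keep updated_at current without application involvement
CREATE TRIGGER customer_touch AFTER UPDATE ON customer
BEGIN
    UPDATE customer SET updated_at = datetime('now') WHERE id = NEW.id;
END;
""")
cur.execute("INSERT INTO customer (name) VALUES ('alice')")
cur.execute("UPDATE customer SET name = 'alicia' WHERE id = 1")
row = cur.execute(
    "SELECT name, created_at <= updated_at FROM customer").fetchone()
print(row)  # ('alicia', 1)
```

The trigger does not recurse because SQLite's recursive_triggers pragma is off by default; other engines would need an equivalent guard.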

how to view content of Sybase procedures?

Posted: 13 Jun 2013 09:42 AM PDT

I would like to view the code of my Sybase stored procedures.

I recognize that show procedures gives me the list of procedures.

Can anyone instruct me on how to get the actual code for the stored procedures?

How can I model this problem in a graph database?

Posted: 13 Jun 2013 11:26 AM PDT

I have a project that I'm working on where I extract data from PDFs and map/visualize the relationships between the extracted pieces.

Here's an example of my problem:

file: 11425646.pdf
  author: bob
  company: abc co
  date: 1/1/2011
  mentioned_users: [alice,sue,mike,sally]
  images: [1958.jpg,535.jpg,35735.jpg]

file: 15421484.pdf
  author: betty
  company: ionga
  date: 2/15/2011
  mentioned_users: [john,alex,george]
  images: [819.jpg,9841.jpg,78.jpg]

file: 11975748.pdf
  author: micah
  company: zoobi
  date: 9/26/2011
  mentioned_users: [alice,chris,joe]
  images: [526.jpg,5835.jpg,355.jpg]

How can I model this in a graph database like Neo4j?

I would like to be able to take one piece of data (like a person's name) and find everything related to it (images, co-mentions, authors, etc.) up to 10 levels deep. Here's what I'm thinking for the structure, but I'm not sure if it's a good approach (this isn't any kind of actual syntax):

[file: 11425646.pdf date:1/1/2011] -written_by-> bob
[file: 11425646.pdf date:1/1/2011] -from_company-> abc co
[file: 11425646.pdf date:1/1/2011] -mentions-> alice
[file: 11425646.pdf date:1/1/2011] -mentions-> sue
[file: 11425646.pdf date:1/1/2011] -mentions-> mike
[file: 11425646.pdf date:1/1/2011] -mentions-> sally
[file: 11425646.pdf date:1/1/2011] -has_image-> 1958.jpg
[file: 11425646.pdf date:1/1/2011] -has_image-> 535.jpg
[file: 11425646.pdf date:1/1/2011] -has_image-> 35735.jpg

Is this the right way to structure this data in a graph database?
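Engine aside, the structure sketched above is an ordinary property graph, and "everything related up to depth 10" is a bounded traversal; in Neo4j's Cypher that would be a variable-length pattern along the lines of MATCH (p)-[*..10]-(related), an assumption worth checking against the Neo4j docs. The shape of the traversal itself can be sketched in plain Python over a subset of the question's edges:

```python
from collections import deque

# Edges from the question's sketch, as (source, relation, target) triples
edges = [
    ("11425646.pdf", "written_by", "bob"),
    ("11425646.pdf", "from_company", "abc co"),
    ("11425646.pdf", "mentions", "alice"),
    ("11425646.pdf", "mentions", "sue"),
    ("11425646.pdf", "has_image", "1958.jpg"),
    ("11975748.pdf", "mentions", "alice"),
    ("11975748.pdf", "written_by", "micah"),
]

# Undirected adjacency map: the traversal follows edges both ways
adj = {}
for src, _, dst in edges:
    adj.setdefault(src, set()).add(dst)
    adj.setdefault(dst, set()).add(src)

def related(start, max_depth=10):
    """Everything reachable from `start` within max_depth hops (BFS)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    seen.discard(start)
    return seen

print(sorted(related("alice", 2)))  # alice's files, plus their co-mentions etc.
```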

pg_dump 9.2.x command does not work with pg_dump 9.2.3

Posted: 13 Jun 2013 01:21 PM PDT

This is how I have always created backups of my databases with PostgreSQL 9.2.x:

pg_dump -Fc -Z 6 -U postgres mydatabase > 2013-xx-xx_xxxxx.db  

However, on a machine running PostgreSQL 9.2.3 I consistently get the error "too many command line arguments", starting with "-Z". If I remove -Z, it complains about "-U". What is wrong?


Okay, there is definitely something wrong. I removed argument after argument until I was running a bare pg_dump. Then I added the arguments back one by one and it worked. So I opened another command-line window and tried the same thing: first "too many arguments", then, after calling a bare pg_dump and adding the arguments back, it worked...

Primary replica set server goes secondary after secondary fails

Posted: 13 Jun 2013 03:27 PM PDT

I have a two-server replica set in which, after the secondary fails, the primary steps down to secondary while the failed node is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I'm getting errors trying to use the collection:

pymongo.errors.OperationFailure: database error: ReplicaSetMonitor no master found for set: rs2  

Sometimes if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary):

Tue May  7 17:43:40.677 [rsHealthPoll] replSet member XXX.XXX.XXX.XXX:27017 is now in state DOWN
Tue May  7 17:43:40.677 [rsMgr] can't see a majority of the set, relinquishing primary
Tue May  7 17:43:40.682 [rsMgr] replSet relinquishing primary state
Tue May  7 17:43:40.682 [rsMgr] replSet SECONDARY
Tue May  7 17:43:40.682 [rsMgr] replSet closing client sockets after relinquishing primary

Is there an easy way to make the primary keep being primary after the secondary fails? Am I doing something wrong?

Thanks in advance!

speeding up a query on MySQL

Posted: 13 Jun 2013 01:27 PM PDT

I have a table with more than 10 million rows and 10 fields (columns). There is an index on field_1, and I am running the following query:

create table t2
select field_1,
       sum(ifnull(field_2,0)) as field_2,
       sum(ifnull(field_3,0)) as field_3,
       sum(ifnull(field_4,0)) as field_4
from t1
group by field_1;

The data type of all the columns is varchar(200).

This query has not been able to produce a result even after running for more than a day. Any suggestions on getting the results more quickly would be helpful.

Newly discovered SQL Server Express has all databases inaccessible/offline

Posted: 13 Jun 2013 07:27 PM PDT

We have several remote sites at my company, and today one of the employees came to me and said their SQL reporting tool had stopped working. Up until this point I did not even know this server, database, or reporting service existed!

I have RDP access to the server and can open SQL Server Management Studio 2008 R2; the databases appear to be on SQL Server Express. Currently I can log in to the instance with my domain admin account, but every database gives me the following message when I try to expand it:

(screenshot omitted)

The webpage they go to for the reports gives them the following error:

•   An error has occurred during report processing. (rsProcessingAborted)
    o   Cannot create a connection to data source 'DNPDataSource'. (rsErrorOpeningConnection)
        -   For more information about this error navigate to the report server on the local server machine, or enable remote errors

When I try to view the SQL Server Error log I get this error:

(screenshot omitted)

I do not have the sa password. I'm not a DBA, but I need to try and figure this one out. Can anyone point me in a direction to start troubleshooting? I'm completely lost.


Here is the ERRORLOG:

2013-03-27 13:14:24.34 Server      Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64)
    Jun 11 2012 16:41:53
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
2013-03-27 13:14:24.34 Server      (c) Microsoft Corporation.
2013-03-27 13:14:24.34 Server      All rights reserved.
2013-03-27 13:14:24.34 Server      Server process ID is 9040.
2013-03-27 13:14:24.34 Server      System Manufacturer: 'Intel Corporation', System Model: 'S5520UR'.
2013-03-27 13:14:24.34 Server      Authentication mode is MIXED.
2013-03-27 13:14:24.34 Server      Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG'.
2013-03-27 13:14:24.34 Server      This instance of SQL Server last reported using a process ID of 2428 at 3/27/2013 1:14:02 PM (local) 3/27/2013 7:14:02 PM (UTC). This is an informational message only; no user action is required.
2013-03-27 13:14:24.34 Server      Registry startup parameters:
    -d c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\master.mdf
    -e c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG
    -l c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\mastlog.ldf
2013-03-27 13:14:24.37 Server      SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2013-03-27 13:14:24.37 Server      Detected 16 CPUs. This is an informational message; no user action is required.
2013-03-27 13:14:24.51 Server      Using dynamic lock allocation. Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node. This is an informational message only. No user action is required.
2013-03-27 13:14:24.51 Server      Lock partitioning is enabled. This is an informational message only. No user action is required.
2013-03-27 13:14:24.56 Server      Node configuration: node 0: CPU mask: 0x00000000000000ff:0 Active CPU mask: 0x00000000000000ff:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2013-03-27 13:14:24.62 spid7s      Starting up database 'master'.
2013-03-27 13:14:24.69 spid7s      2 transactions rolled forward in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.69 spid7s      0 transactions rolled back in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.69 spid7s      Recovery is writing a checkpoint in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.78 spid7s      FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'SQLEXPRESS'.
2013-03-27 13:14:24.83 spid7s      SQL Trace ID 1 was started by login "sa".
2013-03-27 13:14:24.85 spid7s      Starting up database 'mssqlsystemresource'.
2013-03-27 13:14:24.87 spid7s      The resource database build version is 10.50.2500. This is an informational message only. No user action is required.
2013-03-27 13:14:25.09 spid10s     Starting up database 'model'.
2013-03-27 13:14:25.09 spid7s      Server name is 'WCCKEMAPP\SQLEXPRESS'. This is an informational message only. No user action is required.
2013-03-27 13:14:25.21 spid10s     The tail of the log for database model is being rewritten to match the new sector size of 4096 bytes. 2560 bytes at offset 99840 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\modellog.ldf will be written.
2013-03-27 13:14:25.31 spid10s     Clearing tempdb database.
2013-03-27 13:14:25.32 spid13s     A new instance of the full-text filter daemon host process has been successfully started.
2013-03-27 13:14:25.37 spid7s      Starting up database 'msdb'.
2013-03-27 13:14:25.40 Server      A self-generated certificate was successfully loaded for encryption.
2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv6> 54547].
2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv4> 54547].
2013-03-27 13:14:25.40 Server      Server local connection provider is ready to accept connection on [ \\.\pipe\SQLLocal\SQLEXPRESS ].
2013-03-27 13:14:25.40 Server      Server named pipe provider is ready to accept connection on [ \\.\pipe\MSSQL$SQLEXPRESS\sql\query ].
2013-03-27 13:14:25.40 Server      Dedicated administrator connection support was not started because it is disabled on this edition of SQL Server. If you want to use a dedicated administrator connection, restart SQL Server using the trace flag 7806. This is an informational message only. No user action is required.
2013-03-27 13:14:25.56 spid10s     Starting up database 'tempdb'.
2013-03-27 13:14:25.60 spid13s     The Service Broker protocol transport is disabled or not configured.
2013-03-27 13:14:25.60 spid13s     The Database Mirroring protocol transport is disabled or not configured.
2013-03-27 13:14:25.61 spid13s     Service Broker manager has started.
2013-03-27 13:14:25.77 spid7s      The tail of the log for database msdb is being rewritten to match the new sector size of 4096 bytes. 2048 bytes at offset 12007424 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\MSDBLog.ldf will be written.
2013-03-27 13:14:25.84 spid7s      Recovery is complete. This is an informational message only. No user action is required.
2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.
2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: ::1]
2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.
2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: 172.17.0.210]
2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:SQLEXPRESS ] for the SQL Server service.
2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:54547 ] for the SQL Server service.
2013-03-27 13:14:26.48 Server      SQL Server is now ready for client connections. This is an informational message; no user action is required.
2013-03-27 13:14:31.04 spid51      Starting up database 'ReportServer'.
2013-03-27 13:14:31.37 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:31.76 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:32.07 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:32.24 Logon       Error: 18456, Severity: 14, State: 5.
2013-03-27 13:14:32.24 Logon       Login failed for user 'drily'. Reason: Could not find a login matching the name provided. [CLIENT: 172.17.0.210]
2013-03-27 13:15:12.28 spid55      Starting up database 'DNP'.
2013-03-27 13:15:13.75 spid55      Starting up database 'DSS'.
2013-03-27 13:19:36.62 spid57      Starting up database 'ReportServerTempDB'.
2013-03-27 13:25:31.18 spid53      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:25:36.38 spid53      Starting up database 'DSSDL'.
2013-03-27 13:25:38.89 spid53      Starting up database 'DSSUSERDIR'.
2013-03-27 13:25:41.26 spid53      Starting up database 'permissionsAudit'.
2013-03-27 13:25:45.00 spid53      Starting up database 'PMKemmererProduction'.
2013-03-27 13:25:48.05 spid53      Starting up database 'PMKemmererProductionTEST'.
2013-03-27 13:26:01.57 spid54      Attempting to load library 'xpstar.dll' into memory. This is an informational message only. No user action is required.
2013-03-27 13:26:01.58 spid54      Using 'xpstar.dll' version '2009.100.1600' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.
2013-03-27 13:26:52.10 Logon       Error: 18456, Severity: 14, State: 38.
2013-03-27 13:26:52.10 Logon       Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]
2013-03-27 13:26:53.37 spid59      Starting up database 'DSSDL'.
2013-03-27 13:26:53.60 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 13:26:53.92 spid59      Starting up database 'QuietDose'.
2013-03-27 13:26:54.16 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:26:54.36 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 13:26:54.66 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 13:26:54.89 spid59      Starting up database 'STX'.
2013-03-27 13:26:55.57 spid59      Starting up database 'Test'.
2013-03-27 13:26:55.76 spid59      Starting up database 'DSSDL'.
2013-03-27 13:26:55.91 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 13:26:56.08 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:26:56.31 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 13:26:56.52 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 13:26:56.68 spid59      Starting up database 'STX'.
2013-03-27 13:26:57.24 spid59      Starting up database 'DSSDL'.
2013-03-27 13:26:57.28 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 13:26:57.45 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:26:57.55 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 13:26:57.74 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 13:26:57.83 spid59      Starting up database 'STX'.
2013-03-27 13:29:36.55 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 13:39:36.57 spid56      Starting up database 'ReportServerTempDB'.
2013-03-27 13:41:59.55 Logon       Error: 18456, Severity: 14, State: 8.
2013-03-27 13:41:59.55 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]
2013-03-27 13:44:07.70 Logon       Error: 18456, Severity: 14, State: 8.
2013-03-27 13:44:07.70 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]
2013-03-27 13:49:36.57 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 13:59:36.57 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 14:09:36.56 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 14:15:50.50 spid54      Starting up database 'DSSDL'.
2013-03-27 14:15:50.75 spid54      Starting up database 'DSSUSERDIR'.
2013-03-27 14:15:51.92 spid54      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:15:52.25 spid54      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:15:52.51 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 14:15:52.70 spid54      Starting up database 'STX'.
2013-03-27 14:18:02.83 spid51      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:46.58 Logon       Error: 18456, Severity: 14, State: 38.
2013-03-27 14:18:46.58 Logon       Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]
2013-03-27 14:18:47.49 spid59      Starting up database 'DSSDL'.
2013-03-27 14:18:47.70 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 14:18:47.92 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:48.04 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:18:48.33 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:18:48.53 spid59      Starting up database 'STX'.
2013-03-27 14:18:49.12 spid59      Starting up database 'DSSDL'.
2013-03-27 14:18:49.33 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 14:18:49.44 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:49.60 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:18:49.84 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:18:49.98 spid59      Starting up database 'STX'.
2013-03-27 14:18:50.28 spid59      Starting up database 'DSSDL'.
2013-03-27 14:18:50.39 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 14:18:50.48 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:50.53 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:18:50.66 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:18:50.73 spid59      Starting up database 'STX'.
2013-03-27 14:19:36.54 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:19:36.93 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:29:36.55 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 14:39:36.57 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 14:49:36.57 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 14:59:36.58 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 15:09:36.60 spid53      Starting up database 'ReportServerTempDB'.

I can provide older logs too if it will be beneficial. I'm reading those now from here.

Results from the query select name, state_desc, is_in_standby, is_cleanly_shutdown from sys.databases;:

[screenshot: output of the sys.databases query]


Image for Kin's request

[screenshot]

MySQL backup InnoDB

Posted: 13 Jun 2013 12:48 PM PDT

I have a VoIP server running 24x7. At off-peak hours at least 150+ users are connected. The server runs MySQL with the InnoDB engine on the Windows 2008 platform. I would like to take at least two full database backups without shutting down my service.

According to Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always a good choice.

Read here if you are interested.

As I'm not a DBA, I would like to know which would be the best way to take a database backup in my scenario.
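For reference, a minimal logical hot-backup invocation for this scenario might look like the sketch below. The file-naming scheme is an assumption, and the command is only printed (not run) since no live server is assumed here; Zaitsev's caveats about --single-transaction still apply.

```shell
# Dry-run sketch: build and print the backup command line.
# --single-transaction takes a consistent InnoDB snapshot without blocking
# writers; note that it does NOT give a consistent copy of MyISAM tables.
BACKUP_FILE="full_$(date +%F_%H%M).sql"   # hypothetical naming scheme
DUMP_CMD="mysqldump --single-transaction --routines --triggers --all-databases"
echo "$DUMP_CMD > $BACKUP_FILE"           # prints the command that would run
```

For larger or busier instances, Percona XtraBackup is the commonly cited non-blocking physical alternative.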

Thanks,

SQL Server 2008 R2 replication high delivery latency

Posted: 13 Jun 2013 02:27 PM PDT

I am seeing an unusually high delivery latency between our distributor and subscribers, and I do not understand why.

In this configuration we have three SQL Servers using transactional push replication to replicate data from one master server to two reporting servers.

We have 9 publications. The distribution agents for most publications show under 5 ms of latency, but one shows 2000+ ms to both subscribers.

The suspect publication has only 4 small articles (tables) that rarely, if ever, change. I've checked, and each table has a primary key.

I've also checked the @status parameter for each article, per the MS KB article: The distribution agent may experience high latency when you configure transactional replication with articles that are configured not to replicate changes as parameterized statements.

I'm tempted to start dropping articles to find out whether one particular table is the culprit.

Does anyone have any suggestions as to what I can look at?
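Before dropping articles, one low-risk check is the agent history in the distribution database. A sketch, assuming the default distribution database name and the standard MSdistribution_history schema:

```sql
-- Hedged sketch: recent delivery latency per distribution agent.
USE distribution;
SELECT h.agent_id,
       MAX(h.time)             AS last_run,
       AVG(h.delivery_latency) AS avg_latency_ms   -- latency in milliseconds
FROM dbo.MSdistribution_history AS h
GROUP BY h.agent_id
ORDER BY avg_latency_ms DESC;
```

The agent with the outsized average should correspond to the suspect publication, which at least confirms where the time is going before any articles are dropped.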

Strange characters in mysqlbinlog output

Posted: 13 Jun 2013 10:27 AM PDT

Has anyone experienced this? Data replicates fine, but the mysqlbinlog output contains hidden characters that break it when it is replayed as input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

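If the goal is to inspect the binlog rather than replay it, one hedged suggestion is to decode row events instead of emitting them as raw base64 blobs. The binlog file name below is a placeholder, and the command is only printed since no server is assumed:

```shell
# --base64-output=decode-rows renders row events as commented pseudo-SQL;
# --verbose adds the decoded column values. Note this output is for reading,
# not for replaying through the mysql client.
BINLOG_CMD="mysqlbinlog --base64-output=decode-rows --verbose mysql-bin.000001"
echo "$BINLOG_CMD"   # prints the command that would run
```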
Thanks! Julie

Connecting to a SQL Server database from a Flash program

Posted: 13 Jun 2013 11:27 AM PDT

I currently have access to Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database to Adobe Flash Builder, are there any additional steps I must take to make the database live? Or, as long as my computer is running, will this database be accessible from any device that uses it?

In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?

Oracle Express edition on Ubuntu - control file missing

Posted: 13 Jun 2013 04:27 PM PDT

I have installed the Oracle Express edition on Ubuntu as mentioned here.

I am facing issues when I try to create a sample table.

Started oracle

$ sudo service oracle-xe start
Starting Oracle Database 11g Express Edition instance.

Started sqlplus

$ sqlplus / as sysdba  

Executed the CREATE command

SQL> CREATE TABLE SAMPLE (ID NUMBER);
CREATE TABLE SAMPLE (ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

After a series of research on web, I tried to shutdown and restart oracle:

Shutdown command

SQL> shutdown
ORA-01507: database not mounted
ORACLE instance shut down.

Started the oracle instance

SQL> startup
ORACLE instance started.
Total System Global Area  688959488 bytes
Fixed Size                  2229688 bytes
Variable Size             411044424 bytes
Database Buffers          272629760 bytes
Redo Buffers                3055616 bytes
ORA-00205: error in identifying control file, check alert log for more info

I realized that the control file is missing at /u01/app/oracle/oradata/XE. XE is the DB name.

So I tried to create the control file as follows:

SQL> CREATE CONTROLFILE SET DATABASE XE RESETLOGS;
Control file created.

Tried to create the sample table again

SQL> CREATE TABLE SAMPLE(ID NUMBER);
CREATE TABLE SAMPLE(ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

So I tried to issue the following command

SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/product/11.2.0/xe/dbs/dbs1XE.dbf'

What should be done next? I am clueless as I am not a database guy.
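For context, ORA-01194 after opening with a recreated control file usually calls for media recovery. The standard sequence is roughly the sketch below, though whether it succeeds depends entirely on what redo or archived logs are still available on this machine:

```sql
-- Hedged sketch of the usual ORA-01194 follow-up (not verified here):
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- apply the suggested logs, or type CANCEL when none remain, then:
ALTER DATABASE OPEN RESETLOGS;
```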

Note:

Output of

$ lsnrctl services

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 04-JAN-2013 09:15:37

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

Are regular VACUUM ANALYZE still recommended under 9.1?

Posted: 13 Jun 2013 01:01 PM PDT

I'm using PostgreSQL 9.1 on Ubuntu. Is a scheduled VACUUM ANALYZE still recommended, or is autovacuum enough to take care of all needs?

If the answer is "it depends", then:

  • I have a largish database (30 GiB compressed dump size, 200 GiB data directory)
  • I do ETL into the database, importing close to 3 million rows per week
  • The tables with the most frequent changes are all inherited from a master table, with no data in the master table (data is partitioned by week)
  • I create hourly rollups, and from there, daily, weekly and monthly reports

I'm asking because the scheduled VACUUM ANALYZE is impacting my reporting. It runs for more than 5 hours, and I've had to kill it twice this week, because it was impacting regular database imports. check_postgres doesn't report any significant bloat on the database, so that's not really an issue.

From the docs, autovacuum should take care of transaction ID wrap around as well. The question stands: do I still need a VACUUM ANALYZE?
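One way to judge whether autovacuum is keeping up with the partitioned tables, before deciding on the scheduled run, is to check the per-table statistics view (a sketch; works on stock 9.1):

```sql
-- When did (auto)vacuum and (auto)analyze last touch each user table?
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY GREATEST(last_autovacuum, last_autoanalyze) DESC NULLS LAST;
```

Tables with NULL autovacuum timestamps despite heavy churn would suggest autovacuum is not keeping up (or that its thresholds need per-table tuning).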
