How do I use a subquery on the same table in MySQL? Posted: 14 May 2013 05:38 PM PDT I have a query like this which takes a really long time to run; the table is around 4 million rows. DELETE FROM TABLE WHERE value_was IS NULL OR value_was <= value_now; I'm hoping I could create an index on (value_was, value_now) so I could do something like DELETE FROM TABLE WHERE ID1 IN (SELECT ID1 FROM TABLE WHERE value_was IS NULL) OR ID2 IN (SELECT ID2 FROM TABLE WHERE value_was <= value_now); This table doesn't have a primary key; it has two composite keys. And I guess I cannot use the same table in a subquery, so how do I improve the performance of the first query? Thanks very much; any suggestion would be much appreciated. |
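For context, an index on (value_was, value_now) can only help the value_was IS NULL branch; value_was <= value_now compares two columns of the same row, so no index range can serve it. A hedged sketch of the usual workaround (not from the post), which keeps the single-pass DELETE but batches it so each transaction stays small; the batch size is illustrative:

```sql
-- Repeat until the statement affects 0 rows; each pass keeps
-- locks and undo small instead of one multi-million-row delete.
DELETE FROM `TABLE`
WHERE value_was IS NULL
   OR value_was <= value_now
LIMIT 10000;
```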
Oracle UPDATE single record with multiple WHERE conditions Posted: 14 May 2013 05:26 PM PDT I am a SQL Server guy who suddenly finds himself administering an Oracle database. I'm trying to UPDATE values in an Oracle database table (of salary information) that has no row IDs or a primary key. This syntax would work just fine in T-SQL: UPDATE schedule_amounts SET ANNUAL_RATE = 44000 WHERE SCHEDULE_ID = 'LCSD' AND SCHEDULE_NO = 2014 AND SCHEDULE_LEVEL = 100 AND SCHEDULE_STEP = 17 but Oracle doesn't like it. Using multiple WHERE conditions is the only way I can specify the exact row I need to update. I have Googled Oracle's UPDATE syntax but I just can't seem to find an easy-to-understand example of how to do what is, to me, rather simple in T-SQL. |
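For what it's worth, that exact statement is also valid Oracle syntax; multiple AND-ed WHERE conditions behave the same way. A sketch using the same table and columns as above, with the usual cross-platform gotcha spelled out:

```sql
UPDATE schedule_amounts
SET ANNUAL_RATE = 44000
WHERE SCHEDULE_ID    = 'LCSD'
  AND SCHEDULE_NO    = 2014
  AND SCHEDULE_LEVEL = 100
  AND SCHEDULE_STEP  = 17;

COMMIT;  -- unlike typical SQL Server client sessions, Oracle does not
         -- autocommit: the change stays invisible to other sessions
         -- (and locks the row) until you commit
```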
Mysql: Schema/Query Performance Approach for aggregated mailbox folders Posted: 14 May 2013 04:56 PM PDT I am about to code a messaging system where users can write messages to other users. Users can create custom inbox folders for sorting the messages they receive; however, every user has two main inboxes: a) all messages they receive, and b) all messages they receive filtered by the users they are friends with. For all mailboxes the total/unread counts shall be displayed. This leads me to the need for the following tables: - users (id, username,...)
- friends (user_id, friend_id)
- messages (id, sender_id, receiver_id, read_status, mailbox_id, ...)
- mailbox (id, name, owner_id, ...)
My question regarding performance (thinking of a situation where there are many messages and users): Is it better to calculate the total/unread values for each mailbox on each page view (SELECT COUNT(.) AS total, .... GROUP BY mailbox_id, friend_status, read_status), or is it better to store this information in the 'mailbox' table in extra fields like total_msg, total_unread, total_friends, total_friends_unread, and update these values upon each event (new message, read message, new friend, friends break up, etc.)? Would there be a 3rd approach to consider? Thanks for the help! |
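For reference, the compute-on-read variant (option 1) against the tables above would look roughly like this sketch; the read_status semantics and the owner id are assumptions, not from the post:

```sql
-- Per-mailbox totals computed at page-view time.
-- Assumes read_status = 0 means unread and owner_id 42 is the current user.
SELECT m.id AS mailbox_id,
       COUNT(msg.id)                         AS total,
       SUM(COALESCE(msg.read_status = 0, 0)) AS total_unread
FROM mailbox m
LEFT JOIN messages msg ON msg.mailbox_id = m.id
WHERE m.owner_id = 42
GROUP BY m.id;
```

With an index on messages (mailbox_id, read_status) this stays cheap for a long time; the denormalized-counter approach (option 2) only becomes necessary once these aggregates dominate page-view cost.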
Help my schema QI! Parent to Children model questions Posted: 14 May 2013 06:10 PM PDT I am trying to model out a way to track hours on modules. These modules have components that can be changed out. - The modules are manufactured by more than one manufacturer, but all modules have the same types of components. Components can be interchanged with components of a module from a different manufacturer. NOTE: there are different types of modules, and these different types also have their own component types.
For instance, a module made by A has component types X,Y,Z and a module made by B also has component types X,Y,Z - For each component I need to define a maximum allowed hour limit based on the tool location.
If module A is in the US then I need the components for module A to have a life of 20 hours. If module A is sent to Canada I need the components to have a life of 10 hours. What I basically need is a "dictionary table", and I want this to be database driven. My RegionRunLimit table has the following properties: regionFK, componentTypeFK and Hours, which is a decimal. So each region has all component types referenced, with a value set for Hours. Is this optimal? Here's what I have so far: I think I need to remove the many-to-many relationship from manufacturers to module type and place a module type FK in the manufacturer table. I need a good design that maps manufacturers to module types and component types. These should be assigned to the modules and components tables; the goal is for these to be database driven and to remove the definitions from application code. I am first a programmer, but am having to get in touch with my inner data-modeling QI... Can you guys suggest any revisions based on what I have here? |
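As a sketch of the dictionary table described (column names follow the post; the referenced table and key names are assumptions):

```sql
-- One row per (region, component type) pair: the allowed hour limit.
CREATE TABLE RegionRunLimit (
    regionFK        INT          NOT NULL REFERENCES Region(id),        -- assumed parent
    componentTypeFK INT          NOT NULL REFERENCES ComponentType(id), -- assumed parent
    Hours           DECIMAL(9,2) NOT NULL,
    PRIMARY KEY (regionFK, componentTypeFK)  -- enforces exactly one limit per pair
);
```

The composite primary key is the important part: it makes "each region has all component types referenced" a constraint rather than a convention.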
InnoDB Tablespace critical error in great need of a fix Posted: 14 May 2013 03:17 PM PDT |
How can I improve this query? Posted: 14 May 2013 01:25 PM PDT About a year back I introduced a query which returns a sort of "Customers Also Purchased" data set. At the time it ran reasonably fast; however, as of late it has become very slow, sometimes taking 5 seconds or more.

SELECT p.*, COUNT(*) AS total
FROM orders_products AS op
JOIN products AS p ON p.products_id = op.products_id
JOIN (
    SELECT orders_id FROM orders_products WHERE products_id = 100
) AS opf ON opf.orders_id = op.orders_id
WHERE op.products_id <> 100
GROUP BY products_id
ORDER BY total DESC
LIMIT 5;

Output from EXPLAIN EXTENDED:

| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
| 1 | SIMPLE | orders_products | ref | products_id,orders_id | products_id | 4 | const | 4511 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | op | ref | products_id,orders_id | orders_id | 4 | database.orders_products.orders_id | 2 | 100.00 | Using where |
| 1 | SIMPLE | p | eq_ref | PRIMARY | PRIMARY | 4 | database.op.products_id | 1 | 100.00 | |

SHOW CREATE TABLE products:

CREATE TABLE `products` (
  `products_id` int(11) NOT NULL AUTO_INCREMENT,
  `products_model` varchar(128) DEFAULT NULL,
  `products_price` decimal(15,4) NOT NULL DEFAULT '0.0000',
  `products_date_added` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `products_last_modified` datetime DEFAULT NULL,
  `products_status` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`products_id`),
  KEY `idx_products_date_added` (`products_date_added`),
  KEY `products_model` (`products_model`),
  KEY `products_price` (`products_price`),
  KEY `products_status` (`products_status`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

SHOW INDEXES FROM products:

| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
| products | 0 | PRIMARY | 1 | products_id | A | 4356 | NULL | NULL | | BTREE | | |
| products | 1 | idx_products_date_added | 1 | products_date_added | A | 4356 | NULL | NULL | | BTREE | | |
| products | 1 | products_model | 1 | products_model | A | 4356 | NULL | NULL | YES | BTREE | | |
| products | 1 | products_price | 1 | products_price | A | 1089 | NULL | NULL | | BTREE | | |
| products | 1 | products_status | 1 | products_status | A | 4 | NULL | NULL | | BTREE | | |

SHOW CREATE TABLE orders_products:

CREATE TABLE `orders_products` (
  `orders_products_id` int(11) NOT NULL AUTO_INCREMENT,
  `orders_id` int(11) NOT NULL DEFAULT '0',
  `products_id` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`orders_products_id`),
  KEY `products_id` (`products_id`),
  KEY `orders_id` (`orders_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

SHOW INDEXES FROM orders_products:

| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
| orders_products | 0 | PRIMARY | 1 | orders_products_id | A | 3134198 | NULL | NULL | | BTREE | | |
| orders_products | 1 | products_id | 1 | products_id | A | 5014 | NULL | NULL | | BTREE | | |
| orders_products | 1 | orders_id | 1 | orders_id | A | 1567099 | NULL | NULL | | BTREE | | |

Is there something obvious I'm missing here? In most regards the database appears to be well tuned. We're running MariaDB 5.5.30 under the InnoDB/XtraDB storage engine. |
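One commonly suggested change for this access pattern (a sketch based on the schema above, not from the post) is to replace the two single-column indexes on orders_products with composite ones, so both directions of the self-join can be satisfied from the index alone; the subquery can also be flattened into a plain self-join:

```sql
-- Composite, covering indexes: products_id -> orders_id for finding the
-- seed orders, and orders_id -> products_id for expanding each order,
-- both without touching the 3M-row base table.
ALTER TABLE orders_products
    ADD INDEX idx_products_orders (products_id, orders_id),
    ADD INDEX idx_orders_products (orders_id, products_id);

-- Equivalent query written as a straight self-join.
SELECT p.*, COUNT(*) AS total
FROM orders_products AS opf
JOIN orders_products AS op
    ON op.orders_id = opf.orders_id
   AND op.products_id <> 100
JOIN products AS p ON p.products_id = op.products_id
WHERE opf.products_id = 100
GROUP BY op.products_id
ORDER BY total DESC
LIMIT 5;
```

The "Using temporary; Using filesort" in the plan is unavoidable here (the ORDER BY is on an aggregate), but covering indexes shrink the rows fed into it.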
SQL Server 2012 - Missing or lost transaction and no errors to show what went wrong Posted: 14 May 2013 02:01 PM PDT I have 2 missing records in SQL Server and I am trying to identify the problem that caused this issue. New records are inserted using a stored procedure, which will return an error code to the application if something goes wrong. The application, SQL Server, and system logs do not show any errors during the time when these 2 transactions should have been inserted. How do I go about identifying the issue here? My current idea is to find a way to read the transaction log (from a backup restore) to see if the transactions are even in the log and whether they got rolled back. Can anyone recommend anything else I can do to troubleshoot this? Are there DMVs or other (SQL Server hidden) logs? I am worried about this issue and what SQL Server did in this situation. (I have experienced issues recently with timeouts, but since a recent reboot, this problem was resolved....) |
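One hedged option for the transaction-log idea: fn_dblog is an undocumented, unsupported function, so treat this as an exploratory sketch to run against the restored copy only, never production:

```sql
-- Undocumented read of the transaction log; the two NULL arguments
-- mean "from the start LSN to the end LSN", i.e. the whole log.
SELECT [Current LSN],
       Operation,            -- LOP_COMMIT_XACT vs LOP_ABORT_XACT shows rollbacks
       [Transaction ID],
       [Transaction Name],
       [Begin Time]
FROM fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_BEGIN_XACT', 'LOP_COMMIT_XACT', 'LOP_ABORT_XACT');
```

If the inserts never appear as a LOP_BEGIN_XACT at all, the problem is upstream of SQL Server (e.g. the client timed out before the call was made), which fits the timeout history mentioned.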
Run two SQL Agent Jobs Simultaneously (not sequentially) as the next step Posted: 14 May 2013 05:47 PM PDT At the completion of Step 1, the job currently goes to step 2 - an osql command to start a SQL Agent job on another server. However, at the completion of step 1 I would also like to start a concurrent SQL Agent job on the same server as the step 1 SQL Agent job. Right now the jobs are cascaded, but I would really like "Job 2" and "Job 3" to run at the same time. Any ideas on how to make steps non-sequential? |
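For context, msdb.dbo.sp_start_job is asynchronous: it returns as soon as the job is requested, without waiting for the job to finish. So a single step can launch both targets and they run concurrently. A minimal sketch with the hypothetical job name from the post:

```sql
-- In the step that currently runs the osql command, also start the
-- local job; both requests return immediately (fire-and-forget),
-- so "Job 2" (remote) and "Job 3" (local) run at the same time.
EXEC msdb.dbo.sp_start_job @job_name = N'Job 3';
```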
Left Join not giving desired result Posted: 14 May 2013 12:34 PM PDT I have two datasets:

FEES:
CrsCode InstNo FEE
CA1 -2 100
CA1 -1 200
CA1 1 200
CA1 2 300

PAYMENTS:
Regno CRSCODE Instno Payment
R1 CA1 -2 100
R1 CA1 -1 200
R1 CA1 1 200

DESIRED OUTPUT:
REGNO CRSCODE INSTNO FEE PAID DIFF
R1 CA1 -2 100 100 0
R1 CA1 -1 200 200 0
R1 CA1 1 200 200 0
R1 CA1 2 300 0 300

OUTPUT I AM GETTING:
REGNO CRSCODE INSTNO FEE PAID DIFF
R1 CA1 -2 100 100 0
R1 CA1 -1 200 200 0
R1 CA1 1 200 200 0
null null null 300 null null

Query I am using:
select a.Regno, a.crscode, b.fee, a.payment, b.fee-a.payment
from Fees as b
Left join Payments as a on a.CrsCode=b.CrsCode and a.InstNo=b.InstNo

Please find SQLFiddle |
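A hedged sketch of the usual fix (not from the post): the nulls come from the Payments side of the LEFT JOIN, so the unmatched Fees row must take CrsCode/InstNo/Fee from Fees and COALESCE the missing payment; Regno has no value for an unpaid installment unless supplied separately, so it is hard-coded here as an assumption:

```sql
select 'R1' as Regno,                    -- assumed: single-student context
       b.CrsCode,
       b.InstNo,
       b.Fee,
       coalesce(a.Payment, 0)            as Paid,
       b.Fee - coalesce(a.Payment, 0)    as Diff
from Fees as b
left join Payments as a
  on a.CrsCode = b.CrsCode
 and a.InstNo  = b.InstNo;
```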
Using If statements in stored procedure breaks it [migrated] Posted: 14 May 2013 11:16 AM PDT This is my stored procedure:

USE [Hires_new]
GO
/****** Object: StoredProcedure [dbo].[ps_selectNewHireWorkPeriodsSQL] Script Date: 05/14/2013 12:59:28 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:
-- Create date:
-- Description:
-- =============================================
ALTER PROCEDURE [dbo].[ps_selectNewHireWorkPeriodsSQL]
    @HireID int,
    @type int -- Add the parameters for the stored procedure here
AS
If (@type = 1)
    select StartDate, EndDate, date_initiated, date_closed, firmName, Inquiries.InquiryID
    from WorkPeriod, Firms, Inquiries
    where HireID = @HireID
      and WorkPeriod.FirmID = Firms.FirmID
      and WorkPeriod.InquiryID = Inquiries.InquiryID
    order by StartDate DESC
Else If (@type = 2)
    select WorkPeriodID, StartDate, EndDate, firmName
    from WorkPeriod, Firms
    where HireID = @HireID
      and WorkPeriod.FirmID = Firms.FirmID
    order by StartDate DESC

I want to use the IF statement to break up the kinds of select statements, but if I use the IF statement, then the record count of what's returned is -1. But if I remove the IF statement and the second select statement, and only leave the first select statement, then it works fine. Is there something wrong with my stored procedure? Thanks. |
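A hedged sketch of the usual remedy (a guess at the cause, since the client library isn't named): extra "rows affected" DONE tokens from the batch are often what a client reads back as -1, so wrapping each branch in BEGIN...END and adding SET NOCOUNT ON typically fixes it without changing the results:

```sql
ALTER PROCEDURE [dbo].[ps_selectNewHireWorkPeriodsSQL]
    @HireID int,
    @type int
AS
BEGIN
    SET NOCOUNT ON;  -- suppress rows-affected messages that some
                     -- clients misinterpret as additional resultsets
    IF (@type = 1)
    BEGIN
        SELECT StartDate, EndDate, date_initiated, date_closed,
               firmName, Inquiries.InquiryID
        FROM WorkPeriod, Firms, Inquiries
        WHERE HireID = @HireID
          AND WorkPeriod.FirmID = Firms.FirmID
          AND WorkPeriod.InquiryID = Inquiries.InquiryID
        ORDER BY StartDate DESC;
    END
    ELSE IF (@type = 2)
    BEGIN
        SELECT WorkPeriodID, StartDate, EndDate, firmName
        FROM WorkPeriod, Firms
        WHERE HireID = @HireID
          AND WorkPeriod.FirmID = Firms.FirmID
        ORDER BY StartDate DESC;
    END
END
```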
Why do I get this error…"directory lookup for the file .mdf failed with the operating system error 3" from a log shipping configuration Posted: 14 May 2013 10:09 AM PDT I have configured log shipping for all our SharePoint 2010 databases. All worked well since Friday, and today (Monday) I ran this query on the secondary server: SELECT * FROM [msdb].[dbo].[sysjobhistory] WHERE [message] LIKE '%Operating system error%' and got the error below. Basically it's SharePoint's WebAnalyticsServiceApplication_ReportingDB, which creates an extra database file weekly, and it seems this latest copy could not be found. What I'm not sure of is two things. First, why does this database (WebAnalyticsServiceApplication_ReportingDB), when viewed/backed up on the primary server, show as one database, but when copied/restored to another server it shows up with its weekly breakdown? Second, my default SQL installation and data folder is on the H drive, so why is .Net SqlClient Data Provider looking in the C drive for this one newly created SharePoint file? 2013-05-13 11:45:57.91 * Error: Could not apply log backup file 'H:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\WebAnalyticsServiceApplication_ReportingDB##..._20130513061518.trn' to secondary database 'WebAnalyticsServiceApplication_ReportingDB##...'.(Microsoft.SqlServer.Management.LogShipping) 2013-05-13 11:45:57.91 Error: Directory lookup for the file "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\WebAnalyticsServiceApplication_ReportingDB##...Aggregation20130519.ndf" failed with the operating system error 3(The system cannot find the path specified.). File 'WebAnalyticsServiceApplication_ReportingDB##...Aggregation20130519' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\WebAnalyticsServiceApplication_ReportingDB##...Aggregation20130519.ndf'. Use WITH MOVE to identify a valid location for the file. Problems were identified while planning for the RESTORE statement. 
Previous messages provide details. RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider) * Other than this one error, my log shipping works well. Any help? Thanks. |
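The error's own hint ("Use WITH MOVE") is the usual remedy: restore the failing log backup manually, relocating the newly created .ndf onto the H drive. A hedged sketch with placeholder names, since the real names are truncated ("##...") in the log above:

```sql
-- Placeholder database, file and path names: take the real logical
-- file name from RESTORE FILELISTONLY against the .trn backup first.
RESTORE LOG [WebAnalytics_ReportingDB]               -- hypothetical name
FROM DISK = N'H:\Backups\WebAnalytics_20130513.trn'  -- hypothetical path
WITH MOVE N'Aggregation20130519'                     -- logical file name
     TO N'H:\SQLData\Aggregation20130519.ndf',       -- valid secondary path
     NORECOVERY;                                     -- keep the secondary restoring
```

The C-drive path appears because a log restore recreates new data files at the path recorded on the primary unless MOVE overrides it, which also explains why the weekly file "breakdown" only becomes visible on the secondary.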
Adding Roles in SQL Server 2012 failover clustering Posted: 14 May 2013 09:19 AM PDT Is the Application Role required for a SQL Server failover cluster? |
Oracle TNS names configuration Posted: 14 May 2013 02:54 PM PDT I've added a new TNS name to tnsnames.ora. Toad for Oracle started to work at once. SQL*Plus doesn't work even after logoff/logon, and the same problem occurs with my build scripts. I'm getting the error: ORA-12154: TNS:could not resolve the connect identifier specified How can I fix it? |
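A common cause (an assumption, not confirmed in the post) is that SQL*Plus and the build scripts resolve a different tnsnames.ora than Toad, typically because the TNS_ADMIN environment variable points at another directory. A sketch of what both tools would need to see; the alias and host are hypothetical:

```
# Check which directory SQL*Plus uses first:
#   Windows: echo %TNS_ADMIN%       Unix: echo $TNS_ADMIN
# If TNS_ADMIN is set, the new alias must go into that directory's
# tnsnames.ora, not the one under $ORACLE_HOME/network/admin.
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
```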
Reorganize full text catalog: offline or online? Posted: 14 May 2013 12:03 PM PDT I need to schedule Full Text Search maintenance and I was reading this link: http://msdn.microsoft.com/en-us/library/ms176095.aspx. Reading this documentation, I am supposing that I need to rebuild/reorganize the unique index used by the Full Text Search and then reorganize the Full Text Catalog. My only concern here is whether this process of reorganizing the Full Text Catalog happens online or offline. While the catalog reorganize is occurring, will users still be able to query the full text data? I appreciate any help here! Regards, Marcos Freccia |
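For reference, the statement in question is below. My reading of the linked documentation (hedged, so verify under load) is that REORGANIZE performs a "master merge" of the catalog's smaller index fragments in the background, so queries continue to run, unlike a full REBUILD, which drops and re-crawls the catalog:

```sql
-- Merges the catalog's index fragments; full-text queries keep working,
-- though the merge itself can be I/O- and CPU-intensive.
ALTER FULLTEXT CATALOG MyCatalog REORGANIZE;  -- hypothetical catalog name
```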
Unable to use mysql_upgrade Posted: 14 May 2013 10:23 AM PDT I updated MySQL recently and found a problem related to privileges and backups. When I use the mysql_upgrade command it says: mysqlcheck: Got error: 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '.`'` FOR UPGRADE' at line 1 when executing 'CHECK TABLE ... FOR UPGRADE' I searched for this problem and found that it is caused by a delimiter character in a table name, and that renaming the table solves it. But I don't know how to find the tables that are not ready to upgrade. |
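A hedged sketch of one way to find the offending names (an assumption about what "not ready" means here, i.e. names containing characters that break the generated CHECK TABLE statement):

```sql
-- List tables whose names contain anything outside the "safe"
-- identifier characters (letters, digits, $ and _), such as the
-- quote character in the error message above.
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_name REGEXP '[^0-9a-zA-Z$_]';
```

Each match can then be renamed with RENAME TABLE before re-running mysql_upgrade.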
Where and how to store my weird datamart Posted: 14 May 2013 12:19 PM PDT I could really use some help here. Here's my situation. I'm building a datamart in SQL Server 2005, which will furnish a single report (for now). We have 26 clients (medical organizations) that will be using this report. Each client will have between 1 and 2,000 users accessing it at any given time (most likely to average in the 50's, but it needs to scale). There will be 26 versions of the same rdl, one for each client. Each one will access its own respective client database. The interface for said report is SSRS 2012. This is our first 2012 report - the rest are still 2005, and the rdl will access a 2005 database. I am not using OLAP; the SSRS report runs SPs and views. We built a virtual server and installed SQL 2012 to host the report (both these virtual servers live on the same physical machine, in case that matters). Nothing else will be running on the SQL 2012 virtual server. These are the facts about the environment... Our system is not OLTP heavy - with one exception, which I'll describe below, it's all Read (except for the ETL of course). We have one client-facing database per client, 26 altogether. In these we store transactional data, rolled-up data, some report-ready flat tables and a ton of T-SQL code that crunches numbers when clients pull up reports in SSRS. I'll refer to these databases as "operational" because, for me, they will essentially function as ODSs. The operational databases are loaded by a horrific ETL process (clients have varying load schedules - usually monthly or weekly). I'll be building a mini ETL process (hopefully not so horrific) to populate the datamart from those operational databases. All of the datamart's dimensions are conforming, but due to HIPAA restrictions, some of them (like Physician and Patient) can't be stored in a central database, and neither can the fact table. 
So there will need to be 26 versions of the same fact and dimension tables, one for each client. There is a real-time component to our system. Physicians and nurses can enter transactional data via our site and all reports need to reflect the changes immediately. As for the datamart, they'll only affect the fact table. For what it's worth, this is why I decided not to use SSAS. I know diffs process really quickly but it just feels like too many moving parts. I plan to create a trickle-in fact table with a view that combines it with the main fact table. Again, I'll need 26 of these. A pared down version of my new ETL process will need to run upon each user edit. Here are my questions... - Where should I store the 26 sets of datamart tables?
- On a dedicated 2005 server, away from the SSRS server and the operational databases?
- On the same server as the operational databases but in dedicated dds databases?
- Within the operational databases themselves?
- On the SQL 2012 reporting server?
- On the moon? Other?
- Where should I store the 26 trickle-in fact tables?
- In the same database as the main fact tables?
- On the same server as the operational databases but in dedicated DDS databases?
- Within the operational databases themselves? This seems logical to me given that they'll need to be unioned at runtime...
- Should I create a central database for non-sensitive dimensions?
- Maybe create a cloning process to copy them to the individual DDS's?
- Or is it simpler to just have 26 of these darned things as well?
With all of these questions, I'm concerned about good design practice, but mostly about the performance of the report and the performance of the ETL that will need to run upon user edit. I hope all of this made sense. I would very much appreciate any feedback! |
Primary replica set server goes secondary after secondary fails Posted: 14 May 2013 10:45 AM PDT I have a two-server replica set where, after the secondary fails, the primary goes into secondary mode while the secondary is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I'm getting errors trying to use the collection: pymongo.errors.OperationFailure: database error: ReplicaSetMonitor no master found for set: rs2 Sometimes if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary): Tue May 7 17:43:40.677 [rsHealthPoll] replSet member XXX.XXX.XXX.XXX:27017 is now in state DOWN Tue May 7 17:43:40.677 [rsMgr] can't see a majority of the set, relinquishing primary Tue May 7 17:43:40.682 [rsMgr] replSet relinquishing primary state Tue May 7 17:43:40.682 [rsMgr] replSet SECONDARY Tue May 7 17:43:40.682 [rsMgr] replSet closing client sockets after relinquishing primary Is there an easy way to make the primary keep being primary after the secondary fails? Am I doing something wrong? Thanks in advance! |
Choosing a database for storing large amounts of data and allowing for thousands of writes Posted: 14 May 2013 06:53 PM PDT I am working on a project that has some serious requirements regarding its database, and I'm having a hard time choosing the best solution. The details are: - Database will easily reach tens of terabytes of data
- Schema is simple in nature. In relational database terms it would consist of 3 tables, each with millions and billions of rows. Each table has a few columns only. Records are like 10K in size each
- Allow for tens of thousands of writes per second. I don't have good estimate for the reads but they should be less. Possibly will move to bulk inserts (write clients will insert at average 10 new rows per second so I can combine those and have 5000 bulk inserts per second)
- Obviously it will need to scale horizontally easily. These numbers are just a starting point and I expect them to multiply over the years. Hopefully :)
- SQL solution is not required
- Read queries performed will be over ranges of data. Geospatial support would be nice though not needed. Secondary keys also would be cool
- Should be easy to access via a C/C++ application. I am still considering Java as our choice of platform to build the servers that will talk to the database, but probably will end up with C/C++ (for reasons I am not going to put up here)
- CAP related - obviously we would like to have it all and it won't happen. Availability wise we will be fine with some read/write delays (in terms of seconds). Eventually consistent is OK as long as it doesn't take ages for the database to become consistent. Partition tolerance should be enough to cover the numbers listed earlier. So I can't really put some serious accent on which of those three is most important for us. Only data loss is not acceptable :)
- Cross-data centre replication would be nice, although this is not planned in the scope of the project.
- Updates/Deletes of the data are minimal. Just insert and read.
- There may be some Map/Reduce queries to the data but most likely they will not be executed very often and their results will be cached. The very least the heavy queries can be performed on a replica of the database so writes can continue while the heavy analysis is performed.
- Schema can be easily fixed, flexibility is not necessary although if we can have it without sacrificing something else - why not? In fact a key-value store with 3-4 buckets is an option.
Basically the design of the project resembles a one-way VoIP application with constant recording of the data. A set of clients constantly push data in the database (several thousands of clients) at 10Hz rate and a similar number of clients constantly read the data (at 1Hz rate). Applying Pub/Sub solution would take off some of the database load so we are considering that as part of the solution. Any suggestions on that are also welcome. This also means the working set is much smaller. Real-time data is being passed between the clients and the recorded data is accessed far less frequently. I do not have experience with NoSQL solutions but probably it is the way to go in that case. So far I have few names in mind: - MongoDB (so far seems to match my requirements)
- CouchBase (though I have yet to find the documentation how to use it from C)
- MySQL with NDB (Probably will fail in time - I haven't found someone using it at such scale but I am an old MySQL user and have quite happily lived with it for ages)
- Cassandra (I have some hard time getting my head around their data models so far but I will figure it out)
Does anyone have any suggestions based on experience with data warehouses at that scale? Which database solution would be the best in the provided scenario? Thanks in advance to anyone who posts an answer :) Edit: I missed something and one of the answers reminded me to add it - we are running *nix boxes only (blade servers). Edit 2: Just making sure it is clear - I am not looking for in-memory database. I think it will give us hard time protecting against data loss. And I don't really need to have all the data available all the time - about 99% of the data is archive which will be used on rare occasions. The working set may be as low as few dozens of gigabytes. |
speeding up a query on MySQL Posted: 14 May 2013 01:07 PM PDT I have a table with more than 10 million rows and 10 fields (columns). There is an index on field_1 and I am running the following query: create table t2 select field_1, sum(ifnull(field_2,0)) as field_2, sum(ifnull(field_3,0)) as field_3, sum(ifnull(field_4,0)) as field_4 from t1 group by field_1; The data type of all the columns is varchar(200). This query is not able to produce a result even after running for more than a day. Any suggestions on getting the results quickly would be helpful. |
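One commonly suggested approach (an assumption about the workload, not from the post) is a covering index so the GROUP BY can stream the index in order instead of sorting 10M rows, together with numeric types for the summed columns:

```sql
-- Covering index: GROUP BY field_1 reads the index in order and the
-- SUMs are computed from the same index entries, never touching the
-- base rows.
ALTER TABLE t1 ADD INDEX idx_t1_cover (field_1, field_2, field_3, field_4);

-- Since every column is varchar(200), each SUM also implies a
-- string-to-number conversion per row; storing field_2..field_4 as
-- DECIMAL (sketch below) cuts both index size and CPU.
ALTER TABLE t1
    MODIFY field_2 DECIMAL(15,4) NOT NULL DEFAULT 0,
    MODIFY field_3 DECIMAL(15,4) NOT NULL DEFAULT 0,
    MODIFY field_4 DECIMAL(15,4) NOT NULL DEFAULT 0;
```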
Newly discovered SQL Server Express has all databases inaccessible/offline Posted: 14 May 2013 07:06 PM PDT We have several remote sites at my company and today one of the employees came to me and said their SQL reporting tool stopped working. Up until this point I did not even know this server or database or reporting service existed! I have RDP access to the server and can open SQL Server Management Studio 2008 R2, and the databases appear to be SQL Server Express. Currently I can log in to the instance with my domain admin account but every database gives me the following message when I try to expand it: The webpage they go to for the reports gives them the following error:
- An error has occurred during report processing. (rsProcessingAborted)
  - Cannot create a connection to data source 'DNPDataSource'. (rsErrorOpeningConnection)
    - For more information about this error navigate to the report server on the local server machine, or enable remote errors
When I try to view the SQL Server Error log I get this error: I do not have the sa password. I'm not a DBA but need to try and figure this one out; can anyone point me in a direction to start troubleshooting this? I'm completely lost. Here is the ERROR.LOG:
2013-03-27 13:14:24.34 Server Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64) Jun 11 2012 16:41:53 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
2013-03-27 13:14:24.34 Server (c) Microsoft Corporation.
2013-03-27 13:14:24.34 Server All rights reserved.
2013-03-27 13:14:24.34 Server Server process ID is 9040.
2013-03-27 13:14:24.34 Server System Manufacturer: 'Intel Corporation', System Model: 'S5520UR'.
2013-03-27 13:14:24.34 Server Authentication mode is MIXED.
2013-03-27 13:14:24.34 Server Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG'. 
2013-03-27 13:14:24.34 Server This instance of SQL Server last reported using a process ID of 2428 at 3/27/2013 1:14:02 PM (local) 3/27/2013 7:14:02 PM (UTC). This is an informational message only; no user action is required. 2013-03-27 13:14:24.34 Server Registry startup parameters: -d c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\master.mdf -e c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG -l c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\mastlog.ldf 2013-03-27 13:14:24.37 Server SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required. 2013-03-27 13:14:24.37 Server Detected 16 CPUs. This is an informational message; no user action is required. 2013-03-27 13:14:24.51 Server Using dynamic lock allocation. Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node. This is an informational message only. No user action is required. 2013-03-27 13:14:24.51 Server Lock partitioning is enabled. This is an informational message only. No user action is required. 2013-03-27 13:14:24.56 Server Node configuration: node 0: CPU mask: 0x00000000000000ff:0 Active CPU mask: 0x00000000000000ff:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required. 2013-03-27 13:14:24.62 spid7s Starting up database 'master'. 2013-03-27 13:14:24.69 spid7s 2 transactions rolled forward in database 'master' (1). This is an informational message only. No user action is required. 2013-03-27 13:14:24.69 spid7s 0 transactions rolled back in database 'master' (1). This is an informational message only. No user action is required. 2013-03-27 13:14:24.69 spid7s Recovery is writing a checkpoint in database 'master' (1). This is an informational message only. No user action is required. 
2013-03-27 13:14:24.78 spid7s FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'SQLEXPRESS'. 2013-03-27 13:14:24.83 spid7s SQL Trace ID 1 was started by login "sa". 2013-03-27 13:14:24.85 spid7s Starting up database 'mssqlsystemresource'. 2013-03-27 13:14:24.87 spid7s The resource database build version is 10.50.2500. This is an informational message only. No user action is required. 2013-03-27 13:14:25.09 spid10s Starting up database 'model'. 2013-03-27 13:14:25.09 spid7s Server name is 'WCCKEMAPP\SQLEXPRESS'. This is an informational message only. No user action is required. 2013-03-27 13:14:25.21 spid10s The tail of the log for database model is being rewritten to match the new sector size of 4096 bytes. 2560 bytes at offset 99840 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\modellog.ldf will be written. 2013-03-27 13:14:25.31 spid10s Clearing tempdb database. 2013-03-27 13:14:25.32 spid13s A new instance of the full-text filter daemon host process has been successfully started. 2013-03-27 13:14:25.37 spid7s Starting up database 'msdb'. 2013-03-27 13:14:25.40 Server A self-generated certificate was successfully loaded for encryption. 2013-03-27 13:14:25.40 Server Server is listening on [ 'any' <ipv6> 54547]. 2013-03-27 13:14:25.40 Server Server is listening on [ 'any' <ipv4> 54547]. 2013-03-27 13:14:25.40 Server Server local connection provider is ready to accept connection on [ \\.\pipe\SQLLocal\SQLEXPRESS ]. 2013-03-27 13:14:25.40 Server Server named pipe provider is ready to accept connection on [ \\.\pipe\MSSQL$SQLEXPRESS\sql\query ]. 2013-03-27 13:14:25.40 Server Dedicated administrator connection support was not started because it is disabled on this edition of SQL Server. If you want to use a dedicated administrator connection, restart SQL Server using the trace flag 7806. This is an informational message only. No user action is required. 
2013-03-27 13:14:25.56 spid10s Starting up database 'tempdb'. 2013-03-27 13:14:25.60 spid13s The Service Broker protocol transport is disabled or not configured. 2013-03-27 13:14:25.60 spid13s The Database Mirroring protocol transport is disabled or not configured. 2013-03-27 13:14:25.61 spid13s Service Broker manager has started. 2013-03-27 13:14:25.77 spid7s The tail of the log for database msdb is being rewritten to match the new sector size of 4096 bytes. 2048 bytes at offset 12007424 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\MSDBLog.ldf will be written. 2013-03-27 13:14:25.84 spid7s Recovery is complete. This is an informational message only. No user action is required. 2013-03-27 13:14:25.90 Logon Error: 17187, Severity: 16, State: 1. 2013-03-27 13:14:25.90 Logon SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: ::1] 2013-03-27 13:14:25.90 Logon Error: 17187, Severity: 16, State: 1. 2013-03-27 13:14:25.90 Logon SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: 172.17.0.210] 2013-03-27 13:14:26.48 Server The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:SQLEXPRESS ] for the SQL Server service. 2013-03-27 13:14:26.48 Server The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:54547 ] for the SQL Server service. 2013-03-27 13:14:26.48 Server SQL Server is now ready for client connections. This is an informational message; no user action is required. 
2013-03-27 13:14:31.04 spid51 Starting up database 'ReportServer'. 2013-03-27 13:14:31.37 spid51 Starting up database 'ReportServerTempDB'. 2013-03-27 13:14:31.76 spid51 Starting up database 'ReportServerTempDB'. 2013-03-27 13:14:32.07 spid51 Starting up database 'ReportServerTempDB'. 2013-03-27 13:14:32.24 Logon Error: 18456, Severity: 14, State: 5. 2013-03-27 13:14:32.24 Logon Login failed for user 'drily'. Reason: Could not find a login matching the name provided. [CLIENT: 172.17.0.210] 2013-03-27 13:15:12.28 spid55 Starting up database 'DNP'. 2013-03-27 13:15:13.75 spid55 Starting up database 'DSS'. 2013-03-27 13:19:36.62 spid57 Starting up database 'ReportServerTempDB'. 2013-03-27 13:25:31.18 spid53 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 13:25:36.38 spid53 Starting up database 'DSSDL'. 2013-03-27 13:25:38.89 spid53 Starting up database 'DSSUSERDIR'. 2013-03-27 13:25:41.26 spid53 Starting up database 'permissionsAudit'. 2013-03-27 13:25:45.00 spid53 Starting up database 'PMKemmererProduction'. 2013-03-27 13:25:48.05 spid53 Starting up database 'PMKemmererProductionTEST'. 2013-03-27 13:26:01.57 spid54 Attempting to load library 'xpstar.dll' into memory. This is an informational message only. No user action is required. 2013-03-27 13:26:01.58 spid54 Using 'xpstar.dll' version '2009.100.1600' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required. 2013-03-27 13:26:52.10 Logon Error: 18456, Severity: 14, State: 38. 2013-03-27 13:26:52.10 Logon Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>] 2013-03-27 13:26:53.37 spid59 Starting up database 'DSSDL'. 2013-03-27 13:26:53.60 spid59 Starting up database 'DSSUSERDIR'. 2013-03-27 13:26:53.92 spid59 Starting up database 'QuietDose'. 2013-03-27 13:26:54.16 spid59 Starting up database 'ReportServer$SQLExpress'. 
2013-03-27 13:26:54.36 spid59 Starting up database 'ReportServer$SQLEXPRESSTempDB'. 2013-03-27 13:26:54.66 spid59 Starting up database 'ReportServerTempDB'. 2013-03-27 13:26:54.89 spid59 Starting up database 'STX'. 2013-03-27 13:26:55.57 spid59 Starting up database 'Test'. 2013-03-27 13:26:55.76 spid59 Starting up database 'DSSDL'. 2013-03-27 13:26:55.91 spid59 Starting up database 'DSSUSERDIR'. 2013-03-27 13:26:56.08 spid59 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 13:26:56.31 spid59 Starting up database 'ReportServer$SQLEXPRESSTempDB'. 2013-03-27 13:26:56.52 spid59 Starting up database 'ReportServerTempDB'. 2013-03-27 13:26:56.68 spid59 Starting up database 'STX'. 2013-03-27 13:26:57.24 spid59 Starting up database 'DSSDL'. 2013-03-27 13:26:57.28 spid59 Starting up database 'DSSUSERDIR'. 2013-03-27 13:26:57.45 spid59 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 13:26:57.55 spid59 Starting up database 'ReportServer$SQLEXPRESSTempDB'. 2013-03-27 13:26:57.74 spid59 Starting up database 'ReportServerTempDB'. 2013-03-27 13:26:57.83 spid59 Starting up database 'STX'. 2013-03-27 13:29:36.55 spid54 Starting up database 'ReportServerTempDB'. 2013-03-27 13:39:36.57 spid56 Starting up database 'ReportServerTempDB'. 2013-03-27 13:41:59.55 Logon Error: 18456, Severity: 14, State: 8. 2013-03-27 13:41:59.55 Logon Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>] 2013-03-27 13:44:07.70 Logon Error: 18456, Severity: 14, State: 8. 2013-03-27 13:44:07.70 Logon Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>] 2013-03-27 13:49:36.57 spid53 Starting up database 'ReportServerTempDB'. 2013-03-27 13:59:36.57 spid54 Starting up database 'ReportServerTempDB'. 2013-03-27 14:09:36.56 spid53 Starting up database 'ReportServerTempDB'. 2013-03-27 14:15:50.50 spid54 Starting up database 'DSSDL'. 
2013-03-27 14:15:50.75 spid54 Starting up database 'DSSUSERDIR'. 2013-03-27 14:15:51.92 spid54 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 14:15:52.25 spid54 Starting up database 'ReportServer$SQLEXPRESSTempDB'. 2013-03-27 14:15:52.51 spid54 Starting up database 'ReportServerTempDB'. 2013-03-27 14:15:52.70 spid54 Starting up database 'STX'. 2013-03-27 14:18:02.83 spid51 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 14:18:46.58 Logon Error: 18456, Severity: 14, State: 38. 2013-03-27 14:18:46.58 Logon Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>] 2013-03-27 14:18:47.49 spid59 Starting up database 'DSSDL'. 2013-03-27 14:18:47.70 spid59 Starting up database 'DSSUSERDIR'. 2013-03-27 14:18:47.92 spid59 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 14:18:48.04 spid59 Starting up database 'ReportServer$SQLEXPRESSTempDB'. 2013-03-27 14:18:48.33 spid59 Starting up database 'ReportServerTempDB'. 2013-03-27 14:18:48.53 spid59 Starting up database 'STX'. 2013-03-27 14:18:49.12 spid59 Starting up database 'DSSDL'. 2013-03-27 14:18:49.33 spid59 Starting up database 'DSSUSERDIR'. 2013-03-27 14:18:49.44 spid59 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 14:18:49.60 spid59 Starting up database 'ReportServer$SQLEXPRESSTempDB'. 2013-03-27 14:18:49.84 spid59 Starting up database 'ReportServerTempDB'. 2013-03-27 14:18:49.98 spid59 Starting up database 'STX'. 2013-03-27 14:18:50.28 spid59 Starting up database 'DSSDL'. 2013-03-27 14:18:50.39 spid59 Starting up database 'DSSUSERDIR'. 2013-03-27 14:18:50.48 spid59 Starting up database 'ReportServer$SQLExpress'. 2013-03-27 14:18:50.53 spid59 Starting up database 'ReportServer$SQLEXPRESSTempDB'. 2013-03-27 14:18:50.66 spid59 Starting up database 'ReportServerTempDB'. 2013-03-27 14:18:50.73 spid59 Starting up database 'STX'. 2013-03-27 14:19:36.54 spid59 Starting up database 'ReportServerTempDB'. 
2013-03-27 14:19:36.93 spid59 Starting up database 'ReportServerTempDB'. 2013-03-27 14:29:36.55 spid53 Starting up database 'ReportServerTempDB'. 2013-03-27 14:39:36.57 spid54 Starting up database 'ReportServerTempDB'. 2013-03-27 14:49:36.57 spid53 Starting up database 'ReportServerTempDB'. 2013-03-27 14:59:36.58 spid54 Starting up database 'ReportServerTempDB'. 2013-03-27 15:09:36.60 spid53 Starting up database 'ReportServerTempDB'. I can provide older logs too if that would be helpful. I'm reading those now from here. Results from the query select name, state_desc, is_in_standby, is_cleanly_shutdown from sys.databases; were attached as a screenshot at Kin's request. |
MySQL backup InnoDB Posted: 14 May 2013 12:19 PM PDT I have a VoIP server running 24x7. Even at off-peak hours at least 150+ users are connected. My server runs MySQL with the InnoDB engine on the Windows Server 2008 platform. I'd like to take a full database backup at least twice a day without shutting down my service. According to Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good (read here if you are interested). As I'm not a DBA, I'd like to know which would be the best solution for taking a database backup in my scenario. Thanks, |
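For context on the options here: mysqldump --single-transaction takes a consistent logical snapshot of InnoDB tables without blocking writers (Zaitsev's criticism concerns restore speed and concurrent DDL, not correctness), while Percona XtraBackup takes a physical hot copy but is Linux-only, so on Windows the realistic choices are mysqldump or MySQL Enterprise Backup. A sketch, with placeholder credentials and paths:

```
REM Logical hot backup on Windows: one consistent InnoDB snapshot, no table locks.
REM --single-transaction only guarantees consistency for InnoDB tables;
REM MyISAM tables would still need locking.
mysqldump --single-transaction --routines --triggers --all-databases ^
    -u backup_user -p > C:\backups\full_backup.sql
```

Scheduling this twice daily via Windows Task Scheduler is the usual approach; for a 4+ GB dataset, restore time of the logical dump is the main thing to measure before relying on it.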
SQL Server 2008 R2 replication high delivery latency Posted: 14 May 2013 02:07 PM PDT I am seeing an unusually high delivery latency between our distributor and subscribers and I do not understand why. In this configuration we have 3 SQL Servers using transactional push replication to replicate data from one master server to two reporting servers. We have 9 publications. The distribution agents for most publications are showing under 5ms, but one is showing 2000+ms to both subscribers. The suspect publication has only 4 small articles (tables) that rarely, if ever, change. I've checked and each table has a primary key. I've also checked the @status parameter for each article according to the MS KB: The distribution agent may experience high latency when you configure transactional replication with articles that are configured not to replicate changes as parameterized statements I'm tempted to start dropping articles to find out if one particular table is the culprit. Does anyone have any suggestions as to what I can look at? |
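For reference, the parameterized-statement setting from that KB article can be inspected and changed per article with the standard replication procedures, run at the publisher on the publication database. A sketch — publication and article names are placeholders:

```
-- Inspect each article's delivery settings; in sysarticles, status bit 16
-- indicates the article uses parameterized statements (per the KB's guidance).
EXEC sp_helparticle @publication = N'MyPublication';

-- Switch a suspect article to parameterized-statement delivery.
EXEC sp_changearticle
    @publication = N'MyPublication',
    @article     = N'MyArticle',
    @property    = N'status',
    @value       = N'parameters';
```

Checking this first is cheaper than dropping articles one by one, and sp_changearticle leaves the rest of the publication intact.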
Strange characters in mysqlbinlog output Posted: 14 May 2013 10:07 AM PDT Has anyone experienced this? Data replicates fine, but when it is output with mysqlbinlog there are hidden characters that break the input. - mysqlbinlog Ver 3.3 for Linux at x86_64
- mysql 5.5.28 server
Thanks! Julie |
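One possibility worth ruling out: if any events are in ROW format, mysqlbinlog emits them as base64-encoded binary by design, which looks like garbage and is not meant to be edited. Decoding them for inspection, or pinning the character set for replay, often resolves this. A sketch, with a placeholder binlog file name:

```
# Human-readable inspection of row events (annotated pseudo-SQL; NOT replayable)
mysqlbinlog --base64-output=decode-rows --verbose mysql-bin.000042 > decoded.txt

# For replaying into a server, keep the default output but fix the connection
# character set explicitly so multi-byte data is not mangled in transit
mysqlbinlog --set-charset=utf8 mysql-bin.000042 | mysql -u root -p
```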
Connecting to a SQL Server database from a Flash program Posted: 14 May 2013 11:07 AM PDT I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7. If I link my database with Adobe Flash Builder, are there any additional steps I must take to make the database live, or, as long as my computer is running, will this database be accessible from any device that utilizes it? In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it? |
Oracle Express edition on Ubuntu - control file missing Posted: 14 May 2013 03:07 PM PDT I have installed the Oracle Express edition on Ubuntu as mentioned here. I am facing issues when I try to create a sample table. Started oracle $ sudo service oracle-xe start Starting Oracle Database 11g Express Edition instance. Started sqlplus $ sqlplus / as sysdba Executed the CREATE command SQL> CREATE TABLE SAMPLE (ID NUMBER); CREATE TABLE SAMPLE (ID NUMBER) ERROR at line 1: ORA-01109: database not open After some research on the web, I tried to shut down and restart Oracle: Shutdown command SQL> shutdown ORA-01507: database not mounted ORACLE instance shut down. Started the oracle instance SQL> startup ORACLE instance started. Total System Global Area 688959488 bytes Fixed Size 2229688 bytes Variable Size 411044424 bytes Database Buffers 272629760 bytes Redo Buffers 3055616 bytes ORA-00205: error in identifying control file, check alert log for more info I realized that the control file is missing at /u01/app/oracle/oradata/XE (XE is the DB name), so I tried to create the control file as follows: SQL> CREATE CONTROLFILE SET DATABASE XE RESETLOGS; Control file created. Tried to create the sample table again SQL> CREATE TABLE SAMPLE(ID NUMBER); CREATE TABLE SAMPLE(ID NUMBER) ERROR at line 1: ORA-01109: database not open So I tried to issue the following command SQL> ALTER DATABASE OPEN RESETLOGS; ALTER DATABASE OPEN RESETLOGS ERROR at line 1: ORA-01194: file 1 needs more recovery to be consistent ORA-01110: data file 1: '/u01/app/oracle/product/11.2.0/xe/dbs/dbs1XE.dbf' What should be done next? I am clueless as I am not a database guy. Note: Output of $ lsnrctl services LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 04-JAN-2013 09:15:37 Copyright (c) 1991, 2011, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE))) Services Summary... Service "PLSExtProc" has 1 instance(s). 
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service... Handler(s): "DEDICATED" established:0 refused:0 LOCAL SERVER Service "XE" has 1 instance(s). Instance "XE", status READY, has 1 handler(s) for this service... Handler(s): "DEDICATED" established:0 refused:0 state:ready LOCAL SERVER The command completed successfully |
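Background on the ORA-01194 above: after a control file is recreated with RESETLOGS, Oracle normally requires media recovery before the database can be opened. The typical SQL*Plus sequence is sketched below; be aware that it can still fail if the redo needed to make datafile 1 consistent no longer exists, in which case restoring from a backup is the remaining option:

```
-- Apply redo against the recreated (backup) control file;
-- accept each suggested log, or type CANCEL when none remain
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

-- Only after recovery completes (or is cancelled cleanly):
SQL> ALTER DATABASE OPEN RESETLOGS;
```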
Still wrong to start the name of a user stored procedure with sp_? Posted: 14 May 2013 09:39 AM PDT One of my co-workers named a stored procedure in our SQL Server 2008 R2 database 'sp_something'. When I saw this, I immediately thought: "That is WRONG!" and started searching my bookmarks for this online article that explains why it is wrong, so I could provide my co-worker with an explanation. In the article (by Brian Moran) it is explained that giving the stored procedure an sp_ prefix makes SQL Server look in the master database for a compiled plan. Because the sp_ sproc doesn't reside there, SQL Server will recompile the procedure (and needs an exclusive compile lock for that, causing performance problems). The following example is given in the article to show the difference between two procedures: USE tempdb; GO CREATE PROCEDURE dbo.Select1 AS SELECT 1; GO CREATE PROCEDURE dbo.sp_Select1 AS SELECT 1; GO EXEC dbo.sp_Select1; GO EXEC dbo.Select1; GO You run this, then open the Profiler (add the Stored Procedures -> SP:CacheMiss event) and run the stored procedures again. You're supposed to see a difference between the two sprocs: the sp_Select1 sproc will generate one more SP:CacheMiss event than the Select1 sproc. (The article references SQL Server 7.0 and SQL Server 2000.) When I run the example in my SQL Server 2008 R2 environment, I get the same number of SP:CacheMiss events for both procedures (both in tempdb and in another test database). So I am wondering: - Could I have done something wrong in my execution of the example?
- Is the 'do not name a user sproc sp_something' adage still valid in newer versions of SQL Server? - If so, is there a good example that shows its validity in SQL Server 2008 R2?
Thanks a lot for your thoughts on this! EDIT I found Creating Stored Procedures (Database Engine) on msdn for SQL Server 2008 R2, which answers my second question: We recommend that you do not create any stored procedures using sp_ as a prefix. SQL Server uses the sp_ prefix to designate system stored procedures. The name you choose may conflict with some future system procedure. [...] Nothing is mentioned there about performance problems caused by using the sp_ prefix though. I'd love to know if that's still the case or if they fixed it after SQL Server 2000. |
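Whatever the measurable cache effect in 2008 R2 turns out to be, the documented behavior is that an sp_-prefixed name is still resolved against master first, and the name-collision risk quoted above remains; the conventional fix is simply a different prefix plus schema-qualified calls. A minimal sketch:

```
-- Avoid the reserved sp_ prefix; schema-qualify both the definition and the call
CREATE PROCEDURE dbo.usp_Select1
AS
    SELECT 1;
GO

-- Schema-qualified execution: no ambiguity with system procedures in master
EXEC dbo.usp_Select1;
```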