Monday, July 29, 2013

[SQL 2012] SQL Server Multiplexing and Device CAL

SQL Server Multiplexing and Device CAL

Posted: 29 Jul 2013 03:19 AM PDT

So, I am working on determining how many device CALs we need for our system, and we are having a bit of a problem working it out. Hoping to get some guidance here on 2 different areas.

[b]Overview of system:[/b] Devices in people's homes transmit via RF to a Gateway (hardware), which takes all of those transmissions and sends them to a central server, where a Windows service receives them and inserts the data into the SQL database. In cases where the device in a person's home cannot reach the Gateway, we have handheld devices that get loaded with information to go out and obtain it. Typically this is loaded manually, but we do have the ability to load the handheld device automatically. If the device is loaded automatically, the files that would have been created manually are created and then transmitted to the handheld through a wireless connection.

[b]Questions:[/b] The above leads to 2 questions.

1. Do the devices in people's homes require Device CALs? They never access the databases directly, and the data is passed to and from the Gateway itself.
2. When handhelds are loaded automatically, do they need a Device CAL?

Any help would be much appreciated. (Long story about why we are not talking to a VAR yet.) I have read through all the Microsoft documentation I can find, but the rules are not clear about the above.

Thanks, LD

Always On Design

Posted: 29 Jul 2013 03:00 AM PDT

Not asking for the solution, but I am trying to come up with a conceptual design for an AlwaysOn solution. We have 2 separate clusters, 1 at the primary site and 1 at DR, both of which contain 2 nodes, active/passive, and I'm struggling a little to redesign this within 2012.

It's multi-subnet, and I know we can have a solution similar to the 2-cluster, 2-node example here: http://msdn.microsoft.com/en-us/library/ff878716.aspx, but we don't have disk replication available (not sure why not, but we can't), so that's why I'm looking into availability groups.

I got to a 3-node design easily enough, with 2 nodes at the primary and 1 at the DR site, and looked at 1 primary / 3 secondaries too, but I don't know if it's possible to incorporate the 2 clustered setups into the availability group structure? As said, just some pointers would be really appreciated. Many thanks.

I need someone who can mentor me as a professional DBA/Developer

Posted: 28 Jul 2013 10:05 AM PDT

Hello dear SQL Central users and database professionals. I was thinking about finding someone who can mentor me in a way that is convenient for both me and that person. I would like to keep improving my skills every once in a while, especially the following:

  • T-SQL fundamentals
  • SSIS fundamentals
  • High-level DBA tasks
  • BI skills fundamentals

So if you think you can spend a few minutes to an hour a month answering questions I may have, please send me an email at Zocial411@gmail.com. Thank you very much in advance.

Always On secondary database backup plan & recovery

Posted: 08 Mar 2013 11:14 PM PST

Hi all, can anyone suggest a good backup plan for an Always On secondary server? Currently I am planning to do one copy-only full backup every 24 hours, plus a log backup every 30 minutes. I am looking for a way to recover the database using this backup strategy.
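A minimal sketch of that schedule, with a hypothetical database name and backup paths; note that a full backup taken on an Always On secondary replica must be copy-only:

```sql
-- Every 24 hours: copy-only full backup (the only kind of full backup
-- permitted on a readable secondary replica)
BACKUP DATABASE [MyAGDatabase]
TO DISK = N'X:\Backups\MyAGDatabase_full.bak'
WITH COPY_ONLY, COMPRESSION, INIT;

-- Every 30 minutes: regular transaction log backup (supported on secondaries)
BACKUP LOG [MyAGDatabase]
TO DISK = N'X:\Backups\MyAGDatabase_log.trn'
WITH COMPRESSION, INIT;
```

To recover with this strategy, restore the most recent copy-only full backup WITH NORECOVERY, apply each subsequent log backup WITH NORECOVERY in sequence, then finish the final restore WITH RECOVERY.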

Help Needed

Posted: 28 Jul 2013 10:58 PM PDT

Hi, I am migrating code from a Sybase database to SQL 2012. I came across a scenario in Sybase where I have to deal with the statement SET TABLE COUNT <some number>. When I checked the Sybase infocenter, I learned that this statement tells the query optimizer to consider a user-defined number of tables at a time when joining multiple tables. I want to know: is there any command in SQL 2012 which is functionally equivalent? Thanks
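There is no exact SQL Server counterpart to Sybase's SET TABLE COUNT, which widens the optimizer's join-order search. The closest related knobs control join order rather than search depth; a hedged sketch with hypothetical table names:

```sql
-- Session-level: force joins in the order the tables appear in FROM
SET FORCEPLAN ON;
SELECT o.order_id, c.name
FROM dbo.Orders o
JOIN dbo.Customers c ON c.customer_id = o.customer_id;
SET FORCEPLAN OFF;

-- Per-query alternative:
SELECT o.order_id, c.name
FROM dbo.Orders o
JOIN dbo.Customers c ON c.customer_id = o.customer_id
OPTION (FORCE ORDER);
```

In most migrations the usual advice is simply to let SQL Server's optimizer choose the join order unhinted.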

[T-SQL] Update all but last two rows of each category

Update all but last two rows of each category

Posted: 29 Jul 2013 01:01 AM PDT

I would like to update the complete flag of all the rows of a table except the most recent two rows of each category.

The table has fields like: category (string), item (string), creation_date (string, 'YYYYMMDDHHMI'), complete_flag (boolean), etc.

Each category does not have the same number of records with the same creation_date, so I do not want to filter by creation_date. Is there a way to accomplish this?

Thanks for your help, Fred
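One common approach is ROW_NUMBER() in an updatable CTE; a sketch with an assumed table name and "complete" value (the string creation_date sorts chronologically because of its 'YYYYMMDDHHMI' format):

```sql
WITH ranked AS (
    SELECT complete_flag,
           ROW_NUMBER() OVER (PARTITION BY category
                              ORDER BY creation_date DESC) AS rn
    FROM dbo.CategoryItems      -- hypothetical table name
)
UPDATE ranked
SET complete_flag = 1           -- assumed "complete" value
WHERE rn > 2;                   -- every row except the two most recent per category
```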

Can anyone write a query for the below description?

Posted: 14 Jan 2010 03:20 AM PST

345 0 Test Lane 0 0, Binder, CA 00002 (MLS ID: PD1LST32)
6 N Training Road 0, Binder, CA 00006 (MLS ID: TRN1LST38)
1 N Testing Qa 0, Binder, CA 00001 (MLS ID: QA1LST267)
1 N Testing Qa 0, Binder, CA 00001 (MLS ID: QA1LST618)
1 N Testing Qa 0, Binder, CA 00001 (MLS ID: QA1LST660)
300 WILLIAMSON STREET, CELINA, TN 38551 (MLS ID: 125833)
4005 LIVINGSTON HWY., CELINA, TN 38551 (MLS ID: 127371)
1097 HORSE CREEK ROAD, CELINA, TN 38551 (MLS ID: 122846)
1022 JIMMY RENEAU ROAD, CELINA, TN 38551 (MLS ID: 506612)
4440 Neeley Creek Rd, Celina, TN 38551 (MLS ID: 865001)
130 Lake View Drive, Celina, TN 38551 (MLS ID: 129079)
1022 JIMMY RENEAU ROAD, CELINA, TN 38551 (MLS ID: 123625)
387 Morgan Hill St, Simi Valley, CA 93065 (MLS ID: 70001200)
3140 Griffon Ct, Simi Valley, CA 93065 (MLS ID: F1703822)
4877 Canoga Ave, Woodland Hills, CA 91364 (MLS ID: F1708906)
21801 SAN MIGUEL ST, Woodland Hills, CA 91364 (MLS ID: FR2056685)
4412 Canoga Ave, Woodland Hills, CA 91364 (MLS ID: F1709916)
4222 Canoga Dr, Woodland Hills, CA 91364 (MLS ID: F1710050)
4877 Canoga Avenue, Woodland Hills, CA 91364 (MLS ID: 70003126)
22300 AVENUE SAN LUIS, Woodland Hills, CA 91364 (MLS ID: FR2063889)
3404 MANDEVILLE CANYON RD, LOS ANGELES, CA 90049 (MLS ID: 06-150775)
21635 MEDINA ESTATES DR, Woodland Hills, CA 91364 (MLS ID: FR2043630)
1 N Testing Qa 123, Binder, CA 00001 - $13,998
29500 HEATHERCLIFF RD Unit: 289, Malibu, CA
1 N Testing Qa 123, Binder, CA 00001 (MLS ID: QA1LST282)
4935 DROUBAY DR, Las Vegas, NV 89122 (MLS ID: 748023)
Newbury Park, CA - $638,900
1401 Calle De Oro, Thousand Oaks, CA - $645,000
Newbury Park, CA - $648,000
856 Masterson Dr, Thousand Oaks, CA - $650,000
1841 Pinedale Ave, Lincoln, NE 68506 - $202,500
1594 Norman Ave, Thousand Oaks, CA - $650,000
7600 Ringneck Dr, Lincoln, NE 68506 - $214,500
Greenwich, CT 06830 (MLS ID: 69294)
Aspen, CO 81611 (MLS ID: 102392)
1818 SE 10Th St, Fort Lauderdale, FL 33316 (MLS ID: F751628)
Palm Beach, FL 33480 (MLS ID: 07-1241)

Select variable based upon sub query

Posted: 28 Jul 2013 08:34 PM PDT

I have the following SQL statement (it's actually MySQL but the principle is the same ;-) ) that selects all the values from a table based upon the EditionID passed to it via $_POST; in this example EditionID is 10. The query gives me two values as a tolerance, +10 and -10, on which to base the list using BETWEEN.

However, I want the @tol variable to change to a larger tolerance (let's say 20) if Value1 is >= the integer specified in the variable @trig. I'm not sure how to go about this. I'm guessing that the Value1 values need to be retrieved via the sub queries before making another pass at the sub queries, this time with the correct @tol variable. Unless somehow I can set 2 variables and the query chooses the correct one? This is what I have got so far:

SET @tol = 10;
SELECT EditionID, Make, Model, EditionDesc, Value1
FROM t1
HAVING Value1 BETWEEN
    (SELECT Value1, TRUNCATE(Value1-@tol,1) as Value1min FROM t1 WHERE EditionID = 10) iTable1)
AND (SELECT Value1, TRUNCATE(Value1+@tol,1) as Value1max FROM t1 WHERE EditionID = 10) iTable2)
ORDER BY Value1

Edit: apologies, errors at my end have duplicated the post.

select or count based on condition

Posted: 29 Jul 2013 12:18 AM PDT

[font="Courier New"]I have a query that returns results similar to this:

Name - Order - Item - ItemShipDateTime
Joe - O23 - I11 - 2013-07-29 13:05:00
Jim - O24 - I11 - 2013-07-29 13:07:00
Jim - O24 - I12 - 2013-07-29 13:07:00
Sue - O25 - I21 - 2013-07-29 14:05:00
Sue - O25 - I44 - 2013-07-29 15:55:00

Actually, I'm not interested in the Item. I just want to count when an order is complete, and get counts per hour. The result set above represents three complete orders. Discarding "Item," my unique results would be:

Name - Order - ItemShipDateTime
Joe - O23 - 2013-07-29 13:05:00
Jim - O24 - 2013-07-29 13:07:00
Sue - O25 - 2013-07-29 14:05:00
Sue - O25 - 2013-07-29 15:55:00

So I'm getting closer to three. How would I SELECT (or COUNT) only rows that contain the MAX value in the ItemShipDateTime column where NAME and ORDER are identical?[/font]

Thanks!
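One way, assuming the shown result set comes from a table or view (names hypothetical): take MAX(ItemShipDateTime) per Name and Order first, then count per hour:

```sql
WITH last_ship AS (
    SELECT Name, OrderNo, MAX(ItemShipDateTime) AS ShipTime
    FROM dbo.OrderShipments         -- hypothetical source table/view
    GROUP BY Name, OrderNo
)
SELECT DATEPART(HOUR, ShipTime) AS ShipHour,
       COUNT(*) AS CompletedOrders
FROM last_ship
GROUP BY DATEPART(HOUR, ShipTime)
ORDER BY ShipHour;
```

For the sample data this counts three orders: Joe/O23 and Jim/O24 in hour 13, and Sue/O25 in hour 15 (its latest ship time).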

Handle error record from User Defined Function

Posted: 28 Jul 2013 10:44 PM PDT

Hi, please let me know if we can handle error records from a user-defined function. The SQL script is enclosed with this mail. Regards, Kumar.

Change sub query variable based upon sub query

Posted: 28 Jul 2013 08:27 PM PDT

I have the following SQL statement (it's actually MySQL but the principle is the same ;-) ) that selects all the values from a table based upon the EditionID passed to it via $_POST; in this example EditionID is 10. The query gives me two values as a tolerance, +10 and -10, on which to base the list using BETWEEN.

However, I want the @tol variable to change to a larger tolerance (let's say 20) if Value1 is >= the integer specified in the variable @trig. I'm not sure how to go about this. I'm guessing that the Value1 values need to be retrieved via the sub queries before making another pass at the sub queries, this time with the correct @tol variable.

SET @tol = 10;
SET @trig = 300;
SELECT EditionID, Make, Model, EditionDesc, Value1
FROM t1
HAVING Value1 BETWEEN
    (SELECT Value1, TRUNCATE(Value1-@tol) as Value1min FROM t1 WHERE EditionID = 10) iTable1)
AND (SELECT Value1, TRUNCATE(Value1+@tol) as Value1max FROM t1 WHERE EditionID = 10) iTable2)
ORDER BY Value1
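One possible rewrite, fetching the reference Value1 once and deriving the tolerance with CASE (a sketch using MySQL user variables, with EditionID hard-coded to 10 as in the post):

```sql
SET @trig = 300;
SET @base = (SELECT Value1 FROM t1 WHERE EditionID = 10);
SET @tol  = (CASE WHEN @base >= @trig THEN 20 ELSE 10 END);

SELECT EditionID, Make, Model, EditionDesc, Value1
FROM t1
WHERE Value1 BETWEEN @base - @tol AND @base + @tol
ORDER BY Value1;
```

This avoids the second pass entirely: the subquery runs once into @base, and @tol is chosen from it before the main query executes.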

Need T-SQL Query Help- Urgent

Posted: 28 Jul 2013 06:11 PM PDT

I have a table called Employee_Attendance. Its contents are:

Employee_id  IS_AL  Attendance
1998         1      2013-03-18
1998         1      2013-03-19
1998         1      2013-03-21
2830         1      2013-03-04
2830         1      2013-03-05
2830         1      2013-03-06

Here, for employee_id = 1998, IS_AL = 1 for the dates '2013-03-18', '2013-03-19' and '2013-03-21'. My required result is:

Employee_id  From_date   To_date
1998         2013-03-18  2013-03-19
1998         2013-03-21  2013-03-21
2830         2013-03-04  2013-03-06

Please help with a T-SQL query.
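This looks like a classic gaps-and-islands problem: subtracting a per-employee row number from each date yields a constant for every run of consecutive dates, which can then be grouped. A sketch, assuming Attendance is a date column:

```sql
WITH runs AS (
    SELECT Employee_id, Attendance,
           DATEADD(DAY,
                   -ROW_NUMBER() OVER (PARTITION BY Employee_id
                                       ORDER BY Attendance),
                   Attendance) AS grp   -- constant within each consecutive run
    FROM dbo.Employee_Attendance
    WHERE IS_AL = 1
)
SELECT Employee_id,
       MIN(Attendance) AS From_date,
       MAX(Attendance) AS To_date
FROM runs
GROUP BY Employee_id, grp
ORDER BY Employee_id, From_date;
```

On the sample data this produces (1998: 03-18 to 03-19), (1998: 03-21 to 03-21) and (2830: 03-04 to 03-06), matching the required result.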

Detecting ALTER or CREATE PROCEDURE??

Posted: 28 Jul 2013 12:38 PM PDT

Can't get this working - it whinges about ALTER/CREATE statement syntax?? Help?

IF (SELECT count(name) FROM sys.sysobjects WHERE (type = 'P') and name = 'SPNAME') > 0
BEGIN
    ALTER PROCEDURE [spname] ----
END
ELSE
BEGIN
    CREATE PROCEDURE [spname] ----
END
AS <body of the sp>
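The error arises because CREATE PROCEDURE and ALTER PROCEDURE must be the only statement in their batch, so neither can sit inside an IF block directly. A common workaround on versions before SQL Server 2016 SP1 is to create a stub when the procedure is missing and then ALTER it unconditionally; a sketch with placeholder name and body:

```sql
IF OBJECT_ID(N'dbo.spname', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.spname AS RETURN 0;');
GO
ALTER PROCEDURE dbo.spname
AS
BEGIN
    SELECT 1;   -- body of the sp goes here
END
GO
```

From SQL Server 2016 SP1 onward, CREATE OR ALTER PROCEDURE does this in one statement.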

need urgent help for sum aggregation

Posted: 28 Jul 2013 06:07 AM PDT

Can someone give me a hint? I am trying to write a statement that will perform a daily sum aggregation on the field Cost. It must only use costs that are associated with the following items: Z001, Z002. The cost must also only come from the division. This is what I have tried:

DECLARE @dailycost date
SELECT @dailycost = GETDATE()
if @dailycost = @dailycost
SELECT SUM(Cost) FROM dbo.Cost
where Item = 'ZOO1'
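A hedged sketch of the aggregation being described; the date and division columns are assumptions, and note the posted filter says 'ZOO1' (letter O) while the item codes are Z001/Z002 (zeros):

```sql
SELECT CAST(CostDate AS date) AS CostDay,   -- assumed date column
       SUM(Cost) AS DailyCost
FROM dbo.Cost
WHERE Item IN ('Z001', 'Z002')
  AND Division = 'SomeDivision'             -- assumed; the post only says "from division"
GROUP BY CAST(CostDate AS date)
ORDER BY CostDay;
```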

need help with pivot

Posted: 28 Jul 2013 10:39 AM PDT

I need to write a query without using PIVOT. Here is the scenario: I need to display a summation of data per year, and the years should be displayed as columns.

Example input:

ID  Amount  Year
1   50      1981
1   200     1981
2   300     1982
3   70      1985
2   500     1981

Output:

ID  1981  1982  1983  1985
1   250
2   500   300
3                     70

Please help..
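Without PIVOT, the usual alternative is conditional aggregation, one CASE expression per year column (table name assumed):

```sql
SELECT ID,
       SUM(CASE WHEN [Year] = 1981 THEN Amount END) AS [1981],
       SUM(CASE WHEN [Year] = 1982 THEN Amount END) AS [1982],
       SUM(CASE WHEN [Year] = 1983 THEN Amount END) AS [1983],
       SUM(CASE WHEN [Year] = 1985 THEN Amount END) AS [1985]
FROM dbo.YearlyAmounts        -- hypothetical table name
GROUP BY ID
ORDER BY ID;
```

Each CASE passes the Amount through only for its year, so SUM builds one column per year; years with no rows for an ID come back NULL, matching the blank cells above.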

daily sum aggregation

Posted: 28 Jul 2013 05:31 AM PDT

Can someone give me a hint? I am trying to write a statement that will perform a daily sum aggregation on the field Cost. It must only use costs that are associated with the following items: Z001, Z002. The cost must also only come from the division. This is what I have tried:

DECLARE @dailycost date
SELECT @dailycost = GETDATE()
if @dailycost = @dailycost
SELECT SUM(Cost) FROM dbo.Cost
where Item = 'ZOO1'

[SQL Server 2008 issues] Pooling:more than one connection to a database

Pooling:more than one connection to a database

Posted: 28 Jul 2013 06:38 PM PDT

I have a doubt about whether pooling actually works. I used the query below to check the connections to the database from different PCs.

[code="sql"]SELECT hostname, p.status, [program_name], loginame, name
FROM Master..sysprocesses as p
join master.dbo.sysdatabases as d on p.dbID = d.dbID
WHERE p.ecid = 0 and p.hostname <> ''
order by name[/code]

In the output I saw more than one connection from a PC to the same database, so I have doubts about this. In my VB.NET application I open the connection only once.

Populate object hierarchy in an additional column

Posted: 28 Jul 2013 05:14 PM PDT

Hi, in my database I have a table "ObjectHierarchy" with the following columns:

ObjectID [primary key]
ObjectName
ParentObjectID [ObjectID from the same table]

Now I want this table to have an ObjectPath column that shows the hierarchy of a particular object. For example:

ObjectID  ObjectName  ParentObjectID  ObjectPath
1         AAA         -1
2         BBB         -1
3         CCC         1               AAA
4         DDD         3               AAA/CCC
5         EEE         4               AAA/CCC/DDD
6         FFF         2               BBB

The ObjectPath column should contain the path of the object but not the actual object name. I want to add a function so that when I run the query

ALTER TABLE ObjectHierarchy ADD ObjectPath AS FN_ObjectPath(ObjectID)

this will add a column with the respective object path. What should that function look like?

Regards, Girish
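One way to write such a function is to walk up the parent chain in a loop, prepending each ancestor's name. A sketch: -1 is treated as "no parent" per the sample data, and note that a scalar UDF used as a computed column is evaluated per row and can be slow on large tables:

```sql
CREATE FUNCTION dbo.FN_ObjectPath (@ObjectID int)
RETURNS varchar(4000)
AS
BEGIN
    DECLARE @path varchar(4000) = '';
    DECLARE @parent int =
        (SELECT ParentObjectID FROM dbo.ObjectHierarchy
         WHERE ObjectID = @ObjectID);

    -- climb toward the root, prepending each ancestor's name
    WHILE @parent IS NOT NULL AND @parent <> -1
        SELECT @path   = ObjectName
                         + CASE WHEN @path = '' THEN '' ELSE '/' END
                         + @path,
               @parent = ParentObjectID
        FROM dbo.ObjectHierarchy
        WHERE ObjectID = @parent;

    RETURN @path;   -- e.g. 'AAA/CCC/DDD' for ObjectID 5 in the sample data
END
```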

SSIS package works in BIDS Fails in In SQL AGent and Store

Posted: 28 Jul 2013 04:27 AM PDT

I have a simple SSIS package that uses an ODBC connection with a DL4 driver. It works fine in BIDS, but when I try to execute it through a job or through the SSIS store it fails. I get the following error:

Code: 0xC0047062 Source: Data Flow Task 1 Source - Query [1] Description: Microsoft.SqlServer.Dts.Runtime.DtsCouldNotCreateManagedConnectionException: Could not create a managed connection manager.
   at Microsoft.SqlServer.Dts.Runtime.ManagedHelper.GetManagedConnection(String assemblyQualifiedName, String connStr, Object transaction)
   at Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSConnectionManager100.AcquireConnection(Object pTransaction)
   at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.AcquireConnections(Object transaction)
   at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostAcquireConnections(IDTSManagedComponentWrapper100 wrapper, Object transaction)
End Error
Error: 2013-07-28 09:56:03.77 Code: 0xC0047017 Source: Data Flow Task 1 SSIS.Pipeline Description: component "Source - Query" (1) failed validation and returned error code 0x80131500. End Error
Error: 2013-07-28 09:56:03.77 Code: 0xC004700C Source: Data Flow Task 1 SSIS.Pipeline Description: One or more component failed validation. End Error
Error: 2013-07-28 09:56:03.77 Code: 0xC0024107 Source: Data Flow Task 1 Description: There were errors during task validation. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 9:56:03 AM Finished: 9:56:03 AM Elapsed: 0.235 seconds. The package execution failed. The step failed.

Can anyone help?

general

Posted: 23 Jun 2013 11:39 PM PDT

I have a table named orders with the fields order_number, order_amt, order_date, order_customer and order_saleperson. I want to write a query that selects all orders except those with zeroes or NULLs in the order_amt field. How can I do this?
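A sketch of one way to express "all orders except zero or NULL amounts"; the <> comparison already drops NULLs, because NULL <> 0 evaluates to unknown:

```sql
SELECT order_number, order_amt, order_date,
       order_customer, order_saleperson
FROM orders
WHERE order_amt <> 0;   -- excludes both 0 and NULL amounts
```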

capture web application username and other session info in sql trigger

Posted: 27 Jul 2013 09:36 PM PDT

Hi, I have created a trigger to audit changes to table rows from my web application. My web application uses a single DB user to do updates to the database. However, many different users can log in to the website and do the updates and inserts. Therefore the DB user will always be the same, but the web users will be different.

I want some way to capture the user logged in to the website who is making the changes to the rows, so that I can audit the changes along with the web user that is making them.

Can someone please guide me on how I can capture the web session in my SQL trigger so that I know who is making the changes? Can I configure the web.config file so that the session info is also passed on to the SQL Server trigger, or if there is some other way, please let me know.

Regards, Arif
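On SQL Server 2008 one common technique is CONTEXT_INFO: the application stamps the web user name onto the session right after opening the (pooled) connection, and the trigger reads it back. A sketch; the user value is hypothetical, and newer versions (2016+) offer the roomier SESSION_CONTEXT instead:

```sql
-- Application side, run immediately after opening the connection:
DECLARE @who varbinary(128) = CAST('webuser@example.com' AS varbinary(128));
SET CONTEXT_INFO @who;

-- Trigger side, when writing the audit row:
DECLARE @webuser varchar(128) =
    REPLACE(CONVERT(varchar(128), CONTEXT_INFO()), CHAR(0), '');  -- strip 0x00 padding
```

With connection pooling, the application must set this on every logical open, since a pooled physical connection may carry a previous user's value.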

creating a copy of a database for testing on

Posted: 28 Jul 2013 12:47 AM PDT

Problem: I need to make a copy of one of the databases in our dev environment available every day to a software tester. The database will need to be writable, not just readable. I was going to go down the road of a daily snapshot, but that would be read-only, as I understand it?

In effect the software tester needs to be able to do whatever needs to be done in order to test things, be it adding tables or deleting tables, and then just delete the database at the end of the day and start again the next day with another up-to-date copy from the dev environment. Can anybody advise on the best method(s) for this?

Need Urgent help to get the Final Approver name in the list

Posted: 28 Jul 2013 01:08 AM PDT

Dear friends, I need urgent help on a deadline task. I have a source table "Pending" which has approver columns for each requisition:

Req# Approver 1 Name, Req# Approver 1 Decision, Req# Approver 1 Decision Date, Req# Approver 1 ID,
Req# Approver 2 Name, Req# Approver 2 Decision, Req# Approver 2 Decision Date, Req# Approver 2 ID, ...

My requirement is to get the name of the last approver in the requisition approval path. E.g., say the following are listed as approvers in this order: Caldwell, Walter; Rondini, Joe; Zuccaro, Leo. This field should show Zuccaro, Leo. In the Pending table, that is the last value before a NULL in the Req# Approver # Name fields. Hence I need help with code on how to grab the last approver before a NULL in the Req# Approver # Name fields.

Thanks, DJ
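With a fixed, known number of approver columns, COALESCE evaluated from the last column backwards returns the final non-NULL name. A sketch; the column names are shortened (the real ones contain "Req# Approver n Name") and the count of eight columns is an assumption:

```sql
SELECT COALESCE([Approver 8 Name], [Approver 7 Name],
                [Approver 6 Name], [Approver 5 Name],
                [Approver 4 Name], [Approver 3 Name],
                [Approver 2 Name], [Approver 1 Name]) AS FinalApprover
FROM dbo.Pending;
```

This relies on the columns being filled in order with no gaps; if NULLs can appear mid-sequence, unpivoting the columns and taking the highest-numbered non-NULL entry would be safer.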

Sunday, July 28, 2013

[how to] Database design for user information in mySQL

Database design for user information in mySQL

Posted: 28 Jul 2013 09:00 PM PDT

I'm trying to design an app where users take a new survey everyday (that's not the main part of app but it's an important part). I've never designed a database before so what I'm trying to do might not make any sense. I'm using MySQL and I have a table for user information (user_id, name, password, etc). What I plan on doing is making a new table for each new survey and linking it with the user_id. So basically, I would have one table having a relationship with hundreds of other tables. Is that how it would work? I thought about adding a column to the user table for every survey but I read that it's bad design to add columns often.
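A more conventional design keeps one fixed set of tables and adds a row per survey rather than a table per survey. A sketch in MySQL; all names are hypothetical, and the users table is assumed to already exist with a user_id primary key:

```sql
CREATE TABLE surveys (
    survey_id   INT AUTO_INCREMENT PRIMARY KEY,
    survey_date DATE NOT NULL,
    title       VARCHAR(200) NOT NULL
);

CREATE TABLE survey_responses (
    response_id INT AUTO_INCREMENT PRIMARY KEY,
    survey_id   INT NOT NULL,
    user_id     INT NOT NULL,
    answer      TEXT,
    FOREIGN KEY (survey_id) REFERENCES surveys (survey_id),
    FOREIGN KEY (user_id)   REFERENCES users (user_id)
);
```

A new daily survey then becomes a single INSERT into surveys, and the schema never changes, which avoids both the hundreds-of-tables problem and the add-a-column-per-survey problem.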

How can I drop a stored procedure whose name is in a variable?

Posted: 28 Jul 2013 07:17 PM PDT

Given a variable that contains a stored procedure name:

DECLARE @stored_procedure_name sysname;
SET @stored_procedure_name = 'some_stored_procedure_name';

How can I drop the stored procedure?
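Since DDL statements do not accept a variable for the object name, the usual answer is dynamic SQL, with QUOTENAME guarding against injection:

```sql
DECLARE @stored_procedure_name sysname = N'some_stored_procedure_name';

-- Only attempt the drop if a procedure by that name exists
IF OBJECT_ID(@stored_procedure_name, N'P') IS NOT NULL
    EXEC (N'DROP PROCEDURE ' + QUOTENAME(@stored_procedure_name) + N';');
```

If the procedure may live outside the default schema, the variable should hold a schema-qualified name and the two parts should be QUOTENAME'd separately.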

Dual column Primary Key [on hold]

Posted: 28 Jul 2013 02:41 PM PDT

Order ID | Customer ID | Inventory ID | Number ordered

Number ordered would depend on Order ID and Inventory ID right? Because an order ID could contain several Inventory IDs. So would Order ID and Inventory ID need to be my primary Key, and if so, would they need to be next to each other? Also is this in 3nf? At first I was thinking that order ID would be my primary key and customer id and inventory id would be foreign keys. Can Inventory ID be part of the primary key and a foreign key? Thanks in advance for any help!
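A sketch of the order-line shape being described, with a composite primary key whose columns are each also foreign keys (all names hypothetical):

```sql
CREATE TABLE OrderLine (
    order_id       INT NOT NULL REFERENCES Orders (order_id),
    inventory_id   INT NOT NULL REFERENCES Inventory (inventory_id),
    number_ordered INT NOT NULL,
    -- column order in the key matters for the index, not physical adjacency
    PRIMARY KEY (order_id, inventory_id)
);
```

Customer ID would stay on the Orders table, since it depends on the order alone; keeping it off the line table is what keeps the design in 3NF.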

max_allowed_packet setting for Windows 7

Posted: 28 Jul 2013 06:33 PM PDT

We have a Windows XP machine with an older version of MySQL. When we need to import huge data we just set max_allowed_packet=1500M in C:\Program Files\MySQL\MySQL Server 5.1. Now we have loaded the latest MySQL 5.6.12. We are trying to import the same data, and we find there are a few my.cnf files, one in Program Files and another in ProgramData; we changed both, yet we can't import the same data. Where else should we change it? Even after the change, we restarted the service from the administrative tools.

Why is my Amazon RDS instance so slow?

Posted: 28 Jul 2013 08:41 AM PDT

I currently run a Windows Server with SQL Server Express. I'd like follow best practice and separate the application from the database, so I've created an RDS instance and exported my database.

But... it's painfully slow. I'm running a test query from my local machine that selects 12,000 rows; it takes less than 1 second against my original setup but 16 seconds against RDS. Surely it can't be a question of specification; my small RDS instance actually has a better spec than my Windows Server.

What could be going on here?

Running out of Transaction Log space during Alter Table

Posted: 28 Jul 2013 01:10 PM PDT

I'm running an alter table, alter column on a table with nearly 30 million rows, and SQL Azure fails after approximately 18 minutes saying: "The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction."

I'm guessing that it's not possible to break this down into modifying fewer rows at a time, so I'm wondering what my options are for making this change to the database. SQL Azure will not let me change the size of the transaction log (limited to 1gb).

I'm guessing that my best bet is going to be creating a new table with the new layout, migrating the data across into that one, deleting the original table and then renaming the new table to match the name of the old table. If that's the case, how is it best to structure those commands?

Scheduled downtime for our system is not an issue currently so this operation can take as long as it needs to.
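The copy-then-rename approach can be done in batches so that each transaction's log usage stays small. A hedged sketch with hypothetical table and column names, assuming Id uniquely identifies a row:

```sql
-- Repeat small insert batches until no rows remain to copy
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.NewTable (Id, Col1, Col2)
    SELECT TOP (100000) s.Id, s.Col1, s.Col2
    FROM dbo.OldTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.NewTable AS t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;
END

-- Then, during the downtime window:
-- DROP TABLE dbo.OldTable;
-- EXEC sp_rename 'dbo.NewTable', 'OldTable';
```

Each loop iteration commits on its own, so no single transaction approaches the 1 GB log limit; the batch size of 100,000 is a tunable guess.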

When MySQL table gets too large

Posted: 28 Jul 2013 12:07 PM PDT

I have a really large MySQL table, currently counting more than 12 billion rows (storage engine: InnoDB). From time to time the problems get worse. The biggest problem is that altering the table takes a really long time (more than 48h), and every minute of downtime is one minute too much. So whenever I need to add a new index or a new column, the altering problem is there.

Are 12 billion rows too much for MySQL to handle? What would you suggest to do?

Adding an index to a system catalog in Postgres

Posted: 28 Jul 2013 08:47 PM PDT

I'm having a situation very similar to the one described here:

I've got a SaaS situation where I'm using 1000+ schemas in a single database (each schema contains the same tables, just different data per tenant). I used schemas so that the shared app servers could share a connection to the single database for all schemas. Things are working fine.

and, while the application itself appears to be working fine, some queries involving system catalogs are very slow. Also, psql's auto-completion is totally useless and \dt is very slow.

In particular, I need to calculate the on-disk size of each schema using something like this:

SELECT sum(pg_total_relation_size(c.oid)) AS size
FROM   pg_namespace n
JOIN   pg_class     c ON c.relnamespace = n.oid
WHERE  n.nspname = 'abbiecarmer'
AND    c.relkind = 'r';

which is quite slow. Looking at the query plan, I see

Aggregate  (cost=136903.16..136903.17 rows=1 width=4) (actual time=1024.420..1024.420 rows=1 loops=1)
  ->  Hash Join  (cost=8.28..136902.86 rows=59 width=4) (actual time=143.247..1016.749 rows=60 loops=1)
        Hash Cond: (c.relnamespace = n.oid)
        ->  Seq Scan on pg_class c  (cost=0.00..133645.24 rows=866333 width=8) (actual time=0.045..943.029 rows=879788 loops=1)
              Filter: (relkind = 'r'::"char")
              Rows Removed by Filter: 2610112
        ->  Hash  (cost=8.27..8.27 rows=1 width=4) (actual time=0.032..0.032 rows=1 loops=1)
              Buckets: 1024  Batches: 1  Memory Usage: 1kB
              ->  Index Scan using pg_namespace_nspname_index on pg_namespace n  (cost=0.00..8.27 rows=1 width=4) (actual time=0.029..0.030 rows=1 loops=1)
                    Index Cond: (nspname = 'abbiecarmer'::name)
Total runtime: 1024.476 ms

Which, if I understand it right, tells that 90% of the query time is spent sequentially scanning pg_class relation.

I stopped postmaster, ran the backend in single-user mode and added the following indexes:

create index pg_class_relnamespace_index on pg_class(relnamespace);
REINDEX INDEX pg_class_relnamespace_index;

create index pg_class_reltablespace_index on pg_class(reltablespace);
REINDEX INDEX pg_class_reltablespace_index;

(I've also got thousands of tablespaces too). Now the query is ~100 times faster and the plan looks much nicer:

Aggregate  (cost=846.91..846.92 rows=1 width=4) (actual time=10.609..10.610 rows=1 loops=1)
  ->  Nested Loop  (cost=0.00..846.61 rows=60 width=4) (actual time=0.069..0.320 rows=60 loops=1)
        ->  Index Scan using pg_namespace_nspname_index on pg_namespace n  (cost=0.00..8.27 rows=1 width=4) (actual time=0.023..0.024 rows=1 loops=1)
              Index Cond: (nspname = 'abbiecarmer'::name)
        ->  Index Scan using pg_class_relnamespace_index on pg_class c  (cost=0.00..837.59 rows=75 width=8) (actual time=0.043..0.271 rows=60 loops=1)
              Index Cond: (relnamespace = n.oid)
              Filter: (relkind = 'r'::"char")
              Rows Removed by Filter: 102
Total runtime: 10.696 ms

However, in the above thread, Tom Lane, who is one of Postgres core contributors, says:

There are a lot of gotchas here, notably that the session in which you create the index won't know it's there (so in this case, a reindex on pg_class would likely be advisable afterwards). I still think you'd be nuts to try it on a production database, but ...

I'm also worried by the fact that modification of system catalogs seemed to be completely disabled in Postgres 9.0 and 9.1 (I'm using 9.2) - I suppose it was done for a reason?

So, the question is: what are the gotchas in adding an index to a system catalog in Postgres and will I be nuts if I (eventually) do that on a production system?

Replication error

Posted: 28 Jul 2013 09:03 AM PDT

We have a slave server that has stopped replication due to the following error:

Slave SQL: Query caused different errors on master and slave.  

What could be the cause of this error? And what would be a way to fix it?

Version of both master and slave is MySQL 5.5.30

130726 23:55:45 [Note] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: Shutdown complete

130726 23:58:39 [Note] Plugin 'FEDERATED' is disabled.
130726 23:58:39 [Warning] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: ignoring option '--innodb-file-per-table' due to invalid value 'ON'
130726 23:58:39 [Note] Plugin 'InnoDB' is disabled.
130726 23:58:39 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130726 23:58:39 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
130726 23:58:39 [Note] Server socket created on IP: '0.0.0.0'.
130726 23:58:39 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000234' at position 1065421256, relay log '.\slave-relay-bin.000917' position: 1065421402
130726 23:58:39 [Note] Slave I/O thread: connected to master 'replication@191.5.3.4:3306', replication started in log 'mysql-bin.000235' at position 166680598
130726 23:58:39 [Note] Event Scheduler: Loaded 0 events
130726 23:58:39 [Note] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: ready for connections.
Version: '5.5.30-log'  socket: ''  port: 3306  MySQL Community Server (GPL)
130726 23:59:04 [ERROR] Slave SQL: Query caused different errors on master and slave.
    Error on master: message (format)='Incorrect key file for table '%-.200s'; try to repair it' error code=1034;
    Error on slave: actual message='no error', error code=0.
    Default database: 'shared'.
    Query: 'CREATE TEMPORARY TABLE tmp_grades (
        vehsysid INT(11),
        grade INT(1),
        dt TIMESTAMP,
        INDEX(vehsysid),
        INDEX(grade),
        INDEX(dt)
    ) SELECT vehsysid, Grade, MAX(dt) AS dt
        FROM shared.tbl_valuations
        GROUP BY vehsysid, grade', Error_code: 0
130726 23:59:04 [ERROR] Error running query, slave SQL thread aborted.
    Fix the problem, and restart the slave SQL thread with "SLAVE START".
    We stopped at log 'mysql-bin.000234' position 1065421256

What else I can't figure out is how a temporary table would lead to such an error (in master):
'Incorrect key file for table '%-.200s'; try to repair it' error code=1034

The last lines from master error log:

130725 23:15:57 [Warning] Warning: Enabling keys got errno 120 on shared.tmp_grades, retrying
130726 23:15:58 [Warning] Warning: Enabling keys got errno 137 on shared.tmp_grades, retrying

Additional info:

  • both Master and Slave run on Windows (I don't know if that's relevant.)
  • the disks at both have plenty of space.
  • replication format is MIXED
  • innodb is skipped in all instances, master and slaves. MyISAM is the default.

What's a good way to model user authorization to hierarchical data?

Posted: 28 Jul 2013 07:44 PM PDT

I'm wanting to implement something that's basically an ACL in SQL Server (2008, we'll be moving to 2012). We're working with farm production information. We will have users who need to be authorized to view production records, but only at certain vertical levels in a hierarchy.

Our data hierarchy looks like this:

System
  - Farm
    - Group
      - Animal

The idea is that some users will have access at the System level, and can see records for all Farms, Groups, and Animals within that System. Likewise, some users will have permission starting at the Farm level, and need access only linked to that Farm and all Groups (and animals) within it.

Each table contains a primary key column, and a foreign key column linking it to the parent record (along with whatever other attributes each entity requires).

What I've implemented in the past is two-table system for linking users to the appropriate items they're allowed to see. Implemented here, it would look like this:

Table: Authorizations
Columns: Id (PK), UserId, ObjectId, ObjectType

Table: FullAuthorizations
Columns: Id (PK), UserId, SystemId, FarmId, GroupId, AnimalId

The application inserts a record into Authorizations, with the user to authorize, the record id (System id, Farm id, etc), and the type of record (System, Farm, etc). The FullAuthorizations table is used to denormalize the farm hierarchy for easier/faster filtering of data. A trigger is used on the Authorizations table (and each of the farm, etc, tables) to update FullAuthorizations. I considered using a View here, in a previous project with additional levels of entities, and the performance was quite poor once we began getting several hundred thousand records.

The queries would look something like:

SELECT *
FROM dbo.Animals a
WHERE EXISTS (
    SELECT 1
    FROM dbo.FullAuthorizations fa
    WHERE fa.UserId = 1 AND fa.AnimalId = a.Id
)

In the other project where we're doing this, the solution is performant, but feels like a bit of a hack, and I especially don't like that we can't maintain referential integrity on Authorizations with the associated objects. I'd appreciate feedback on some other possible solutions. I've been looking at things like Nested Sets, but not sure something like that fits this particular problem.
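For illustration, the EXISTS filter against the denormalized FullAuthorizations table behaves like this minimal sketch (SQLite via Python; the tables are reduced to the columns the check needs, and the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Animals (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE FullAuthorizations (
    Id INTEGER PRIMARY KEY, UserId INTEGER,
    SystemId INTEGER, FarmId INTEGER, GroupId INTEGER, AnimalId INTEGER);
""")
conn.executemany("INSERT INTO Animals VALUES (?, ?)",
                 [(1, 'cow-1'), (2, 'cow-2'), (3, 'cow-3')])
# User 1 is authorized (via the denormalization) for animals 1 and 3 only.
conn.executemany(
    "INSERT INTO FullAuthorizations (UserId, AnimalId) VALUES (?, ?)",
    [(1, 1), (1, 3)])

# Same shape as the query in the post: filter animals by an EXISTS probe.
visible = conn.execute("""
    SELECT a.Name FROM Animals a
    WHERE EXISTS (SELECT 1 FROM FullAuthorizations fa
                  WHERE fa.UserId = 1 AND fa.AnimalId = a.Id)
""").fetchall()
print(visible)  # [('cow-1',), ('cow-3',)]
```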

Need advice regarding RMAN's working

Posted: 28 Jul 2013 05:43 AM PDT

I have configured an RMAN script which first takes a backup of the database and then deletes obsolete backup sets according to the current retention policy. Now, is the following scenario possible?

Scenario: the backup process starts and fails abruptly for some reason, creating an incomplete backup set. The deletion part of the script then runs and deletes the latest full/working backup of the database (not the one that failed, but the one the script created on the previous run), given a retention policy of REDUNDANCY 1. I would thereby lose the only available working backup of my database.

Can it happen?

Oracle shared memory exception ORA-04031

Posted: 28 Jul 2013 01:43 PM PDT

I'm trying to establish an Oracle database connection. It throws the following exception:

ORA-04031: unable to allocate 4048 bytes of shared memory
  ("shared pool","unknown object","sga heap(1,0)","kglsim heap")

I have tried connecting to the database through JDBC as well as SQL Developer, but it throws the exception in both cases.

How can I fix this?

Memcached plugin on MariaDB?

Posted: 28 Jul 2013 02:43 PM PDT

I'd like to try the new NoSQL feature in MySQL 5.6, but I am using MariaDB 10.0.2 on Debian 6 and don't fancy moving back to MySQL.

I'm wondering whether the memcached plugin has been added to MariaDB, and if not, whether one can still use it as an add-on.

And if not, can I use the existing Cassandra plugin of MariaDB to the same effect?

How to embed a sub-procedure call in a SELECT statement in an Oracle 11g PL/SQL stored procedure

Posted: 28 Jul 2013 03:43 PM PDT

I need to figure out a way to embed an Oracle PL/SQL sub-procedure call in a SELECT statement, within another procedure in the same package.

I am using SQLDeveloper 3.0 with an Oracle 11g database.

I have already developed a stored procedure 'acctg_detail_proc()' that generates a detailed list of accounting transactions within a specified period. What I am trying to do is create a summary report procedure 'acctg_summary_proc()' using the detailed data returned by acctg_detail_proc().

Since I am new to PL/SQL, all I have learned so far is how to retrieve the detailed data via a ref cursor, then LOOP through it, FETCHing each individual detail row. I want to figure out how acctg_summary_proc() can substitute the call to acctg_detail_proc() for a table name in a SELECT statement with a GROUP BY clause. Here is the source code for an anonymous block where I tried to test it:

SET SERVEROUTPUT ON;
DECLARE
    start_date VARCHAR2(50) := '04/01/2012';
    end_date   VARCHAR2(50) := '04/30/2012';
    c_acctg_refcur SYS_REFCURSOR;
BEGIN
  acctg_rpt_pkg.acctg_detail_proc(start_date, end_date, c_acctg_refcur);
  SELECT
    date_posted,
    debit_acct,
    credit_acct,
    SUM(dollar_amt)
  FROM c_acctg_refcur
  GROUP BY
    date_posted,
    debit_acct,
    credit_acct;
  CLOSE c_acctg_refcur;
END;

When I try to execute this code, I get the following error:

PL/SQL: ORA-00942: table or view does not exist

I realize I could use a nested SELECT statement instead of a table name, but I want to avoid duplication of source code. Is there any way to 'alias' a ref cursor so I can reference its data in a SELECT statement?
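For reference, the fetch-and-aggregate fallback mentioned above boils down to this pattern. It is sketched here in Python over made-up detail rows; a PL/SQL version would FETCH in a LOOP and accumulate sums keyed by the GROUP BY columns:

```python
from collections import defaultdict

# Stand-in for rows FETCHed from the detail ref cursor:
# (date_posted, debit_acct, credit_acct, dollar_amt) -- made-up data.
detail_rows = [
    ('2012-04-02', '1000', '2000', 50.0),
    ('2012-04-02', '1000', '2000', 25.0),
    ('2012-04-03', '1100', '2000', 10.0),
]

# Accumulate SUM(dollar_amt) per (date_posted, debit_acct, credit_acct),
# mirroring the GROUP BY in the summary query.
totals = defaultdict(float)
for date_posted, debit, credit, amount in detail_rows:
    totals[(date_posted, debit, credit)] += amount

print(sorted(totals.items()))
# [(('2012-04-02', '1000', '2000'), 75.0), (('2012-04-03', '1100', '2000'), 10.0)]
```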

Here is some further background info: The called sub-procedure has ~600 lines of code, and selects 40 columns of data from a de-normalized VIEW. The corporate DBAs will not let me create any VIEWs that contain WHERE clauses, so that is not an option for me.

Thanks in advance, Ken L.

Oracle database link via ODBC to SQL Server does not recover automatically

Posted: 28 Jul 2013 08:43 AM PDT

I have a test environment with an Oracle 10XE database which is linked to an SQL-Server 2005 via an ODBC data source. For reference here I named the connection MSSQL_LINK. The production environment will have an Oracle 11g and an SQL-Server 2008. The connection once set up works fine.

As a test of error conditions in production, we unplugged the network cable. This broke the connection. A subsequently executed statement (e.g. select * from test_table@MSSQL_LINK;) failed with error ORA-28500, which is a wrapper for the original ODBC error:

[Generic Connectivity Using ODBC][S1000] [9013]General error in nvITrans_BeginT - rc = -1. Please refer to the log file for details.
ORA-02063: preceding 2 lines from MSSQL_LINK

This was somewhat expected. Unfortunately plugging the cable back into the machine does not lead to an automatic reconnect of the link. The problem persists until I drop the link and create it anew with the commands

drop database link MSSQL_LINK;
create database link MSSQL_LINK connect to "user" identified by "passwd" using 'MSSQL_LNK';

The same happens if I disconnect the cable only briefly and reconnect it before executing any statements over the link: the interim outage is still detected by the ODBC driver but never recovered from automatically.

As mentioned by Ste, creating a link only tells Oracle about the connection parameters (host, user name, password); it does not create a connection by itself. The link can be created without the network cable connected to the Oracle machine. If I create the link without the cable connected, then plug it in and execute a simple select on the linked server, the same Oracle error occurs.

Obviously in production mode this is an unacceptable situation which needs to be fixed, preferably by an automatic procedure.

I have three questions about this:

  1. Is there a way to test a database link other than, e.g., a select on a test table which resides on the linked server?
  2. Is dropping and re-creating the database link on my Oracle machine the best option I have when this problem occurs?
  3. What is the best automatic procedure for recovering from this situation?

Repairing Broken Binary Replication in PostgreSQL 9.0

Posted: 28 Jul 2013 12:43 PM PDT

I have a binary replication that was stopped for so long that the needed WALs were removed; as a result, the standby is now too far behind to catch up. I'm trying to reestablish the replication, and the best way I've found so far is to follow the steps on the PostgreSQL wiki:

  • Issue select pg_start_backup('clone',true); on master
  • rsync everything except for pg_xlog from master to slave
  • Issue select pg_stop_backup(); on master
  • rsync pg_xlog

But the database is too big (300 GB), my connection is not really amazing (like 800 kB/s) and the files in base keep changing. So I was wondering if there's a more optimal way to do this.

Failed copy job deletes all users

Posted: 28 Jul 2013 05:43 PM PDT

Since the progression of this was apparently somewhat difficult to follow the first time around:

I attempted a completely boring, been-done-a-thousand-times-before copy of a database using the copy database wizard with the detach/reattach method.

The copy failed. The log indicates that it was unable to execute a CREATE VIEW action for a particular view, because the datasource for the view did not exist. This is interesting in its own right, as the source most certainly exists, and the view(s) in question are fully functional in the source database. I'm not really clear, just yet, on how significant this is, as I've yet to figure out precisely why this generated an error.

This resulted in the deletion of all non-system user associations from the source database, leaving me with users dbo, information_schema, sys, and guest. Non-system roles were also deleted. Schemas were unaffected.

I have since restored the damaged database from backup. Academically, however, I would like to know the following:

  1. Why would a failed copy operation strip the user associations from the source database?
  2. Is there any sort of maintenance/rebuild/repair operation that could be performed on the source database to repair it?
  3. The loss of the users was immediately obvious, but given the rather mysterious nature of a failed copy job apparently damaging the source database, how concerned should I be about less obvious effects? In theory, I would expect restoring from backup would alleviate this concern, but do I have any cause to worry about, e.g., the master database?

This is entirely repeatable. I've made a handful of copies (manually) for the sake of experimenting with this particular issue, and in each case, the failed copy job obliterates the users and roles from the source database.

Removing the views that generated errors allows the copy to complete, and, as one would expect, produces a copy with identical data, users, etc., in addition to leaving the source database unaltered.

If it's important, I've tried rebuilding the indexes of the system databases, as well as the damaged database, to no appreciable effect.

The error generated:

1:00:25 PM,5/28/2013 1:00:25 PM,0,0x,ERROR : errorCode=-1073548784
description=Executing the query "CREATE VIEW [Sourcing].[PermittedArrProducts]
AS
SELECT dbo.tblArrProducts.ArrProductID, dbo.tblArrProducts.ArrangementID,
       dbo.tblArrProducts.ContainerTypeID, dbo.tblArrProducts.Quantity
FROM   Sourcing.PermittedArrangements INNER JOIN
       dbo.tblArrProducts ON Sourcing.PermittedArrangements.ArrangementID = dbo.tblArrProducts.ArrangementID
" failed with the following error: "Invalid object name 'Sourcing.PermittedArrangements'.".
Possible failure reasons: Problems with the query, "ResultSet" property not set correctly,
parameters not set correctly, or connection not established correctly.

Validate each parent intermediary is also a parent to itself

Posted: 28 Jul 2013 07:43 AM PDT

I want to validate that each parent intermediary is also recorded as a parent to itself, so this query should return the invalid parents if the table has any.

What I need to do is first get all parent records. This is done by joining Intermediary to itself on ParentIntermediaryID = IntermediaryID (inner join). Then, from all those parents, return the ones that don't have any row with ParentIntermediaryID = IntermediaryID.

I hope the requirement is clear. Can you give me a query for this?

Create the test table

CREATE TABLE Intermediary
    (
    IntermediaryPK INT,
    IntermediaryID NVARCHAR(20),
    IntermediaryName NVARCHAR(200),
    IntermediaryTypeID NVARCHAR(1),
    ParentIntermediaryID NVARCHAR(20),
    IntermediaryTypePK TINYINT,
    ParentIntermediaryPK INT,
    GrandParentIntermediaryPK INT,
    GrandParentIntermediaryID NVARCHAR(20)
    )

Insert the test data into the test table

INSERT INTO Intermediary
   (IntermediaryPK, IntermediaryID, IntermediaryName, IntermediaryTypeID, ParentIntermediaryID,
    IntermediaryTypePK, ParentIntermediaryPK, GrandParentIntermediaryPK, GrandParentIntermediaryID)
SELECT 552, '200244584261', 'A', '1', '201841943403', 1, 6459, 6459, '201841943403' UNION ALL
SELECT 553, '200983879092', 'B', '1', '200707895681', 1, 6462, 6459, '200707895681' UNION ALL
SELECT 554, '200925413387', 'C', '1', '200707895681', 1, 6462, 6462, '200707895681' UNION ALL
SELECT 555, '200472620781', 'D', '1', '200707895681', 1, 6462, 6462, '200707895681' UNION ALL
SELECT 556, '201902784325', 'E', '1', '200707895681', 1, 6462, 6462, '200707895681' UNION ALL
SELECT 557, '201874832909', 'F', '1', '200707895681', 1, 566 , 6462, '200707895681' UNION ALL
SELECT 558, '201264024229', 'G', '1', '200707895681', 1, 566 , 6462, '200707895681' UNION ALL
SELECT 559, '201725870455', 'H', '1', '201062751762', 1, 566 , 6462, '200707895681'

I have tried with this query:

SELECT  *
FROM    Intermediary AS I1
WHERE   ParentIntermediaryPK IS NOT NULL
AND     NOT EXISTS (
            SELECT  *
            FROM    Intermediary AS I2
            WHERE   I2.IntermediaryPK = I2.ParentIntermediaryPK
            AND     I1.ParentIntermediaryPK = I2.IntermediaryPK )
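As a sanity check, here is the same NOT EXISTS pattern run over a trimmed copy of the sample data (SQLite via Python; only the columns the check touches are kept). With this data, all eight rows come back, because none of the referenced parents (6459, 6462, 566) exists as a self-parenting row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Intermediary (
    IntermediaryPK INT, IntermediaryID TEXT, ParentIntermediaryPK INT)""")
rows = [(552, '200244584261', 6459), (553, '200983879092', 6462),
        (554, '200925413387', 6462), (555, '200472620781', 6462),
        (556, '201902784325', 6462), (557, '201874832909', 566),
        (558, '201264024229', 566), (559, '201725870455', 566)]
conn.executemany("INSERT INTO Intermediary VALUES (?, ?, ?)", rows)

# Rows whose parent is not itself recorded as a self-parenting row.
invalid = conn.execute("""
    SELECT I1.IntermediaryPK
    FROM   Intermediary AS I1
    WHERE  I1.ParentIntermediaryPK IS NOT NULL
    AND NOT EXISTS (
        SELECT 1 FROM Intermediary AS I2
        WHERE  I2.IntermediaryPK = I2.ParentIntermediaryPK
        AND    I1.ParentIntermediaryPK = I2.IntermediaryPK)
""").fetchall()
print(len(invalid))  # 8
```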

Azure SQL Administrator Can't Access Master Database

Posted: 28 Jul 2013 09:43 AM PDT

I created a SQL server and database. I can log in to the database management page with my administrator account and manage the database I created, but when I try to look at the master database, it says:

Failed to create context. Details: The current user does not have VIEW DATABASE STATE permissions on this database.

What am I doing wrong? I couldn't find anything else like this on the internet.

I log in with the account that the SQL server page on the Azure management portal lists as the "Administrator Login".

SQL Server 2005 Replication

Posted: 28 Jul 2013 08:44 PM PDT

I am in the process of creating replication between two remote servers: server 1 is the Distributor and Publisher, and server 2 is the Subscriber.

server 1: Windows Server 2003, 192.168.10.1, connected by VPN, SQL Server 2005, domain1.local

server 2: Windows Server 2003, 192.168.10.6, connected by VPN, SQL Server 2005, domain2.local

When I set up replication, everything looked fine until I looked at the sync status, which said:

The Agent could not be started
An exception occurred while executing a Transact-SQL statement or batch
SQLServerAgent error: request to run job server1-username blah blah blah

from user sa refused because the job is already running from a request by user sa; changed database context; error 22022.

I have cleared jobs in the server agent as well as restarted the service.

Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SQL Server Management Studio, just not set up replication.

Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.

Constructing an SQL query with time intervals

Posted: 28 Jul 2013 06:43 AM PDT

I have a simple but large table in an SQL database with four fields: id (int, PK), time (unix timestamp, not null), value (double, not null) and ID_fk (integer foreign key).

Data is recorded every minute, but at a seemingly random second, for subsets of sensors; each sensor is identified by its foreign key value.

My question is this: I need to write a query that will tell me when, over all two-minute intervals in the dataset, a series of sensors have a value greater than 0.9.

I was thinking of creating a temporary table and doing it that way, simplifying the time by storing it only at minute resolution.

Any advice would be greatly appreciated. Thanks, James
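One way to avoid the temporary table is to bucket the unix timestamps directly: integer-dividing time by 120 assigns every reading to a two-minute interval, which can then be filtered with HAVING. A minimal sketch (SQLite via Python; made-up data, and the per-sensor condition would still need adapting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readings (
    id INTEGER PRIMARY KEY, time INTEGER, value REAL, ID_fk INTEGER)""")
conn.executemany("INSERT INTO readings (time, value, ID_fk) VALUES (?, ?, ?)", [
    (1000, 0.95, 1), (1055, 0.91, 2),   # both land in bucket 1000 // 120 = 8
    (1130, 0.50, 1), (1190, 0.99, 2),   # both land in bucket 9
])

# Two-minute buckets in which every recorded reading exceeds 0.9.
buckets = conn.execute("""
    SELECT time / 120 AS bucket
    FROM   readings
    GROUP  BY bucket
    HAVING MIN(value) > 0.9
""").fetchall()
print(buckets)  # [(8,)] -- bucket 9 is excluded by the 0.50 reading
```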

Multiple parents and multiple children in product categories

Posted: 28 Jul 2013 11:43 AM PDT

I am building an ecommerce site. On this site I want to categorise the items into three different layers:

primary category      sub category      sub category
>> electronic         >> smart phone    samsung
   cameras               tablets        nokia
                         laptop         apple
                         headphone

In the table above, I want to display the subcategory after the customer selects the primary one. At the same time, 'samsung' also comes under 'cameras': a parent can have any number of children, and one child can have many parents. In the future we may change which categories are primary and secondary.

What is the best solution for this? Which model fits our categories: a tree or nested sets?
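Since one child can have many parents, this is a graph rather than a strict tree, so a single parent-id column (plain adjacency list) or nested sets will not fit directly. One common alternative, sketched here with made-up names in SQLite via Python, is a category table plus a parent-child junction table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
-- junction table: a child may appear under many parents
CREATE TABLE category_edge (
    parent_id INTEGER REFERENCES category(id),
    child_id  INTEGER REFERENCES category(id),
    PRIMARY KEY (parent_id, child_id)
);
""")
names = ["electronic", "cameras", "smart phone", "samsung"]
for i, n in enumerate(names, 1):
    conn.execute("INSERT INTO category VALUES (?, ?)", (i, n))
# 'samsung' sits under both 'smart phone' and 'cameras'.
conn.executemany("INSERT INTO category_edge VALUES (?, ?)",
                 [(1, 3), (3, 4), (2, 4)])

# Children shown once the customer picks 'smart phone' (id 3).
children = conn.execute("""
    SELECT c.name FROM category_edge e
    JOIN category c ON c.id = e.child_id
    WHERE e.parent_id = ?""", (3,)).fetchall()
print(children)  # [('samsung',)]
```

Finding all descendants of a category then becomes a recursive walk over category_edge rather than a single parent-column lookup.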

Mysqldump tables excluding some fields

Posted: 28 Jul 2013 06:43 PM PDT

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO and TATA. I want to mysqldump only some fields of the USER table, excluding fields like mail, ip_login, etc.

How can I do this?

Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambiguous sysindexes error

Posted: 28 Jul 2013 10:43 AM PDT

We are using a Sybase SQL Anywhere 12 database.

The database contains indexes which are unique but shouldn't be.

Therefore I am looking for a quick way to list all tables with unique primary keys.

I tried

SELECT z.name
FROM sysobjects z JOIN sysindexes i ON (z.id = i.id)
WHERE type = 'U'

The result was an error message (in German): Anweisung konnte nicht ausgeführt werden. Tabellenname 'sysindexes' ist mehrdeutig. SQLCODE=-852, ODBC 3-Status="42S13", Zeile 1, Spalte 1

Roughly translated: the statement could not be executed; the table name 'sysindexes' is ambiguous.

I found this query on the internet:

select 'Table name' = object_name(id),
       'column_name' = index_col(object_name(id), indid, 1),
       'index_description' = convert(varchar(210),
         case when (status & 16) <> 0 then 'clustered' else 'nonclustered' end
       + case when (status & 1) <> 0 then ', ' + 'ignore duplicate keys' else '' end
       + case when (status & 2) <> 0 then ', ' + 'unique' else '' end
       + case when (status & 4) <> 0 then ', ' + 'ignore duplicate rows' else '' end
       + case when (status & 64) <> 0 then ', ' + 'statistics'
              else case when (status & 32) <> 0 then ', ' + 'hypothetical' else '' end end
       + case when (status & 2048) <> 0 then ', ' + 'primary key' else '' end
       + case when (status & 4096) <> 0 then ', ' + 'unique key' else '' end
       + case when (status & 8388608) <> 0 then ', ' + 'auto create' else '' end
       + case when (status & 16777216) <> 0 then ', ' + 'stats no recompute' else '' end),
       'index_name' = name
from sysindexes
where (status & 64) = 0
order by id

It looked like what I wanted, but it still failed with the same ambiguous-sysindexes error.

What does "ambiguous sysindexes" mean in this context? Will/can this cause any errors in the future?

As a workaround I used Sybase Central (which, by the way, always opens on the first monitor, not on the one where it was closed; ugly behaviour), and found that the Indices item showed what I was searching for.

But I would still like to know what a programmatic solution looks like.

Why do we need to rebuild and reorganize Indexes in SQL Server

Posted: 28 Jul 2013 07:41 PM PDT

After searching the internet, I couldn't find the reason for the following:

  1. Why do we need to rebuild and reorganize indexes in SQL Server?

  2. What happens internally when we rebuild and reorganize?

An article on a site says:

An index should be rebuilt when index fragmentation is greater than 40%, and reorganized when fragmentation is between 10% and 40%. The index rebuild process uses more CPU and locks database resources. The SQL Server Developer and Enterprise editions have an ONLINE option, which can be turned on when an index is rebuilt; ONLINE keeps the index available during the rebuild.

I couldn't understand this: it says WHEN to do it, but I would like to know WHY we need to rebuild and reorganize indexes.

Mongo connection failing with 'Transport endpoint is not connected'

Posted: 28 Jul 2013 04:43 PM PDT

I have a 2 server installation.

A web server with Apache and a DB server with MongoDB.

I am load testing it, and at ~300 RPS I am getting this error:

PHP Fatal error: Uncaught exception 'MongoConnectionException' with message 'Transport endpoint is not connected'.

The only thing I am noticing is that right before the fail, I am getting a lots of connections on Mongo:

insert  query  update  delete  getmore  command  flushes  mapped  vsize   res  faults  locked %  idx miss %  qr|qw  ar|aw  netIn  netOut  conn  time
     0    659       0       0        0        1        0    208m  1.28g   40m       0         0           0    0|0    0|0    62k    217k   486  03:57:20

Almost 500 connections here... but never more than that!

Mongo is 2.0.3. PHP is 5.3.x (the latest in the Debian install).

Help!

Copy Database option not found in SQL Server Management Studio 2008 R2

Posted: 28 Jul 2013 08:58 AM PDT

I am unable to see the Copy Database option in SQL Server Management Studio 2008 R2.

Can anyone tell me what's going on, please? [Image: Copy Database option not found]

First time running a replication on SQL Server 2008. "The process could not connect to subscriber 'SERVER2'"

Posted: 28 Jul 2013 05:53 PM PDT

I've set up replication between two remote servers, using peer-to-peer and transactional updates. The subscription is all set, but the actual replication doesn't take place.

When viewing the synchronization status of the subscription, I get an error saying "The process could not connect to Subscriber 'SERVER2\DBINSTANCE'."

Clicking Start, this is the error message I get:


TITLE: View Synchronization Status

The agent could not be started.

For help, click: go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1&EvtSrc=Microsoft.SqlServer.Management.UI.ReplUtilitiesErrorSR&EvtID=CantStartAgent&LinkId=20476


ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)


SQLServerAgent Error: Request to run job SERVER\DBINSTANCE-DBNAME-DBPUBLICATION-SERVER2\DBNAME-3 (from User sa) refused because the job is already running from a request by Start Sequence 0. Changed database context to 'DBNAME'. (Microsoft SQL Server, Error: 22022)

For help, click: go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=22022&LinkId=20476


Viewing the agent history to get more detail on the error, I get the following:


Error messages: The process could not execute 'sp_replcmds' on 'SERVER\DBNAME'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help: help/MSSQL_REPL20011 Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517) Get help: help/15517 The process could not execute 'sp_replcmds' on 'SERVER\DBNAME'. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037) Get help: help/MSSQL_REPL22037


I have looked up each and every one of the errors online and went through all kinds of explanations and suggested changes, but nothing worked. Many thanks to anyone who can shed some light on the issue.

How to the get current date without the time part

Posted: 28 Jul 2013 06:40 PM PDT

In SQL Server 2005, how do I get the current date without the time part? I have been using GETDATE(), but I would like it to have a time of 00:00:00.0.
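For context, a widely used SQL Server 2005 idiom is DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0): it counts whole days from the zero anchor date and adds them back, leaving midnight of the current day. The same truncation idea, sketched in SQLite via Python with a fixed timestamp for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# date() drops the time part; datetime() re-expands it to midnight,
# mirroring DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0) in SQL Server.
(midnight,) = conn.execute(
    "SELECT datetime(date('2013-07-28 18:40:55'))").fetchone()
print(midnight)  # 2013-07-28 00:00:00
```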
