Thursday, October 10, 2013

[how to] Declare variable mysql


Declare variable mysql

Posted: 10 Oct 2013 09:11 PM PDT

I am new to MySQL; I have used SQL Server in the past. I am trying to insert rows into a table in batches via a loop, but I am struggling with it. Any help?

DECLARE @minid INT = (SELECT MIN(Product_ID) FROM Product_Report_Offer_Interaction_Jan_2012 pn)
DECLARE @maxid INT = (SELECT MAX(Product_ID) FROM Product_Report_Offer_Interaction_Jan_2012 pn)
DECLARE @topid INT = @minid + 50000

BEGIN
    WHILE ( @minid < @maxid )
    BEGIN
        INSERT INTO Product_Report_Offer_Jan_2012
            (Month, Product_ID, Offerview, EnlargeImage, DynamicPopUp, ShareonFacebook,
             BuyNowPress, Sendtofriend, Printed, StoreLocated, uLike, iPhoneOfferDetails,
             iPhoneStoreLocated, WishlistAd, Tweet, Catalogue1pageview, Catalogue2pageview,
             SearchListing, Featured, RelatedAdImpression, Wishlist)
        SELECT pn.Month, pn.Product_ID, pn.Offerview, pn.EnlargeImage, pn.DynamicPopUp,
               pn.ShareonFacebook, pn.BuyNowPress, pn.Sendtofriend, pn.Printed, pn.StoreLocated,
               pn.uLike, pn.iPhoneOfferDetails, pn.iPhoneStoreLocated, pn.WishlistAd, pn.Tweet,
               pm.Catalogue1pageview, pm.Catalogue2pageview, pm.SearchListing, pm.Featured,
               pm.RelatedAdImpression, pm.Wishlist
        -- select count(*)
        -- select *
        FROM Product_Report_Offer_Interaction_Jan_2012 pn
        LEFT JOIN Product_Report_Offer_Impression_Jan_2012 pm ON pn.product_ID = pm.product_ID
        WHERE pn.product_id != '0'
          AND pn.Product_ID BETWEEN @minid AND @topid

        SET @minid = @minid + 50000 + 1
        SET @topid = @topid + 50000 + 1
    END
END

Thanks jakub
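
For reference, a minimal sketch of how the same batch loop might look in MySQL, where DECLARE and WHILE are only allowed inside stored programs (the column list is shortened; procedure name is hypothetical):

DELIMITER //
CREATE PROCEDURE copy_in_batches()
BEGIN
  DECLARE v_minid INT;
  DECLARE v_maxid INT;
  DECLARE v_topid INT;

  -- seed the batch window from the source table
  SELECT MIN(Product_ID), MAX(Product_ID)
    INTO v_minid, v_maxid
  FROM Product_Report_Offer_Interaction_Jan_2012;

  SET v_topid = v_minid + 50000;

  WHILE v_minid < v_maxid DO
    INSERT INTO Product_Report_Offer_Jan_2012 (Month, Product_ID /* , ...remaining columns */)
    SELECT pn.Month, pn.Product_ID /* , ...remaining columns */
    FROM Product_Report_Offer_Interaction_Jan_2012 pn
    LEFT JOIN Product_Report_Offer_Impression_Jan_2012 pm ON pn.Product_ID = pm.Product_ID
    WHERE pn.Product_ID != 0
      AND pn.Product_ID BETWEEN v_minid AND v_topid;

    -- advance the 50,000-row window
    SET v_minid = v_minid + 50001;
    SET v_topid = v_topid + 50001;
  END WHILE;
END //
DELIMITER ;

CALL copy_in_batches();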

Database schema for record belonging to another record or its sub-record

Posted: 10 Oct 2013 06:22 PM PDT

I have two tables, buildings and tenants, in a one-to-many relationship: a building has many tenants. Here is the schema:

CREATE TABLE buildings
(
  id serial NOT NULL,
  name varchar NOT NULL
);

CREATE TABLE tenants
(
  id serial NOT NULL,
  building_id integer NOT NULL,
  name integer NOT NULL
);

I'm trying to implement another table called images that could contain images of either the building or the tenants.

CREATE TABLE images
(
  building_id int NOT NULL,
  tenant_id int NOT NULL,
  url varchar NOT NULL
);

In this case, the records in the images table will be as follows:

1 | 0 | img-0001.jpg
1 | 5 | img-0002.jpg

img-0002.jpg belongs to tenant 5 of building 1 and img-0001.jpg belongs to building 1.

Any comments or improvement for such a design?
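
One common alternative: give images its own key, keep building_id mandatory, and make tenant_id genuinely optional (NULL rather than 0) so both references can be real foreign keys; a sketch:

CREATE TABLE images
(
  id serial PRIMARY KEY,
  building_id int NOT NULL REFERENCES buildings (id),
  tenant_id int NULL REFERENCES tenants (id),  -- NULL means the image belongs to the building itself
  url varchar NOT NULL
);

With the magic 0 replaced by NULL, the database can enforce that every non-NULL tenant_id actually exists.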

Permissions different between test and live databases

Posted: 10 Oct 2013 05:22 PM PDT

We have two database servers, Test and Live. We restore the live database back to test periodically. We have noticed that a SQL user can execute a stored proc on live but not on the test database, and we can't work out why there is a difference. (Note: the user should be able to execute the stored proc.)

We have looked at the user's properties and they are in the same groups on both servers, and the groups look to be the same. However, when I look at the effective permissions on a particular stored proc, the user has EXECUTE permission on live but not on test:

SQL Permissions

(Live on the left)

How can I find out what is giving access for this user to have permission to Execute on Live?
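
One way to trace this is to compare effective and explicit permissions side by side on each server; a sketch, with the user and procedure names as placeholders:

-- effective permissions, as seen by the user in question
EXECUTE AS USER = 'TheUser';
SELECT * FROM fn_my_permissions('dbo.TheProc', 'OBJECT');
REVERT;

-- explicit grants recorded against the proc, and who holds them
SELECT pr.name AS grantee, pe.permission_name, pe.state_desc
FROM sys.database_permissions pe
JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
WHERE pe.major_id = OBJECT_ID('dbo.TheProc');

Running the second query on both servers should show which principal (often a role or nested group) carries the GRANT on live that is missing on test.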

Syntax for restoring SQL Server backup from non-disk source

Posted: 10 Oct 2013 05:06 PM PDT

I have backed up and restored many a SQL Server database using T-SQL script, but always FROM DISK or TO DISK. I am not a provisioning-concerned database administrator, though, so don't have as much experience with the full range of available backup types.

Now I'm working with a database new to me where the physical_device_name from the table msdb.dbo.backupmediafamily for the full backups is a GUID followed by a number, e.g. {F1554525-23BC-48A2-89E0-4069C2D3ACF3}1. How do I use this? Is it tape, or a physical device name? I have searched online for references to using a GUID in restores or to restoring FROM TAPE and can't seem to find any examples.

Unfortunately at this moment I don't have on-demand access to the server where the backup is and the server doesn't have enough disk space for an extra copy of the database, anyway. But, when I DO get in this situation again, I'd like to know the correct syntax.

RESTORE DATABASE DBName FROM {F1554525-23BC-48A2-89E0-4069C2D3ACF3}1; doesn't look like good syntax to me.

RESTORE DATABASE DBName FROM TAPE = '{F1554525-23BC-48A2-89E0-4069C2D3ACF3}1'; looks more likely. Is it a tape device?

Or is there some additional set of values I need from the backupmediafamily table that will help me know how to write the restore statement (and a RESTORE FILELISTONLY statement, too)?
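
The device type is recorded alongside the name, so it may be easiest to let msdb answer; device_type 2 is disk, 5 is tape, and 7 is a virtual device (typically a third-party backup tool, which would also explain the GUID-style name). A sketch:

SELECT bs.database_name, bs.backup_start_date,
       bmf.device_type, bmf.physical_device_name
FROM msdb.dbo.backupmediafamily AS bmf
JOIN msdb.dbo.backupset AS bs ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_start_date DESC;

If device_type comes back 7, the restore has to go back through the same virtual device interface (the backup product), not through FROM DISK or FROM TAPE.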

Entering Images url faster in mysql database

Posted: 10 Oct 2013 02:45 PM PDT

First, I would like to say I don't have much experience with entering data into databases, so my wording might be a bit off.

I'm working on adding products into my database for a shopping cart. I'm using OpenCart for the shopping cart.

My question is: how can I edit a large number of MySQL fields without doing it manually?

I have edited in all of the products information, like model id, manufacturer, and so on, but not the images because of how slow entering in the images are on OpenCart.

So I'm wondering, is there any way to import all of the image URLs quickly? The images are already uploaded to the server; all I have to do is fill in the image field for each product. I have saved each product's images under the model ID; for example, one product's model is 1194, so the image is 1194.jpg. I have also organized the images to match the manufacturer, for example data/manufacturer/1194.jpg. Is there a way to quickly add all the URLs by having it read the model ID and insert it into the URL?

like insert 'image' data/manufacturer/$model.jpg
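
If the products live in OpenCart's product table (the table and column names here are assumptions; check your installation's prefix), a single UPDATE can derive the path from the model column:

-- build each image path from the product's model id
UPDATE product
SET image = CONCAT('data/manufacturer/', model, '.jpg');

If the folder really varies per manufacturer, the literal 'manufacturer' segment would instead come from a JOIN to the manufacturer table.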

Best way set clustered primary key on a partitioned table?

Posted: 10 Oct 2013 02:40 PM PDT

I have the following table.

create table T (K1 date, K2 int, K3 varchar(10), C1 int, C2....)  

The table will be partitioned by K1 (K1 has low selectivity; data will be appended in K1 order). Which of the following primary keys is preferred?

alter table T add primary key (K1, K2, K3)
alter table T add primary key (K2, K3, K1)
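
If this is SQL Server, note that a unique clustered index created on the partition scheme must include the partitioning column, and since rows arrive in K1 order, leading with K1 keeps inserts append-only within each partition. A minimal sketch (boundary values and filegroup are hypothetical):

create partition function pfByMonth (date)
    as range right for values ('2013-01-01', '2013-02-01');
create partition scheme psByMonth
    as partition pfByMonth all to ([PRIMARY]);

create table T (K1 date, K2 int, K3 varchar(10), C1 int)
    on psByMonth (K1);

-- K1 leads the key, so the clustered index is aligned with the partitioning
alter table T add constraint PK_T primary key clustered (K1, K2, K3);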

PostgreSQL Failover Issue

Posted: 10 Oct 2013 12:31 PM PDT

I have 3 PostgreSQL (9.1) servers, 1 master and 2 slaves, with streaming replication configured between them. When I shut down the primary server and convert one of the slaves to act as primary (using the trigger file), recovery.conf is renamed to recovery.done.

Now my problem: with a Rails application on Postgres the replication works fine, but when there is a change in primary, i.e. a slave is promoted to primary, the Rails app does not know that. What gem should I use so that the Rails app learns about the change in primary, and how should it be configured?

Can't backup a Postgres database: FATAL: Peer authentication failed for user "postgres"

Posted: 10 Oct 2013 03:20 PM PDT

I can connect to my database no problem remotely, either from PHP scripts on our webserver or using PGAdmin3.

Unfortunately, when I try to run pg_dump backups locally on the server itself, I get:

pg_dump --username=postgres -W omega | gzip > omega.10.10.13.gz  
pg_dump: [archiver (db)] connection to database "omega" failed:    FATAL:  Peer authentication failed for user "postgres"  

Previously I'd had no password at all for my database, but to try to get around this I actually gave the postgres user a password. Still no dice, peer authentication fails every time.

Here are the settings in my pg_hba.conf file; please tell me what I can do to fix this. I really want to run some backups on my database.

# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            md5
#host    replication     postgres        ::1/128                 md5

host    all             all             70.89.205.250/24        md5
host    all             all             23.21.112.163/24        md5
host    all             all             24.188.1.163/24         md5
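
Peer authentication only applies to Unix-socket connections, so one workaround that needs no pg_hba.conf change is to force pg_dump over TCP, where the host lines above already use md5; a sketch:

pg_dump -h localhost --username=postgres -W omega | gzip > omega.10.10.13.gz

The -h localhost switch makes libpq connect via 127.0.0.1, matching the "host ... md5" rule and prompting for the password you set.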

Should I join datetime to a date using cast or range?

Posted: 10 Oct 2013 12:29 PM PDT

This question is a take-off from the excellent one posed here:

Cast to date is sargable but is it a good idea?

In my case, I am not concerned with the WHERE clause but in joining to an events table which has a column of type DATE

One table has DATETIME2 and the other has DATE... so I can effectively JOIN using a CAST( AS DATE) or I can use a "traditional" range query (>= date AND < date+1).

My question is which is preferable? The DATETIME values will almost never match the predicate DATE value.

I expect to stay on the order of 2M rows having the DATETIME and under 5k having the DATE (if this consideration makes a difference)

Should I expect the same behavior on the JOIN as I might using the WHERE clause? Which should I prefer to retain performance with scaling? Does the answer change with MSSQL 2012?

My generalized use-case is to treat my events table like a calendar table

SELECT
    events.columns
    ,SOME_AGGREGATIONS(tasks.column)
FROM
    events
LEFT OUTER JOIN
    tasks
        --This appropriately states my intent clearer
        ON CAST(tasks.datetimecolumn AS DATE) = events.datecolumn
        --But is this more effective/scalable?
        --ON tasks.datetimecolumn >= events.datecolumn
        --AND tasks.datetimecolumn < DATEADD(day,1,events.datecolumn)
GROUP BY
    events.columns

I want sql code retrieves fields from different tables in Visual Basic2010

Posted: 10 Oct 2013 11:21 AM PDT


I want SQL code that retrieves fields from different tables, in Visual Basic 2010.

I have four tables:

1) tblTeacher: TID (pk), TName
2) tblSubject: SID (pk), subName, TID (fk), TMID (fk)
3) tblTime: TMID (pk), TMValue
4) tblLecture: LecID (pk), LecDate, TID (fk), subID (fk), TMID (fk) — this table stores registered lectures.

There is a one-to-many relation between the tables (tblLecture is the many side). I want to show LecID, LecDate, TName, subName and TMValue (rows registered in tblLecture only) in a DataGridView. In my project I use an Access database; can I use a SELECT statement with a DataSource, DataSet and TableAdapter, and how do I call the SQL from my project?
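
A sketch of the SELECT, using Access/Jet join syntax (which requires the nested parentheses); it could be pasted into a TableAdapter query and the resulting DataTable bound to the DataGridView:

SELECT l.LecID, l.LecDate, t.TName, s.subName, tm.TMValue
FROM ((tblLecture AS l
INNER JOIN tblTeacher AS t ON l.TID = t.TID)
INNER JOIN tblSubject AS s ON l.subID = s.SID)
INNER JOIN tblTime AS tm ON l.TMID = tm.TMID;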

Inserting two rows from single row

Posted: 10 Oct 2013 04:25 PM PDT

I have to import users from the usersToImport table into the userContact table. usersToImport contains telephone and e-mail information in a single row for each user, but userContact stores one kind of contact information per row.

Here's how I'm doing it currently: http://sqlfiddle.com/#!6/c9b2e/1 (for some reason, it's not outputting anything, but the code works in SSMS).

Is there a way to do this without using two different selects? Maybe self-joining usersToImport or using case somehow.
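
One single-pass option (SQL Server 2005+) is CROSS APPLY over a VALUES constructor, which unpivots each source row into one row per contact kind; the column names below are assumptions, since the fiddle schema isn't visible:

INSERT INTO userContact (userId, contactType, contactValue)
SELECT u.userId, v.contactType, v.contactValue
FROM usersToImport AS u
CROSS APPLY (VALUES ('phone', u.telephone),   -- one output row per pair
                    ('email', u.email)) AS v(contactType, contactValue)
WHERE v.contactValue IS NOT NULL;  -- skip missing contact kinds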

PostgreSQL Cascade Delete not working

Posted: 10 Oct 2013 11:49 AM PDT

What am I missing here?

I have two tables:

CREATE TABLE source
(
  "sourceid" serial,
  "state" character varying(255) REFERENCES states (state),
  "leadtime" character varying(255),
  CONSTRAINT source_pkey PRIMARY KEY (sourceid)
);
CREATE INDEX ON source (state ASC NULLS LAST);

CREATE TABLE updates
(
  "updateid" serial,
  "sourceid" integer REFERENCES source (sourceid) ON DELETE CASCADE,
  "skip" boolean,
  CONSTRAINT update_pkey PRIMARY KEY (updateid)
);
CREATE INDEX ON updates (sourceid ASC NULLS LAST);

source is the parent table and updates the child. I want any records deleted from source to cascade down and delete all records in updates that have the same sourceid.

I have also tried adding the REFERENCES constraint to the sourceid field of the source table.

Thanks! Brad
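
It may be worth confirming that the constraint actually in the database was created with CASCADE (for example, if updates existed before the FK was added without it); a quick catalog check, where confdeltype 'c' means CASCADE and 'a' means NO ACTION:

SELECT conname, confdeltype
FROM pg_constraint
WHERE conrelid = 'updates'::regclass
  AND contype = 'f';

If it shows 'a', the fix is to drop that constraint and re-add it with ON DELETE CASCADE.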

Is there a general guideline for table partitioning? [on hold]

Posted: 10 Oct 2013 02:00 PM PDT

  1. Will you partition the table when it grows to MMM MB? Or NNN rows?
  2. What size do you want to keep each partition at? XXX MB? Or YYY rows?

What MMM, NNN, XXX or YYY values would you choose? For example, with a table of 500 GB you would almost certainly partition. How about a table with 200M rows? Or a table of 5 GB?

Multi Server Query generating Login failures

Posted: 10 Oct 2013 02:01 PM PDT

My problem is that when using multi server queries using SSMS from SQL Server 2008 R2 against SQL Server 2005, SQL Server 2008R2 and SQL Server 2000 I occasionally get error messages such as:

SQL Server 2008R2

Login failed for user 'DomainName\ADName'. Reason: Failed to open the explicitly specified database. [CLIENT: 21.12.0.0]
Error: 18456, Severity: 14, State: 38.

SQL Server 2005

2013-10-10 15:55:34.28 logon Login failed for user 'DomainName\ADName'.
Error: 18456, Severity: 14, State: 16.

SQL Server 2000

Login failed for user 'DomainName\ADName'. [CLIENT: 21.12.0.0]

Now, my query works fine, but given that what it does is search for errors in logs, it is not ideal that it should also generate such errors.

My methodology may be flawed, what I have done is create a group under Local Server Groups and registered a few dozen servers of various versions. Then I right click on this group, select New Query to open a new batch window. Lastly I open my saved script and run it. In rough outline this is what it does

use master
go
-- determine version of SQL as like '%8.0%' or not
-- create temporary table based on version and pump results of xp_readerrorlog 0-6 into it
-- translate temporary table to another and insert to results temporary table
-- select on results table
-- drop temporary tables
go

I am a sysadmin on all machines. Two things I tried was the 'use master' statement at the top of the script and changing the registered servers to have a "Connect to database" of master. Neither of these has eliminated the problem. Does anyone have any further suggestions?

I want sql code retrieves fields from different tables? [on hold]

Posted: 10 Oct 2013 10:37 AM PDT

I want SQL code that retrieves fields from different tables, in Visual Basic 2010. Thank you very much. I have four tables: 1) tblTeacher: TID (pk), TName; 2) tblSubject: SID (pk), subName, TID (fk), TMID (fk); 3) tblTime: TMID (pk), TMValue; 4) tblLecture: LecID (pk), LecDate, TID (fk), subID (fk), TMID (fk), which stores registered lectures. There is a one-to-many relation between the tables (tblLecture is the many side). I want to show LecID, LecDate, TName, subName and TMValue (rows registered in tblLecture only) in a DataGridView. In my project I use an Access database; can I use a SELECT statement with a DataSource, DataSet and TableAdapter, and how do I call the SQL from my project?

Can this query be optimized further?

Posted: 10 Oct 2013 03:43 PM PDT

I have 100K records in my Documents table. This is my query, which returns a page of 50 records in roughly 750 ms; I would like to bring that down further if possible. I am using the ROW_NUMBER function for paging.

exec sp_executesql N'
set arithabort off;
set statistics time on;
set transaction isolation level read uncommitted;

With cte as
(
    Select
        peta_rn = ROW_NUMBER() OVER (ORDER BY d.LastStatusChangedDateTime desc),
        d.DocumentID, d.IsReEfiled, d.IGroupID, d.ITypeID, d.RecordingDateTime,
        d.CreatedByAccountID, d.JurisdictionID,
        d.LastStatusChangedDateTime as LastStatusChangedDateTime, d.IDate,
        d.InstrumentID, d.DocumentStatusID, u.Username, it.Abbreviation AS ITypeAbbreviation,
        ig.Abbreviation AS IGroupAbbreviation, d.DocumentDate
    From ITypes it
    Inner Join Documents d on it.ITypeID = d.ITypeID
    Inner Join Users u on d.UserID = u.UserID
    Inner Join IGroupes ig on ig.IGroupID = d.IGroupID
    Where
        1=1 AND
        ( d.DocumentStatusID = 9 ) And
        (CreatedByAccountID = @0 Or DocumentStatusID = @1 Or DocumentStatusID = @2) And
        ( d.IGroupID = @3 Or d.IGroupID = @4 ) And
        (d.JurisdictionID = @5 Or DocumentStatusID = @6 Or DocumentStatusID = @7)
)
Select
    cte.DocumentID, cte.IsReEfiled, cte.IGroupID, cte.ITypeID, cte.RecordingDateTime,
    cte.CreatedByAccountID, cte.JurisdictionID,
    cte.LastStatusChangedDateTime as LastStatusChangedDateTime, cte.IDate, cte.InstrumentID,
    cte.DocumentStatusID, cte.IGroupAbbreviation, cte.Username, j.JDAbbreviation, inf.DocumentName,
    cte.ITypeAbbreviation, cte.DocumentDate, ds.Abbreviation as DocumentStatusAbbreviation,
    ds.Name as DocumentStatusName, (SELECT COUNT(*) FROM CTE) AS TotalRecords
FROM cte
Inner Join DocumentStatuses ds On cte.DocumentStatusID = ds.DocumentStatusID
Left Join InstrumentFiles inf On cte.DocumentID = inf.DocumentID
Inner Join Jurisdictions j on j.JurisdictionID = cte.JurisdictionID
Where
    1=1 And
    peta_rn > @8 AND
    peta_rn <= @9
Order by peta_rn',
N'@0 int,@1 int,@2 int,@3 int,@4 int,@5 int,@6 int,@7 int,@8 int,@9 int',
@0=44,@1=5,@2=9,@3=3,@4=4,@5=1,@6=5,@7=9,@8=90000,@9=90050

This is my statistics output:

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

(50 row(s) affected)     SQL Server Execution Times:     CPU time = 717 ms,  elapsed time = 727 ms.  

and this is my query plan XML. All indexes are created properly. I am unable to post it here due to the body limit of 30K:

https://www.dropbox.com/s/vrx5ec6cpi8vn33/plan.txt

Any help would be appreciated. Thanks!

MySQL performance for key value vs multiple columns [on hold]

Posted: 10 Oct 2013 06:57 PM PDT

For a key value store I will have 3 columns

id as a primary key,

attr as an index with a set of values,

value can be anything.

Each id can have multiple attr.

For selecting, I will be doing

SELECT id, GROUP_CONCAT('attr')
FROM `my_table`
WHERE `attr` IN ("a", "b", "c"...)
  AND `value` = 1
GROUP BY `id`

For multiple columns

I have id as a primary key and a column for each value in attr.

For selecting, I will be doing

SELECT *
FROM `my_table`
WHERE "a" = 1
  AND "b" = 1
  AND "c" = 1
  ...

My question is: performance-wise, which design gives better query performance when there are a lot of attr values?

Duplicating database on the same server without data loss

Posted: 10 Oct 2013 08:09 PM PDT

I want to duplicate an InnoDB database on my MySQL server for testing another version of the software. Using tools such as phpMyAdmin, the copy comes out around 20 MB smaller and a "lot" of rows are missing.

Just copying the folder under /var/lib/mysql won't work, because I didn't enable anything special for the InnoDB engine.

How can I create a duplicate of an innodb database?
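
A minimal sketch, assuming the source database is named app_live and you have shell access; --single-transaction takes a consistent InnoDB snapshot without locking the tables:

mysqladmin -u root -p create app_test
mysqldump -u root -p --single-transaction --routines app_live | mysql -u root -p app_test

Unlike copying /var/lib/mysql, this works regardless of whether the tables live in the shared ibdata tablespace.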

Should I create a table separately for each employee or one table for all? [duplicate]

Posted: 10 Oct 2013 11:42 AM PDT

I'm trying to figure out how to set up the Database structure.

There are ~10,000 employees. There will be a table to store the basic details of each employee, and another table for storing the employee's transactions, so I decided to create a separate table for each employee to record their transactions, which means ~10,000 tables.

So my question is, is it efficient to create that much of tables(~10000), or any other approach you can suggest, I will be using MySQL as database and a Java client, in a LAN.
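
The usual design is one transactions table keyed by employee rather than a table per employee; 10,000 tables is very hard on MySQL's table cache and impossible to query across. A minimal sketch with hypothetical columns:

CREATE TABLE employee_transactions (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  employee_id INT NOT NULL,
  tx_date DATETIME NOT NULL,
  amount DECIMAL(12,2) NOT NULL,
  INDEX idx_emp_date (employee_id, tx_date),     -- per-employee lookups stay fast
  FOREIGN KEY (employee_id) REFERENCES employees (id)
) ENGINE=InnoDB;

The composite index makes "all transactions for employee X in a date range" as cheap as a per-employee table would be.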

Postgres functions vs prepared queries

Posted: 10 Oct 2013 03:39 PM PDT

I'm sure it's there but I've not been able to find a simple definitive answer to this in the docs or via Google:

In Postgres, are prepared queries and user defined functions equivalent as a mechanism for guarding against SQL injection? Are there particular advantages in one approach over the other? Thanks

Classifieds Database Design

Posted: 10 Oct 2013 10:25 AM PDT

I have always worked with CMSs, but I am trying to get into using frameworks like Laravel and Yii. My main issue is when working with CMSs, I didn't have to think much about the database design since it was done for me. I have my plan drawn out on paper, but I am not really sure where to go from here....

I am trying to develop a Craigslist clone, but a little more specific. I have Googled all over for designs, and this is currently what I have.

However, I want certain categories to have specific fields. Some categories may have fields in common with other categories, but not all categories are the same.

For example:

    - Categories
      - Electronics
        - Title
        - Type
        - Description
        - Year
      - Furniture
        - Title
        - Type
        - Description
        - Brand
      ....

Those are just two examples, but I have a huge list of Categories and the required fields for each category.

My current plan is to load all of these fields into the ad table. What effect will this have on performance? At some point there could be 60 fields attached to the ad table, but only 5-10 may be filled at a time, and the others would be empty/NULL.

What is the best way to go about associating images with ads? I was thinking to just create an assets folder and create subfolders based on the ad id, and upload images to those subfolders of the corresponding ad id. Something like...

    - Public
      - Assets
        - Images
          - 1 (post id = 1)
            - img001.jpg
            - img002.jpg
          ...

What's the best way to set up this kind of database? Would sticking to MySQL be best for this? What if I want some states to have certain categories but not others?
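
Instead of 60 mostly-NULL columns on the ad table, one common alternative is to keep the per-category fields as data; a rough sketch (table and column names are just illustrative):

CREATE TABLE category_fields (
  id INT AUTO_INCREMENT PRIMARY KEY,
  category_id INT NOT NULL,          -- which category this field belongs to
  name VARCHAR(50) NOT NULL          -- e.g. 'Year', 'Brand'
);

CREATE TABLE ad_field_values (
  ad_id INT NOT NULL,
  field_id INT NOT NULL,
  value VARCHAR(255),
  PRIMARY KEY (ad_id, field_id)
);

This keeps the ad table narrow and lets new categories add fields without schema changes, at the cost of more joins (the usual EAV trade-off).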

InnoDB Failure of some kind

Posted: 10 Oct 2013 02:25 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla, but it failed. I went into their SQL, replaced ENGINE=InnoDB with MyISAM, and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine

MySQL is running but not working

Posted: 10 Oct 2013 04:26 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their databases.

Here is why is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql from SSH using root but I see that mysql activity is reduced, look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

My websites using MySQL almost all say:

Error establishing a database connection  

Another say:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin

#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented most of it out to return it to its simplest version... How can I make the web side connect to MySQL?

Mistake during Oracle 11g PITR

Posted: 10 Oct 2013 12:25 PM PDT

I tried using SET UNTIL TIME ... and mistyped the date. Can anyone help me understand how to get my backups into a manageable state?

After the accidental recovery, most of my backupsets disappeared.

I recovered them and used 'catalog recovery area', and they're listed in 'list backupset'. But something still isn't right.

When I do a PITR now, I get messages that my dbf files aren't available,

and the 'list backupset' output seems to show backupsets, but they are listed differently from the files which weren't included in the 'bad' recovery.

Gists with the error and the list of backupsets are here https://gist.github.com/akinsgre/5561254

How can I convert an Oracle dump file into SQL Server?

Posted: 10 Oct 2013 12:58 PM PDT

I want to ask about converting Oracle dump files (.dmp) into SQL Server files (.bak), where during conversion I don't have to be connected to any database server.

I've searched for related technologies, such as Oradump to SQL Server. Do you have another suggestion to solve this? Open source ones I mean.


Thanks for both of your responses. I see how difficult it will be, but is there any other way to convert an Oracle dump file? All of the converter tools I've found require a database server connection. I would be grateful if you could suggest another tool. Thanks anyway.

SQL Server LocalDB Instance; error restoring backup (MasterDBPath not available)

Posted: 10 Oct 2013 04:34 PM PDT

I have SQL Server 2012 Local DB Installed, and am connecting to it via (localdb)\v11.0.

When trying to restore a backup file, I get:

TITLE: Microsoft SQL Server Management Studio

Property MasterDBPath is not available for Information 'Microsoft.SqlServer.Management.Smo.Information'. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.2100.60+((SQL11_RTM).120210-1846+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.PropertyCannotBeRetrievedExceptionText&EvtID=MasterDBPath&LinkId=20476

I have tried setting MasterDBPath string value in:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11E.LOCALDB\MSSQLServer\CurrentVersion

but this hasn't helped.

Anyone have any thoughts?

Table not getting created under dbo schema

Posted: 10 Oct 2013 09:05 PM PDT

When creating tables in SSMS, I've realized that if you execute the following statement:

CREATE TABLE [tableName];  

the table will be created under your own schema (and not dbo). So in order to create it under the dbo schema, you'll need to explicitly say so, like this:

CREATE TABLE [dbo].[tableName];  

Does anyone know of a way (eg. server-wide setting) so that when a table is created, the [dbo] part doesn't need to be specified?
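
As far as I know there is no server-wide setting for this; the default schema is a per-database property of the user, so the usual fix is to set it explicitly (a sketch, with a hypothetical user name):

ALTER USER [DomainName\SomeUser] WITH DEFAULT_SCHEMA = dbo;

After that, an unqualified CREATE TABLE from that user lands in dbo in that database.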

No NULLs, yet invalid byte sequence for encoding "UTF8": 0x00

Posted: 10 Oct 2013 02:31 PM PDT

I've spent the last 8 hours trying to import the output of 'mysqldump --compatible=postgresql' into PostgreSQL 8.4.9, and I've already read at least 20 different threads here and elsewhere about this specific problem, but found no real usable answer that works.

MySQL 5.1.52 data dumped:

mysqldump -u root -p --compatible=postgresql --no-create-info --no-create-db --default-character-set=utf8 --skip-lock-tables rt3 > foo  

PostgreSQL 8.4.9 server as destination

Loading the data with 'psql -U rt_user -f foo' is reporting (many of these, here's one example):

psql:foo:29: ERROR:  invalid byte sequence for encoding "UTF8": 0x00  HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".  

According the following, there are no NULL (0x00) characters in the input file.

database-dumps:rcf-temp1# sed 's/\x0/ /g' < foo > nonulls
database-dumps:rcf-temp1# sum foo nonulls
04730 2545610 foo
04730 2545610 nonulls
database-dumps:rcf-temp1# rm nonulls

Likewise, another check with Perl shows no NULLs:

database-dumps:rcf-temp1# perl -ne '/\000/ and print;' foo
database-dumps:rcf-temp1#

As the "HINT" in the error mentions, I have tried every possible way to set 'client_encoding' to 'UTF8', and I succeed but it has no effect toward solving my problem.

database-dumps:rcf-temp1# psql -U rt_user --variable=client_encoding=utf-8 -c "SHOW client_encoding;" rt3
 client_encoding
-----------------
 UTF8
(1 row)

database-dumps:rcf-temp1#

Perfect, yet:

database-dumps:rcf-temp1# psql -U rt_user -f foo --variable=client_encoding=utf-8 rt3
...
psql:foo:29: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
...

Barring the "According to Hoyle" correct answer, which would be fantastic to hear, and knowing that I really don't care about preserving any non-ASCII characters for this seldom-referenced data, what suggestions do you have?

Update: I get the same error with an ASCII-only version of the same dump file at import time. Truly mind-boggling:

database-dumps:rcf-temp1# # convert any non-ASCII character to a space
database-dumps:rcf-temp1# perl -i.bk -pe 's/[^[:ascii:]]/ /g;' mysql5-dump.sql
database-dumps:rcf-temp1# sum mysql5-dump.sql mysql5-dump.sql.bk
41053 2545611 mysql5-dump.sql
50145 2545611 mysql5-dump.sql.bk
database-dumps:rcf-temp1# cmp mysql5-dump.sql mysql5-dump.sql.bk
mysql5-dump.sql mysql5-dump.sql.bk differ: byte 1304850, line 30
database-dumps:rcf-temp1# # GOOD!
database-dumps:rcf-temp1# psql -U postgres -f mysql5-dump.sql --variable=client_encoding=utf-8 rt3
...
INSERT 0 416
psql:mysql5-dump.sql:30: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 455
INSERT 0 424
INSERT 0 483
INSERT 0 447
INSERT 0 503
psql:mysql5-dump.sql:36: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 502
INSERT 0 507
INSERT 0 318
INSERT 0 284
psql:mysql5-dump.sql:41: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 382
INSERT 0 419
INSERT 0 247
psql:mysql5-dump.sql:45: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 267
INSERT 0 348
^C

One of the tables in question is defined as:

                                        Table "public.attachments"       Column      |            Type             |                        Modifie  -----------------+-----------------------------+--------------------------------   id              | integer                     | not null default nextval('atta)   transactionid   | integer                     | not null   parent          | integer                     | not null default 0   messageid       | character varying(160)      |   subject         | character varying(255)      |   filename        | character varying(255)      |   contenttype     | character varying(80)       |   contentencoding | character varying(80)       |   content         | text                        |   headers         | text                        |   creator         | integer                     | not null default 0   created         | timestamp without time zone |  Indexes:      "attachments_pkey" PRIMARY KEY, btree (id)      "attachments1" btree (parent)      "attachments2" btree (transactionid)      "attachments3" btree (parent, transactionid)  

I do not have the liberty to change the type for any part of the DB schema. Doing so would likely break future upgrades of the software, etc.

The likely problem column is 'content' of type 'text' (perhaps others in other tables as well). As I already know from previous research, PostgreSQL will not allow NULL in 'text' values. However, please see above where both sed and Perl show no NULL characters, and then further down where I strip all non-ASCII characters from the entire dump file but it still barfs.
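
One possibility worth checking: mysqldump writes NUL bytes inside string literals as the two-character escape sequence \0, which a scan for a literal 0x00 byte will never find, but which the server decodes back into a real NUL on load. A rough check and (lossy) workaround, assuming no legitimate backslash-zero pairs need preserving:

# count candidate escaped NULs, then blank them out
grep -c '\\0' foo
sed 's/\\0/ /g' < foo > foo-nonul

This would also explain why the ASCII-only version still fails: both backslash and '0' are ASCII.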

Is it possible to restore sql-server bak and shrink the log at the same time?

Posted: 10 Oct 2013 08:11 PM PDT

We have a .bak file from a client that we have transferred to our developer offices for problem investigation. The backup is currently 25 GB and the restored database is about the same size; however, it needs 100 GB to be restored. I believe this is because their database is set up with a 75 GB transaction log. After restoring the database we can shrink the log file, but is there a way to do this in the restore?
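
As far as I know there is no way to shrink during the RESTORE itself: the file sizes recorded in the backup are recreated as-is, so the 100 GB is needed once. What you can do is shrink the nearly empty log immediately afterwards; a sketch with hypothetical paths and logical file names (check the real ones with RESTORE FILELISTONLY):

RESTORE DATABASE DBName
FROM DISK = N'C:\backups\client.bak'
WITH MOVE 'DBName_Data' TO N'D:\data\DBName.mdf',
     MOVE 'DBName_Log'  TO N'D:\data\DBName.ldf',
     REPLACE;

-- reclaim the oversized log once restored
USE DBName;
DBCC SHRINKFILE ('DBName_Log', 1024);  -- target size in MB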

[Articles] Manners


Manners

Posted: 09 Oct 2013 11:00 PM PDT

In dealing with other professionals, you would like to think most people have good manners. Steve Jones reminds us that this should apply on the Internet as well.

[MS SQL Server] Policy Based Management - Naming Convention for Indexes after Create Date


Policy Based Management - Naming Convention for Indexes after Create Date

Posted: 10 Oct 2013 02:34 AM PDT

Hi there,

I've been trying to figure out a custom way to evaluate a policy naming convention for indexes; however, I would like it to run only against indexes created after, let's say, 10/1/2013. I have the naming convention working for every index (Condition: must start with usp_). However, how can I get it to run only against indexes created after 10/1/2013? I tried setting the following, but it does not work:

Facet: Index

Expression:
Field = ExecuteSql('Numeric', 'Set NOCOUNT ON; select crdate from sysindexes i join sysobjects o ON o.id = i.id')
Operator: >=
Value = ExecuteSql('Numeric', 'Set NOCOUNT ON; Declare @CreateDate datetime set @CreateDate=''10/01/2013'' Select @CreateDate')

Any suggestions or ideas? Everything is possible. :)

Access a Postgre SQL DB from SQL 2008

Posted: 10 Oct 2013 02:00 AM PDT

Hi,

We have an internal SQL 2008 server and a remote PostgreSQL server, and we want to import data from PostgreSQL to MSSQL. We made tests with Excel and an OLE DB driver and they were successful, so we do not have any firewall problems. Then I tried to create a linked server in MSSQL, entered the name and host, and changed the remote login to a specific user, but it failed. The error message said the PostgreSQL server could not be reached at port 5432, the standard port. That's correct... because I need to use a different port, 5504, to connect through the remote firewall. How can I change the port used to my special port? Any help is very much appreciated!

Regards
Oliver
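
If the linked server goes through ODBC (the MSDASQL provider), the non-standard port can usually be supplied in the provider string; a sketch with hypothetical names, assuming the PostgreSQL ODBC driver is installed on the SQL Server box:

EXEC master.dbo.sp_addlinkedserver
     @server     = N'PGLINK',
     @srvproduct = N'PostgreSQL',
     @provider   = N'MSDASQL',
     @provstr    = N'Driver={PostgreSQL Unicode};Server=remote.host;Port=5504;Database=mydb;';

The Port= key in the connection string is what overrides the 5432 default.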

Ask Question to Vendor?

Posted: 09 Oct 2013 09:48 PM PDT

Hi. What points do we need to raise with a vendor about the database and DBA-related topics?

1. Database design and performance?
2. Database administration, and the different types of documents available from the vendor?

rgds
ananda

[SQL 2012] Installer for SQLDOM.msi?


Installer for SQLDOM.msi?

Posted: 03 Oct 2013 03:56 AM PDT

I just got my Dev 2012 edition, and when I checked the prereqs, it said I needed to download and install SQLDOM.msi. So I downloaded it, but when I try to install it, it tells me Windows Installer is not the right program to install it. This confuses me. If Windows Installer isn't the needed program, what is? (I'm on Windows 7 Pro 64-bit, btw.) Any ideas?

Change tables from Partitioned to non-Partitioned

Posted: 09 Oct 2013 02:22 PM PDT

We're considering moving a database from SQL 2012 Enterprise to Standard, but it has a number of partitioned tables I'd like to change to be non-partitioned. I found this link, which gives some examples of doing this: http://www.patrickkeisler.com/2013/01/how-to-remove-undo-table-partitioning.html

But I'm curious to know if I can just create new tables on [PRIMARY], copy the data from the partitioned tables to the unpartitioned tables, delete the original partitioned tables, then delete the partition schemes and functions. The tables I'm looking at are all heaps, so no indexes. Thanks for any guidance or suggestions.

Account type for SQL server services

Posted: 09 Oct 2013 07:52 AM PDT

When you set up a service account for SQL Server services, do you use a domain account, an MSA account, or a virtual account? Most of our SQL Servers run SSIS, so the service needs to access resources on other SQL Servers. In this case, what is the best option for the service: a domain account, an MSA, or a virtual account? Currently we are on SQL 2008 and use a domain account for the SQL Server service.

http://technet.microsoft.com/en-us/library/ms143504.aspx

I am a little confused about MSA accounts. It says: "When resources external to the SQL Server computer are needed, Microsoft recommends using a Managed Service Account (MSA), configured with the minimum privileges necessary." It also says: "You cannot use a MSA to log into a computer, but a computer can use a MSA to start a Windows service." So if it cannot log in to a computer, how can SSIS connect to another server through the SQL Agent service using an MSA account?

Thanks,

[T-SQL] Conversion Error


Conversion Error

Posted: 09 Oct 2013 11:05 PM PDT

I have an error message in my query:

Msg 245, Level 16, State 1, Line 72
Conversion failed when converting the nvarchar value 'H-00001' to data type int.

Even after using CAST(columnname as NVARCHAR(1000)), the error message persists. Please help.

Why are virtual tables in a FROM clause unable to reference aliases from the outer query?

Posted: 09 Oct 2013 02:29 AM PDT

I guess I am asking: "Why doesn't a virtual table in a FROM clause behave like other subqueries in SELECT or WHERE that are able to make use of the aliases in the FROM clause?" Is it because the aliases in the outer FROM haven't been created yet, since the virtual table has to be created first?

Forgive me for not posting sample data; I was hoping this example would speak for itself. It fails with "multi-part identifier "ce.id" could not be bound". Also, my formatting is being mangled a bit by quote paste.

SELECT ce.id as ClientEnrollmentKey, max(cs.ID) as MaxClientStatusKey
FROM ClientEnrollment ce
INNER JOIN ClientPacket cp on ce.ID = cp.ClientEnrollmentKey
INNER JOIN ClientStatus cs on cp.ID = cs.PacketKey
INNER JOIN (SELECT max(cp.Packet_Date) as MaxClientStatusDate
            FROM ClientPacket cp
            INNER JOIN ClientStatus cs on cp.ID = cs.PacketKey
            where cp.ClientEnrollmentKey = ce.id) t1 -- ce from outer from is invisible
    on cp.Packet_date = t1.MaxClientStatusDate
group by ce.id
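
That is essentially right: a derived table in FROM is evaluated independently of the rest of the FROM clause, so it cannot see outer aliases; APPLY was added for exactly this lateral correlation. A sketch of the same query rewritten with CROSS APPLY (SQL Server 2005+), moving the date match into WHERE:

SELECT ce.id AS ClientEnrollmentKey, MAX(cs.ID) AS MaxClientStatusKey
FROM ClientEnrollment ce
INNER JOIN ClientPacket cp ON ce.ID = cp.ClientEnrollmentKey
INNER JOIN ClientStatus cs ON cp.ID = cs.PacketKey
CROSS APPLY (SELECT MAX(cp2.Packet_Date) AS MaxClientStatusDate
             FROM ClientPacket cp2
             INNER JOIN ClientStatus cs2 ON cp2.ID = cs2.PacketKey
             WHERE cp2.ClientEnrollmentKey = ce.id) t1   -- ce IS visible to APPLY
WHERE cp.Packet_Date = t1.MaxClientStatusDate
GROUP BY ce.id;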

Data Insertion

Posted: 09 Oct 2013 11:18 PM PDT

Hi, I am stuck on a data-insertion scenario. Please help.

Create table table1
(
  OfficeID int,
  ProjectID int,
  TaskID int,
  TaskName varchar(100)
  constraint pk_table primary key (OfficeID, ProjectID, TaskID)
)

I am inserting data into this table through an XML file in which TaskId is always 0:

<INPUT><CID>17064</CID><OwnerID>17064</OwnerID><TaskId>0</TaskId><MSG>test follow up</MSG></INPUT>
<INPUT><CID>17064</CID><OwnerID>17064</OwnerID><TaskId>0</TaskId><MSG>test follow up 123</MSG></INPUT>

Every time data gets inserted into table1, TaskID should be the max on the basis of OfficeID and ProjectID. I am storing this XML in a temp table, then joining the base table with the temp table.

Output desired:

OfficeID  ProjectID  TaskID  TaskName
17064     17064      1       Test follow up1   -- already exists in table
17064     17064      2       Test follow up2
17064     17064      3       Test follow up 123

Capitalizing only first letter of each Word in a Column using Query

Posted: 09 Oct 2013 07:00 PM PDT

Hi, I have a column full of capital-letter words like "ABC DEF GHI", and I want to convert that into "Abc Def Ghi". Please help me out, it's very urgent! I have googled it, but all I find are stored procedures, and I don't want stored procedures. I just want to use an UPDATE command and finish it. Thank you.

String Split and LEFT OUTER JOIN in Two Tables

Posted: 09 Oct 2013 07:44 AM PDT

Hello Everyone

I am working on an issue that involves string splitting. It really bothers me that people will not design a database correctly and end up storing a delimited list in a single column. I do have the function named "DelimitedSplit8K"; I love that function.

The thing I am working on: I need to split two different strings and place the now-"rows" of data into tables, each of their own (done), then, using a LEFT OUTER JOIN, select the rows that are in one table and not in the other (need assistance). I know how to use the string splitter, and it works fine, but I am having trouble performing the LEFT OUTER JOIN.

CREATE TABLE #FirstDataString
(
  FDataString varchar(250)
)
CREATE TABLE #SecondDataString
(
  SDataString varchar(250)
)

INSERT INTO #FirstDataString (FDataString)
VALUES ('1,3,6,8,9,10,12,13,15,16,20,21,24,26,28,31,33,34,35,49,51,52,53,57,58,61,63,66,69,70,71,75,76,77,78,79,81,82,85,88,90,91,92,93,94,95,96,97,98,99,100,105,107,110,111,112,118,122,124,125')

INSERT INTO #SecondDataString (SDataString)
VALUES ('1,3,5,8,9,10,13,15,16,20,24,26,28,31,33,35,49,51,52,57,58,61,63,69,70,75,76,77,78,79,81,85,88,90,91,92,94,95,96,97,98,100,105,110,111,118,122,125')

SELECT Item AS FirstItem
FROM #FirstDataString f
CROSS APPLY dbo.DelimitedSplit8k(F.FDataString,',') split

SELECT Item AS SecondItem
FROM #SecondDataString s
CROSS APPLY dbo.DelimitedSplit8k(S.SDataString,',') split

DROP TABLE dbo.#FirstDataString;
DROP TABLE dbo.#SecondDataString;

I hope that someone can assist me with this query. I feel that I am close, but something is just not making it, and I cannot figure that part out. Thank you in advance for all your assistance, comments and suggestions.

Andrew SQLDBA
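
Assuming the goal is the items present in the first string but missing from the second, one way is to wrap the second splitter in a derived table and LEFT JOIN to it, keeping only the unmatched rows:

SELECT f.Item AS FirstItem
FROM #FirstDataString fd
CROSS APPLY dbo.DelimitedSplit8K(fd.FDataString, ',') f
LEFT OUTER JOIN
(
    SELECT s.Item
    FROM #SecondDataString sd
    CROSS APPLY dbo.DelimitedSplit8K(sd.SDataString, ',') s
) miss ON miss.Item = f.Item
WHERE miss.Item IS NULL;   -- no match in the second list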

query to order by multiple columns

Posted: 09 Oct 2013 04:33 AM PDT

I have a query to pull the first contact of students. The table has all contacts: parent/guardian, friends, family, emergency contact, etc. I would like the first primary contact in this order: 1. initial contact; 2. same as student address and also a parent; 3. lives with and also a parent; 4. parents; 5. friends. I don't know how to pull 2 and 3, because it looks like it needs to combine the columns. Here is my initial query:

Select Min(U2.Id) Keep (Dense_Rank First Order By U2.Initial_Contact Desc, U2.Same_As_Students_Address Desc, u2.lives_with DESC, U2.Guardian Desc) From Contacts

Thanks

[SQL Server 2008 issues] Connect from LEI Server


Connect from LEI Server

Posted: 09 Oct 2013 06:32 PM PDT

Didn't know where to ask or how to google this issue, so I'll throw it out here. Trying to import data from Lotus Domino to SQL Server 2008 R2. This is a clustered server with two nodes. When Node B is active I can connect from LEI to SQL Server, but when Node A is active it just fails. Any idea what is wrong with it?

Case statement returning duplicates

Posted: 08 Oct 2013 09:56 PM PDT

Hi

I have written a query in SQL. When I run it with the case statement, it returns duplicate figures. Any reason why it would be doing this?

MS SQL 2008 DBA certification

Posted: 09 Oct 2013 02:30 PM PDT

What happens when the certification is retired?

Need help in SQL Query

Posted: 08 Oct 2013 08:40 PM PDT

Hi, I have a group called the Leaders group. Users belonging to this Leaders group should be able to see only the people information belonging to the Leaders group. The following people belong to the Leaders group: [a,,c,d,e,f]. I need to display this in an SSRS report. Can anyone help me with the query?

Difficulty in creating the proper Update Statement

Posted: 09 Oct 2013 10:18 AM PDT

I am in the process of converting some old tables into a new format, and I need to do an update as an intermediate step. I thought this would be fairly straightforward, but of course... :(

Here is the DDL and the insert statements:

declare @t1 table
(
  [RIN] [int],
  [Type] [tinyint],
  [Code] [int],
  [Lot] [varchar](50),
  [ShouldBe] [int] NULL
)
insert @t1 values
 (130363, 2, 145, 0033, NULL)
,(132757, 2, 145, 0033, NULL)
,(165041, 2, 145, 0033, NULL)
,(160574, 2, 145, 0034, NULL)
,(160575, 2, 145, 0034, NULL)
,(132763, 2, 145, 0035, NULL)
,(137219, 2, 145, 0035, NULL)
,(140963, 2, 145, 0035, NULL)

What I am trying to do is point the duplicate rows' ShouldBe column at the RIN of the first row of the duplicates. I have tried the following, but I am pretty sure that I am violating an update principle of SQL:

;with cte as
(
  select RIN, TYPE, Code, Lot, ShouldBe,
         ROW_NUMBER() over (Partition by Type, Code, Lot order by RIN) RowNum
  from @t1
)
update c
set c.ShouldBe = case when c.RowNum = 1 then 0
                      when c.RowNum = 2 then p.RIN
                      else p.ShouldBe end
from cte as c
left outer join cte as p on c.RowNum = p.RowNum + 1 and c.Lot = p.Lot

select *, ROW_NUMBER() over (Partition by Type, Code, Lot order by RIN) RowNum
from @t1

This is the output I am looking for:

RIN     Type  Code  Lot  ShouldBe  RowNum
130363  2     145   33   0         1
132757  2     145   33   130363    2
165041  2     145   33   130363    3
160574  2     145   34   0         1
160575  2     145   34   160574    2
132763  2     145   35   0         1
137219  2     145   35   132763    2
140963  2     145   35   132763    3

Any thoughts?
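
Assuming "first row of the duplicates" simply means the lowest RIN per (Type, Code, Lot) group, an updatable CTE with a windowed MIN avoids the self-join chain entirely; a sketch:

;with cte as
(
  select RIN, ShouldBe,
         min(RIN) over (partition by Type, Code, Lot) as FirstRIN
  from @t1
)
update cte
set ShouldBe = case when RIN = FirstRIN then 0   -- the anchor row points nowhere
                    else FirstRIN end;           -- duplicates point at the anchor

Updating through the CTE is legal here because only the base column ShouldBe is assigned; the windowed FirstRIN is just read.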

Adding Zero

Posted: 09 Oct 2013 02:07 AM PDT

Hi,

I have the query below set up. What I am trying to do is produce a line with a value of zero when there is no count for a unit in a given month, like below. Is this possible? There are 4 units, so for each month I want 4 lines, but currently, if a unit has no actuals in a month, I only get 3 lines.

SELECT count([Unit]) as Actual, unit, [1st_of_month], last_of_month
FROM [lchs_live].[dbo].[Full_Referrals_Dataset_live]
where RfR1 = '18 month project'
group by unit, [1st_of_month], [last_of_month]

Results:

6 NW 2013-08-01 2013-08-31
4 SE 2013-08-01 2013-08-31
5 SW 2013-08-01 2013-08-31

Required result:

6 NW 2013-08-01 2013-08-31
4 SE 2013-08-01 2013-08-31
5 SW 2013-08-01 2013-08-31
0 NE 2013-08-01 2013-08-31
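
A common fix is to build the full unit-by-month grid with a cross join and LEFT JOIN the filtered actuals onto it; a sketch, with the caveat that a unit absent from the entire table would need a proper reference table of units instead of the DISTINCT subquery:

SELECT ISNULL(a.Actual, 0) AS Actual, u.unit, m.[1st_of_month], m.last_of_month
FROM (SELECT DISTINCT unit
      FROM [lchs_live].[dbo].[Full_Referrals_Dataset_live]) u
CROSS JOIN (SELECT DISTINCT [1st_of_month], last_of_month
            FROM [lchs_live].[dbo].[Full_Referrals_Dataset_live]) m
LEFT JOIN (
    SELECT COUNT([Unit]) AS Actual, unit, [1st_of_month]
    FROM [lchs_live].[dbo].[Full_Referrals_Dataset_live]
    WHERE RfR1 = '18 month project'
    GROUP BY unit, [1st_of_month]
) a ON a.unit = u.unit AND a.[1st_of_month] = m.[1st_of_month]
ORDER BY m.[1st_of_month], u.unit;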

sys.sp_cdc_enable_table does not consider additions or removal of columsn to the table

Posted: 09 Oct 2013 05:51 AM PDT

I am building CDC on my tables. However, I noticed that if I add a new column to the table being audited, the audit table does not get this column. The only way I can see is to disable CDC on the table and re-enable it, which means the data sitting in the audit table is lost. Is there any way this can be done seamlessly in CDC?

--EXEC sys.sp_cdc_enable_db
--EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'AuditTest', @role_name = NULL

/*
Operation 1 = Delete
Operation 2 = Insert
Operation 3 = Old value before the operation
Operation 4 = New value after the operation
*/

CREATE TABLE dbo.AuditTest
(
  RowID INT IDENTITY(1,1),
  VALUE VARCHAR(50),
  UserName VARCHAR(50),
  CreationDate DATETIME DEFAULT GETDATE(),
  ModifiedDate DATETIME
)

EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',       -- sysname
  @source_name = 'AuditTest',   -- sysname
  @capture_instance = NULL,     -- sysname
  @role_name = NULL             -- sysname

INSERT INTO dbo.AuditTest (VALUE, UserName)
VALUES ('QQQ', 'User123')

------- THIS NEW COLUMN ADDED IS NOT AVAILABLE IN "dbo_AuditTest_CT" -------
ALTER TABLE AuditTest add TESTCol varchar(10)

UPDATE AuditTest SET Value='TESTINGzzz' WHERE Value='QQQ'
DELETE from AuditTest

select CASE WHEN [__$operation]=1 THEN 'DELETE'
            WHEN [__$operation]=2 THEN 'INSERT'
            WHEN [__$operation]=3 THEN 'OLD VALUE'
            WHEN [__$operation]=4 THEN 'NEW VALUE'
       END AS OPERATION
      ,*
from cdc.dbo_AuditTest_CT

Also, what are the other ways we can audit transactional changes to the tables?

Cheers
Ram
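
One documented pattern, since a table can carry up to two capture instances at once: after the ALTER, enable a second instance (which will include the new column), drain or archive the old instance's data, then drop it. A sketch:

-- new capture instance picks up TESTCol
EXEC sys.sp_cdc_enable_table
  @source_schema = 'dbo',
  @source_name = 'AuditTest',
  @capture_instance = 'dbo_AuditTest_v2',
  @role_name = NULL;

-- once the old change data has been copied off or consumed:
EXEC sys.sp_cdc_disable_table
  @source_schema = 'dbo',
  @source_name = 'AuditTest',
  @capture_instance = 'dbo_AuditTest';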

Error converting data type

Posted: 09 Oct 2013 03:27 AM PDT

I have a query comparing two section columns, converting them to numeric(38,0) for the comparison. All the sections should be numeric, e.g. '1123' or '1907', but one of the columns has alphanumeric values for the previous years (2012 and 2011), and I am running the query against the current year (2013) only. So I had filtered out the previous years' data with the WHERE clause, and the query was working fine earlier, but today I am getting "error converting data type varchar to numeric", even though I am selecting only the current year, which does not contain alphanumeric sections.

Select * from professortbl A
left join classtable B
  on Cast(A.class_section as numeric(38,0)) = Cast(B.class_Section as Numeric(38,0))
Where term >= '2013'

I am not sure why something would stop working suddenly if it has worked earlier. Please help.

Thanks,
Blyzzard
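
A likely explanation: the optimizer is free to evaluate the CAST before the term filter, so a plan change can surface one bad historical row in a query that used to work. Guarding the conversion itself is plan-proof; a sketch (SQL 2008-safe, since TRY_CONVERT only arrives in 2012):

Select *
from professortbl A
left join classtable B
  on Cast(case when A.class_section not like '%[^0-9]%' and A.class_section <> ''
               then A.class_section end as numeric(38,0))      -- non-numeric rows become NULL
   = Cast(case when B.class_Section not like '%[^0-9]%' and B.class_Section <> ''
               then B.class_Section end as numeric(38,0))
Where term >= '2013'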

how to split the comma delimited string into rows without using LOOP, CTE, XML.

Posted: 08 Oct 2013 10:58 PM PDT

DECLARE @PARAMETER NVARCHAR(MAX)
SELECT @PARAMETER = 'MONDAY,SUN,FUN,D'
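
A sketch using a numbers (tally) source via a derived table, so there is no loop, no CTE and no XML; master.dbo.spt_values is used here only as a convenient row source (~2,500 rows, so fine for short strings; cross join it for longer input):

SELECT SUBSTRING(@PARAMETER, n.Number,
                 CHARINDEX(',', @PARAMETER + ',', n.Number) - n.Number) AS Item
FROM (SELECT TOP (CONVERT(INT, LEN(@PARAMETER)))
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Number
      FROM master.dbo.spt_values) n
-- a row starts right after each delimiter in ','+string
WHERE SUBSTRING(',' + @PARAMETER, n.Number, 1) = ',';

For 'MONDAY,SUN,FUN,D' this returns the four rows MONDAY, SUN, FUN and D.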

Version

Posted: 09 Oct 2013 01:48 AM PDT

Hi, I'm just trying to verify whether I have a CTP version of SQL 2008 R2. @@VERSION gives me:

Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64) Jun 28 2012 08:36:30 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

SERVERPROPERTY('ProductLevel') gives me SP2 only. I suspect I'm not looking somewhere really obvious!!

Finding Server Name

Posted: 09 Oct 2013 01:20 AM PDT

We have server name TESTXXAB and database name TEST_USER. This DB has many tables. Now we have to write a query which will search the data in the tables for anything containing the server name (TESTXXAB). :w00t: Can this be done??

Data Collector - collection_set_4_noncached_collect_and_upload job failing

Posted: 20 Jun 2011 06:16 PM PDT

Morning All

I have not seen this before with any of the Data Collector instances I have set up, but could anyone shed some light? The 4th job in Data Collector (collection_set_4_noncached_collect_and_upload) is failing with the following error message:

OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80004005 Description: "Invalid object name 'tempdb.dbo.sysutility_batch_time_internal'.". . SSIS error. Component name: GenerateTSQLPackageTask, Code: -1071636406, Subcomponent: OLE DB Source [1], Description: Unable to retrieve column information from the data source. Make sure your target table in the database is available. . SSIS error. Component name: GenerateTSQLPackageTask, Code: -1071636406, Subcomponent: Generate T-SQL Package Task, Description: Unable to retrieve column information from the data source. Make sure your target table in the database is available. . The master package exited with error, previous error messages should explain the cause. Process Exit Code 5. The step failed.

I understand the error; the object isn't there, so the task fails. The job step is a cmdexec step with the following line:

dcexec -c -s 4 -i "$(ESCAPE_DQUOTE(MACH))\$(ESCAPE_DQUOTE(INST))" -m 1

Can anyone shed some light on this step so I can fix the issue? This seems to have run three times, as under SSIS -> Stored Packages -> MSDB -> Data Collector -> Generated there are 3 sets of collect/upload packages, but I may be reading the error incorrectly. Any assistance would be great.

Thanks
Ant

Impact on production server

Posted: 08 Oct 2013 09:13 PM PDT

Hi Team,

I have heard that running SQL Profiler on production environments degrades server performance and adds extra load on the CPU. What is the impact on the production server if I run Profiler remotely? Please advise.

Transaction COMMIT

Posted: 08 Oct 2013 09:09 PM PDT

Hi, I have this query:

BEGIN TRANSACTION
INSERT CUSTOMER(NAME, CITY, STATE) VALUES('John C', 'Chicago', 'IL')
COMMIT TRANSACTION

BEGIN TRANSACTION
INSERT CUSTOMER(NAME, CITY, STATE) VALUES('Bubba C', 'Austin', 'TX')
ROLLBACK TRANSACTION

SELECT * FROM CUSTOMER

Now, when I execute the first query, it adds John to my table as it's supposed to, and when I execute the second query it doesn't add Bubba, as it's supposed to, but it also deletes John, who was added in the first query. Why is that?
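
One likely cause is an already-open outer transaction (nested BEGIN TRAN, or IMPLICIT_TRANSACTIONS): COMMIT inside a nested transaction only decrements the counter, so the later ROLLBACK unwinds everything back to the outermost BEGIN, taking John with it. A minimal diagnostic sketch to run just before the first batch:

SELECT @@TRANCOUNT AS open_transactions;   -- expected 0; >0 means you are already inside a transaction
SELECT CASE WHEN @@OPTIONS & 2 = 2
            THEN 'ON' ELSE 'OFF' END AS implicit_transactions;   -- bit 2 of @@OPTIONS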

setup bootstrap\update cache

Posted: 08 Oct 2013 07:40 PM PDT

Can you remove the contents of setup bootstrap\update cache? It's taking a few GBs on my system drive.
