Saturday, March 16, 2013

[how to] How to get only month from MySQL date/time?



How to get only month from MySQL date/time?

Posted: 16 Mar 2013 09:04 PM PDT

In my MySQL server, I have rows that have the date of my created content. Example: 2013-03-17 02:53:47

For now, I am using:

$date = $fetch[thedaterownamegoeshere];

But instead of getting the full content of the date (2013-03-17 02:53:47), I'd like to get only the month – in this example, I wish to have the number "03" as the result. How can I do that?
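
For reference, MySQL can return just the month directly; a minimal sketch with assumed table and column names (the PHP side reads the aliased column as usual):

    SELECT MONTH(thedate) AS month_number          -- returns 3
    FROM mytable;

    SELECT DATE_FORMAT(thedate, '%m') AS month_padded  -- returns '03', zero-padded
    FROM mytable;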

Thanks.

Backup strategy and recovery plan for production environment

Posted: 16 Mar 2013 04:34 PM PDT

I have been tasked with designing a database server installation, including a high-level backup strategy, for the following scenario:

  1. Single Server Production Environment with full recovery and redundancy
  2. Backup Data Files & Physical Database Data files should be on separate file systems
  3. Disk Storage and RAID configurations are important
  4. Backup routines should take account of both Database Backup and Backup Archiving to Tape/Other Media
  5. Disk layouts should reflect the differences between Operating System, DBMS Software, Database Files, Transaction Logs and Backup Sets.
  6. Should be able to recover to any point in the last 7 days

I plan on tackling this along the following lines:

To implement full recovery, I plan on a grandfather-father-son strategy where a weekly full backup is done and the tape is held off site. Daily backups of the transaction logs are done and the tapes are held off site (this is necessary in order to be able to roll forward to a point in time).

To satisfy quick recovery and redundancy, an off-site backup server could be used to perform database mirroring. This would satisfy point 2 above too.

As for the RAID configuration, would RAID level 1 with just mirroring be sufficient instead of RAID 10 (a stripe of mirrors)? If anybody could offer some feedback on my intended course of action, and on how I might tackle points 4 and 5 too, I would be grateful.
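
If the platform is SQL Server (an assumption), a minimal T-SQL sketch of that weekly-full-plus-daily-log routine, with made-up names; point-in-time restore requires the FULL recovery model:

    -- Weekly full backup
    BACKUP DATABASE ProdDB
    TO DISK = N'E:\Backup\ProdDB_full.bak'
    WITH INIT, CHECKSUM;

    -- Daily transaction log backup (enables roll-forward to a point in time)
    BACKUP LOG ProdDB
    TO DISK = N'E:\Backup\ProdDB_log.trn'
    WITH CHECKSUM;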

Thanks in advance

Multilingual database design

Posted: 16 Mar 2013 02:47 PM PDT

I assume that the best approach for many languages (5 and more) is a concept based on:

    CREATE TABLE `language` (
        `language_id` char(2) NOT NULL,
        `collation` varchar(64) NOT NULL,
        PRIMARY KEY (`language_id`)
    );

    CREATE TABLE `product` (
        `product_id` int(11) NOT NULL auto_increment,
        PRIMARY KEY (`product_id`)
    );

    CREATE TABLE `product_translation` (
        `product_id` int(11) NOT NULL,
        `language_id` char(2) NOT NULL,
        `group_id` int(11) NOT NULL,
        `name` varchar(50) NOT NULL,
        `url` varchar(50) NOT NULL,
        `price` decimal(9, 2) NOT NULL,
        `description` text NOT NULL,
        UNIQUE KEY `language_id_2` (`language_id`, `url`),
        KEY `language_id` (`language_id`, `group_id`, `name`),
        FULLTEXT KEY `name` (`name`, `description`),
        FOREIGN KEY (`product_id`) REFERENCES `product` (`product_id`),
        FOREIGN KEY (`language_id`) REFERENCES `language` (`language_id`),
        PRIMARY KEY (`product_id`, `language_id`)
    );

But what if content is filled in by users? Should this structure be used only for static content? A registered user can change his language settings at any time, and the data that he is adding (private messages, articles) can be added in any language; we cannot force him to add only English content. Should all data be stored in one table and column (body)? Am I right?

For example private messages:

    CREATE TABLE `msg_messages` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `thread_id` int(11) NOT NULL,
        `body` text NOT NULL,
        `priority` int(2) NOT NULL DEFAULT '0',
        `sender_id` int(11) NOT NULL,
        `cdate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

How to restore a filegroup from its backup in SQL Server

Posted: 16 Mar 2013 06:11 PM PDT

I need guidance on how to restore a filegroup in a database from a filegroup backup taken from another, identical DB server. Is it possible to restore the filegroup by running RESTORE DATABASE for the filegroup?

I was trying to restore the filegroup from its backup like below:

RESTORE DATABASE FILEGROUP = 'MFG12'
FROM DISK = 'E:\MFG12.bak'
WITH RECOVERY

The backup set holds a backup of a database other than the existing DBNAME database.

When I was trying to restore into the same DB, the restore worked perfectly:

Processed 840976 pages for database 'DB', file 'MFG12_File' on file 1.
RESTORE DATABASE ... FILE= successfully processed 840976 pages in 59.375 seconds (110.654 MB/sec).

When I run a SELECT query against a partition of the filegroup, it throws this error:

One of the partitions of index '' for table 'xyz' (partition ID 72057594656129024)
resides on a filegroup ("MFG12") that cannot be accessed because it is offline,
restoring, or defunct. This may limit the query result

When I check the status of all filegroups, I see that filegroup MFG12 has status = "Restoring". How do I set it back to online mode?

When I try to restore the log file as well, I get the error below:

The roll forward start point is now at log sequence number (LSN) 66787000000001800001.
Additional roll forward past LSN 66787000000008900001 is required to complete the restore sequence.

Please help me bring back the missing filegroup data, even though the filegroup backup was taken from another identical database on another server, or from the same server's DB.
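
For reference, the usual sequence for bringing a restored filegroup online is NORECOVERY on the filegroup followed by log restores that roll it forward to the present, which matches the LSN message above; a sketch with assumed names:

    RESTORE DATABASE MyDb FILEGROUP = 'MFG12'
    FROM DISK = 'E:\MFG12.bak'
    WITH NORECOVERY;

    -- Apply every log backup taken since the filegroup backup,
    -- then a final tail-log restore brings the filegroup online
    RESTORE LOG MyDb FROM DISK = 'E:\MFG12_Log1.trn' WITH NORECOVERY;
    RESTORE LOG MyDb FROM DISK = 'E:\MFG12_Tail.trn' WITH RECOVERY;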

MSSQL Schemas ownership

Posted: 16 Mar 2013 03:08 PM PDT

I'm currently learning MS SQL Server 2008 and have come across schemas, and there is one thing I don't understand about them: they have to have an owner. Why can't they own themselves?

My background is mostly with Java and it sounds like a schema is like a package which does not have an owner.

MySQL user created via command line can't access PHPMyAdmin

Posted: 16 Mar 2013 11:35 AM PDT

I'm trying to create a read-only user with access to one database in PHPMyAdmin. Since I'm a server newbie I followed some instructions line-by-line on setting up my database and PHPMyAdmin. As such, my root account is not accessible via the GUI; only a full access child account. This account cannot create users via the GUI, so I did the following via command line:

mysql -u root -p
mysql> create user 'readonly-user'@'localhost' IDENTIFIED BY 'password';
mysql> grant select on mydatabase.* to 'readonly-user'@'localhost';

If I try to log in via PHPMyAdmin as readonly-user, however, I'm not able to get in.

Is there something I need to do to register this account with PHPMyAdmin?

NoSQL approach: design table "scheme"

Posted: 16 Mar 2013 08:30 AM PDT

We need to store simple log data in a database, but we also need to get and filter data in real time, based on 'user', a simple 'tag' (i.e. redis, memcache, php, mysql, etc.) and timestamp. We need to scale horizontally, with really fast data access on billions of rows.

In a SQL approach, table can be like this:

ID | timestamp | tag | user       | log text
1  | 19543532  | 1   | root       | { text log }
2  | 19543532  | 3   | redis-user | { text log }

where tags 1 and 3 are different and related to another table (i.e. tag_id | tag_name). I think this is a relational approach, and we can create three indexes (timestamp, tag and user) in order to speed up data access.

What is a good practice to reproduce this in a NoSQL database like DynamoDB (AWS), where we can create only HASH or RANGE indexes? Does a SQL database fit better than DynamoDB?

My first attempt is:

First table: ID hash index

ID | log text
1  | { text log }
2  | { text log }

Second table: USER, TIMESTAMP range index

user  | timestamp | id
root  | 123145122 | 1
redis | 123491241 | 2

Third table: TAG index

tag        | id
debug      | 1
production | 2

Thank you in advance!

How to implement security model?

Posted: 16 Mar 2013 10:13 AM PDT

I have installed Oracle 11g on Fedora 14. I connected with SQL*Plus as 'SYS' and created some tables, then created a user named "account" and a role, "payroll", with the following privileges. I assigned the role to the "account" user, but I cannot access the created tables when I log in as "account".

Here are my queries:

As SYS

    create type pay_t as object(
        id int,
        name varchar(10))
    /

    create table pay of pay_t;
    insert into pay values(pay_t(10,'p01'));

    create user account identified by paccount;
    grant create session to account;
    grant connect to account;

    create role pay_roll;
    grant select on pay to pay_roll;
    grant pay_roll to account;

As account:

select * from pay;  

This gives me an error saying there is no such table or view.
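
One thing worth checking, offered as a sketch: the table was created while connected as SYS, so it lives in the SYS schema, and the ACCOUNT user must qualify the name (or be given a synonym):

    -- Connected as ACCOUNT: the granted table is SYS.PAY, so qualify it
    SELECT * FROM sys.pay;

    -- Or, as a privileged user, expose a short name:
    CREATE PUBLIC SYNONYM pay FOR sys.pay;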

Changing passphrase for EncryptByPassPhrase

Posted: 16 Mar 2013 07:44 AM PDT

I am using EncryptByPassPhrase to encrypt some columns.

If I want to change the passphrase in the future, how can I change it?
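
There is no built-in rekey for EncryptByPassPhrase; the usual approach is to decrypt with the old phrase and re-encrypt with the new one. A minimal sketch with assumed table and column names:

    -- Rewrites the ciphertext in place under the new passphrase
    UPDATE dbo.Customers
    SET CardNumber = EncryptByPassPhrase(
            'new passphrase',
            CONVERT(varchar(8000),
                    DecryptByPassPhrase('old passphrase', CardNumber)));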

Cluster Node Failed SQL Server 2012

Posted: 16 Mar 2013 08:18 AM PDT

I would like to install SQL Server 2012 but I get some errors. Do you know what I have to download or do to fix this issue?


MySQL Database Recovery from ibdata1 and .frm Files Fails

Posted: 16 Mar 2013 08:20 AM PDT

My MySQL database server crashed but I managed to recover copies of the ibdata1 and .frm files.

I have followed instructions, including some from this site, that say to do a clean install and overwrite the new ibdata1 and .frm files with the old ones.

I have also tried forcing InnoDB recovery with innodb_force_recovery=6 but no success yet. I am guessing that my ibdata1 file is corrupt. The most recent backup I have is like a month old and no good for my purposes.

Any more ideas on how I could recover my database(s)?

Tape rotation strategies

Posted: 16 Mar 2013 12:24 PM PDT

I am trying to find a tape rotation strategy that would be optimal under the following conditions:

Backup Media Rotation strategy should support the following:

  1. Media Archives required every 28 days retained up to 3 years
  2. Require to recover to any point in time within past 7 days
  3. Can sustain up to 1 week gap in recovery time for next 4 weeks

I believe a good strategy would be to use the Grandfather-father-son strategy. However, I am unsure about point 3 above. Can someone explain what point three entails?

PRIMARY KEY CONSTRAINT fundamentals

Posted: 16 Mar 2013 02:19 PM PDT

Can you explain why the "primary key constraint" exists if, as you know, it is possible to create relationships between tables using unique indexes only?

The RDBMS that is used is SQL Server.

In fact, after creating a unique index (not null), I can create a foreign key constraint on another table pointing to the indexed column without having to declare that this column is "the primary key"... so, what is its purpose? Is it a question of aesthetics?
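
For example, this compiles fine in SQL Server with no PRIMARY KEY anywhere (a sketch):

    CREATE TABLE Parent (
        Id int NOT NULL CONSTRAINT UQ_Parent_Id UNIQUE  -- unique, but not a PK
    );

    CREATE TABLE Child (
        ParentId int NOT NULL
            CONSTRAINT FK_Child_Parent REFERENCES Parent (Id)
    );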

Can't choose which SQL Server 2005 backup date to read in SQL Server 2008

Posted: 16 Mar 2013 05:29 AM PDT

I am trying to restore a database from SQL Server 2005 to SQL Server 2008. In Management Studio 2008, I am pointing to a hard drive copy of the backup file for the database, and am getting only one choice, even though there are multiple backups of different dates in the file. I can only restore the earliest one, which is the only one listed; I need a later one off the backup file. How do I get the complete list to choose from in SQL Server Management Studio 2008 from a backup file created in Management Studio 2005?
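
For reference, a sketch of inspecting and picking a backup set by position within a multi-backup file (file name assumed):

    -- Lists every backup set in the file, with its Position column
    RESTORE HEADERONLY FROM DISK = N'D:\Backups\MyDb.bak';

    -- Restore the set you want by passing its Position as FILE
    RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backups\MyDb.bak'
    WITH FILE = 3, RECOVERY;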

How to best use connection pooling in SQLAlchemy for PgBouncer transaction-level pooling?

Posted: 16 Mar 2013 04:29 AM PDT

Using SQLAlchemy to query a PostgreSQL database behind PgBouncer, using transaction-level pooling.

What is the best pattern to use for this kind of set up? Should I have one-engine-per-process, using a ConnectionPool, or should I create an engine per-request, and use NullPool for each one of them? Is there a different pattern altogether that I should be using?

Thanks very much! Let me know if more information is needed and I'll update ASAP.

how to chain postgres RULEs?

Posted: 16 Mar 2013 12:22 PM PDT

I have implemented a data denormalization strategy using PostgreSQL RULEs. I picked rules instead of triggers for performance reasons.


Schema is structured like this:

  • Application has many clients
  • Client has many projects
  • Project has many users

One part of the system is storing hits for every user in a stats table. A hit is an imaginary metric; it is not really relevant. The system can collect many of these metrics. There are a lot of records in the stats table (> 1,000,000 per day).

I want to know how many hits there are per user, per project, per client and per application for a given day.

To make it work fast, I've grouped stats by day and stored the output in a user_hits table. During this process, the application_id, client_id and project_id have also been added (as columns), and appropriate indexes created.

I want to further optimise the process by grouping things by project_id, client_id and finally application_id. The data pipeline is like this:

stats -> user_hits -> project_hits -> client_hits -> application_hits

I want to make sure that when I delete the data from user_hits for a given day, the data in project_hits for that same date is also deleted. This process should propagate to the last table in the chain.

I defined these simple rules:

CREATE RULE delete_children AS ON DELETE TO user_hits
    DO ALSO
    DELETE FROM project_hits WHERE day = OLD.day;

CREATE RULE delete_children AS ON DELETE TO project_hits
    DO ALSO
    DELETE FROM client_hits WHERE day = OLD.day;

CREATE RULE delete_children AS ON DELETE TO client_hits
    DO ALSO
    DELETE FROM application_hits WHERE day = OLD.day;

However, when I issue statement like this:

DELETE FROM user_hits WHERE day = current_date;  

I expect it to run these 3 queries in return:

DELETE FROM project_hits WHERE day = current_date;
DELETE FROM client_hits WHERE day = current_date;
DELETE FROM application_hits WHERE day = current_date;

However, it doesn't.

It completes the operation, but it takes a couple of minutes to do that (with test data). With real data it takes hours, while running those 3 queries by hand takes a couple of milliseconds. The time it takes seems proportional to the number of combinations (users x projects x clients x applications).

What is the problem here? Am I missing something? Can this be implemented with triggers in an optimised way?
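
One trigger-based sketch (my own, not from the post): a per-row AFTER DELETE trigger still fires once per deleted row, but each child DELETE targets only that row's day, so repeated firings for the same day are cheap no-ops rather than the cross product that rules produce. The same pattern would repeat on project_hits and client_hits:

    CREATE OR REPLACE FUNCTION user_hits_delete_children() RETURNS trigger AS $$
    BEGIN
        -- Deletes only the matching day; later firings for the
        -- same day find nothing left to delete
        DELETE FROM project_hits WHERE day = OLD.day;
        RETURN OLD;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER delete_children
    AFTER DELETE ON user_hits
    FOR EACH ROW EXECUTE PROCEDURE user_hits_delete_children();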


Included sample script which reproduces the problem:

https://gist.github.com/assembler/5151102


UPDATE: The transition from user_hits to project_hits (and so forth) is done by a worker process in the background (since it involves contacting 3rd-party services for additional info). It is smart enough to recalculate everything for missing dates. So the only thing I need is a way to DELETE records in a cascading fashion in an optimised way.


UPDATE: The stats table is filled on a daily basis. The only possible scenario is to unconditionally delete the data for the whole day and then replace it with new values.


UPDATE: I noticed that the number of affected rows (extracted from the explain statement) is exactly equal to the product of the affected rows in the user_hits, project_hits, client_hits, and application_hits tables (hundreds of millions of rows).

It turns out that it works like this:

  1. I run DELETE FROM user_hits WHERE day = current_date;
  2. For each row in user_hits table, RULE is triggered which deletes EVERY row from project_hits
  3. For each row of project_hits, RULE is triggered which deletes EVERY row from client_hits
  4. For each row of client_hits, RULE is triggered which deletes EVERY row from application_hits

So, the number of operations is equal to the product of count of affected rows in these tables.

How to model inheritance of two tables mysql

Posted: 16 Mar 2013 08:50 AM PDT

I have some tables where I store data, and depending on the type of person (worker or civil) that did a job, I want to store it in an 'event' table. Now these guys rescue an animal (there is an 'animal' table).

Finally, I want a table to store the event that a guy (WORKER or CIVIL) saved an animal, but how should I add a foreign key, or how do I know the id of the civil or worker that did the job?

Now in this design I do not know how to relate which person did the job. If I had only one kind of person (say, civil) I would only store the civil_id in the person field of this last table... but how do I know if it was a civil or a worker? Do I need another intermediate table?

How should I model this design in MySQL?

could you help me with the sqlfiddle?

UPDATE

I have model it the following way:

    DROP TABLE IF EXISTS `tbl_animal`;
    CREATE TABLE `tbl_animal` (
        id_animal INTEGER     NOT NULL PRIMARY KEY AUTO_INCREMENT,
        name      VARCHAR(25) NOT NULL DEFAULT "no name",
        specie    VARCHAR(10) NOT NULL DEFAULT "Other",
        sex       CHAR(1)     NOT NULL DEFAULT "M",
        size      VARCHAR(10) NOT NULL DEFAULT "Mini",
        edad      VARCHAR(10) NOT NULL DEFAULT "Lact",
        pelo      VARCHAR(5)  NOT NULL DEFAULT "short",
        color     VARCHAR(25) NOT NULL DEFAULT "not defined",
        ra        VARCHAR(25) NOT NULL DEFAULT "not defined",
        CONSTRAINT `uc_Info_Animal` UNIQUE (`id_animal`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_animal` VALUES (1,'no name', 'dog', 'M','Mini','Lact','Long','black','Bobtail');
    INSERT INTO `tbl_animal` VALUES (2,'peluchin', 'cat', 'M','Mini','Lact','Long','white','not defined');
    INSERT INTO `tbl_animal` VALUES (3,'asechin', 'cat', 'M','Mini','Lact','Corto','orange','not defined');

    DROP TABLE IF EXISTS `tbl_person`;
    CREATE TABLE `tbl_person` (
        type_person VARCHAR(50) NOT NULL PRIMARY KEY
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    INSERT INTO `tbl_person` (type_person) VALUES ('Worker');
    INSERT INTO `tbl_person` (type_person) VALUES ('Civil');

    DROP TABLE IF EXISTS `tbl_worker`;
    CREATE TABLE `tbl_worker` (
        id_worker      INTEGER     NOT NULL PRIMARY KEY,
        type_person    VARCHAR(50) NOT NULL,
        name_worker    VARCHAR(50) NOT NULL,
        address_worker VARCHAR(40) NOT NULL DEFAULT "not defined",
        delegation     VARCHAR(40) NOT NULL DEFAULT "not defined",
        FOREIGN KEY (type_person) REFERENCES `tbl_person` (type_person),
        CONSTRAINT `uc_Info_worker` UNIQUE (`id_worker`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_worker` VALUES (1,'Worker','N_CEDENTE1', 'DIR Worker 1', 'DEL');
    INSERT INTO `tbl_worker` VALUES (2,'Worker','N_worker1', 'DIR Worker 2', 'DEL');
    INSERT INTO `tbl_worker` VALUES (3,'Worker','N_worker2', 'address worker','delegation worker');

    DROP TABLE IF EXISTS `tbl_civil`;
    CREATE TABLE `tbl_civil` (
        id_civil         INTEGER     NOT NULL PRIMARY KEY,
        type_person      VARCHAR(50) NOT NULL,
        name_civil       VARCHAR(50),
        procedence_civil VARCHAR(40) NOT NULL DEFAULT "Socorrism",
        FOREIGN KEY (type_person) REFERENCES `tbl_person` (type_person),
        CONSTRAINT `uc_Info_civil` UNIQUE (`id_civil`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_civil` VALUES (1,'Civil','N_civil1', 'Socorrism');

    CREATE TABLE `tbl_event` (
        id_event       INTEGER     NOT NULL,
        id_animal      INTEGER     NOT NULL,
        type_person    VARCHAR(50) NOT NULL,
        date_reception DATE DEFAULT '2000-01-01 01:01:01',
        FOREIGN KEY (id_animal)   REFERENCES `tbl_animal` (id_animal),
        FOREIGN KEY (type_person) REFERENCES `tbl_person` (type_person),
        CONSTRAINT `uc_Info_ficha_primer_ingreso` UNIQUE (`id_animal`,`id_event`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_event` VALUES (1,1, 'Worker','2013-01-01 01:01:01');
    INSERT INTO `tbl_event` VALUES (2,2, 'Civil','2013-01-01 01:01:01');

However, is there a way to get rid of the nulls? The queries I have are:

    SELECT a.*, b.*, z.*
    FROM tbl_event a
    LEFT JOIN tbl_worker b ON a.type_person = b.type_person
    LEFT JOIN tbl_animal z ON z.id_animal = a.id_animal;

    SELECT a.*, b.*, z.*
    FROM tbl_event a
    LEFT JOIN tbl_civil b ON a.type_person = b.type_person
    LEFT JOIN tbl_animal z ON z.id_animal = a.id_animal;

Here is updated sqlfiddle

Naming foreign key columns

Posted: 16 Mar 2013 06:41 PM PDT

Now, I hear that the best practice for naming foreign key columns is the name of the table and field (and maybe the function, if there are multiple foreign keys to the same table). For example, a foreign key that references the users table id column would be:

creatorUserId  

Now the thing is that right now I am building my application, and for the sake of development speed I am going to be referencing everything by a surrogate key (INT AUTO_INCREMENT). I really don't want to get into which is faster than what in which case this early in development; I'd rather leave that stuff to the end (to try to prevent premature optimization, and I am more interested in working on the front-end of this application than the backend).

With that being said, a concern I have is that if I do in fact want to change some of the foreign keys from the surrogate key to a natural key (all relevant tables will have both) and I use column names like

creatorUserId   

I am going to have to go through all the backend code to make the change, a process that would not be very pleasant.

I was thinking about doing something that just uses the function and table name for foreign key columns, like:

creatorUser  

In that case, if I change which column I am using for the foreign key, the backend code changes are a lot simpler and there would be a lot fewer of them (if you are using an ORM, which I am). Also, since surrogate keys are generally numbers and natural keys are generally alphabetic, it should still be relatively easy to know which one is being used just by looking at the value.

Would this be considered bad practice?

SELECTing multiple columns through a subquery

Posted: 16 Mar 2013 05:41 AM PDT

I am trying to SELECT 2 columns from the subquery in the following query, but I am unable to do so. I tried creating an alias table, but still couldn't get them.

SELECT DISTINCT petid, userid,
    (SELECT MAX(comDate) FROM comments WHERE petid = pet.id) AS lastComDate,
    (SELECT userid FROM comments WHERE petid = pet.id ORDER BY id DESC LIMIT 1) AS lastPosterID
FROM pet
LEFT JOIN comments ON pet.id = comments.petid
WHERE userid = 'ABC'
  AND deviceID != 'ABC'
  AND comDate >= DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 2 MONTH);

Basically, I am trying to get lastComDate & lastPosterID from the same row: the row which is the latest one in comments for the specific pet. Please suggest how I can get them in an efficient way.

The above query works, but it seems overkill, as the same row is fetched twice. Moreover, the ORDER BY clause is significantly slower than the aggregate function, as I found while profiling the query. So, a solution avoiding sorting would be appreciated.
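
One common alternative, sketched under the assumption that comments.id increases with comDate: join once to the newest comment per pet through a derived table, so both columns come from the same row and no per-row sort is needed:

    SELECT p.id AS petid, c.userid AS lastPosterID, c.comDate AS lastComDate
    FROM pet p
    JOIN (
        SELECT petid, MAX(id) AS max_id
        FROM comments
        GROUP BY petid
    ) last ON last.petid = p.id
    JOIN comments c ON c.id = last.max_id;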

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 16 Mar 2013 11:41 AM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecast is usually done at a higher level in the hierarchy (raw material or work in progress level). It has to be disaggregated to a lower level (end product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id | name             | parent_id
---+------------------+----------
1  | raw material     | (null)
2  | work in progress | 1
3  | end product 1    | 2
4  | end product 2    | 2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key that references the id of the PRODUCT_HIERARCHY table.

How should I store the forecast? What would be a good basic schema for such a requirement?


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(SYSDATE, 6),
        TRUNC((ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7)
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP BY
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(SYSDATE, 6),
        TRUNC((ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7)
    );

But this will give weekly buckets starting from today, irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle?
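
One possible approach (a sketch): normalize each date to the Sunday of its week. TRUNC(date, 'IW') truncates to the ISO Monday, so shifting by a day gives Sunday-based weeks:

    SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start,  -- Sunday
           TRUNC(delivery_date + 1, 'IW') + 5 AS week_end,    -- Saturday
           COUNT(*) AS total_orders
    FROM orders_table
    WHERE delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
    GROUP BY TRUNC(delivery_date + 1, 'IW')
    ORDER BY week_start;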

Please help designing this database structure.

(will be using Oracle 11g)

DB2 Authentication TRUST_CLNTAUTH

Posted: 16 Mar 2013 03:41 PM PDT

I am using db2inst1 to connect to a database in DB2 which I have installed on my machine, so the db2inst1 user does not require username/password authentication (it borrows them from the OS). I would like to change that, and force a username/password to be requested every time a connection is initiated.

More specifically, this is what the authentication configuration looks like:

db2 get dbm cfg|grep -i auth

 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
 Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication        (AUTHENTICATION) = CLIENT
 Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
 Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
 Trusted client authentication          (TRUST_CLNTAUTH) = SERVER
 Bypass federated authentication            (FED_NOAUTH) = NO

db2 connect to dbName

   Database Connection Information

   Database server        = DB2/LINUXX8664 10.1.0
   SQL authorization ID   = DB2INST1
   Local database alias   = DBNAME

db2 connect to dbName user db2inst1 using password

   SQL1639N  The database server was unable to perform authentication because
   security-related database manager files on the server do not have the
   required operating system permissions.  SQLSTATE=08001

I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.
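
One avenue worth trying, sketched here and worth verifying against the DB2 documentation for your version: switch AUTHENTICATION from CLIENT to SERVER so the server itself demands credentials, then restart the instance:

db2 update dbm cfg using AUTHENTICATION SERVER
db2stop force
db2start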

SQL Server BPA 2008R2

Posted: 16 Mar 2013 09:41 AM PDT

I have been using SQL Server BPA to get good information from Microsoft. I used its 2005 version, from which I used to export the results in CSV format, but recently I got two new servers with SQL Server 2008 R2 installed. I know I can't run BPA 2005 on these, so I chose the R2 version, but it doesn't have an option to save the report in CSV format, only XML. I have tried Excel to convert it to CSV, but no use; even though it displays the details, I can't narrow down the results. Any idea how to convert the results to CSV format?

SQL Server 2008 R2 Express password overwrite

Posted: 16 Mar 2013 07:41 AM PDT

I'm using a program that requires an SA login to SQL Server. We had issues where some 3rd-party support numpty changed the SA password and never informed us, so the program couldn't log in.

I've accessed our information via the SQL Server Management Studio. Fortunately, the 'remember password' option had been checked on the machine.

Via the object explorer, I've accessed the SA login credentials and changed the password accordingly. However, every time I re-load SQL Server Management Studio, the password resets to the original one.

When I test the connection of the program that uses the SA login by manually entering the new password, it appears to work OK. But when I let the program start up automatically, Windows services reports that the login is failing due to incorrect login details, which I'm guessing is because SQL Server keeps reverting back to the old password?

Can anyone advise how I stop it from reverting to the old password and overwrite it with the new password definitively (SQL Server 2008 R2 Express)? I'm hoping that should help fix my more specific problem with the other program.

Setting up DRBD on an active MySQL server

Posted: 16 Mar 2013 06:42 PM PDT

When it comes to setting up DRBD and MySQL, is the following possible?

  • Set up DRBD on an active MySQL server
  • Set up DRBD with no downtime allowed

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 16 Mar 2013 01:41 PM PDT

I have a transactions table with 600,000 records, and I need to list the count for the dashboard on a financial-year basis. The table uses MyISAM. I tried adding an index on the transaction date (tran_date). Even though it uses the index, it creates a temporary table, which takes more time because of the temporary table and the filesort. Is there any way to optimize the query to improve the query time?

  SELECT COUNT( * ) AS cnt,
      CASE WHEN MONTH( tran_date ) >= 3
           THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
           ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
      END AS financial_year
  FROM `transactions1`
  WHERE tran_date >= '2010-06-01'
  GROUP BY financial_year

  Showing rows 0 - 4 (5 total, Query took 1.2095 sec)
  id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
  1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort
  Keyname       Type   Unique  Packed  Field          Cardinality  Collation
  PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                       tran_ID        617162       A
  coupon_No     BTREE  No      No      coupon_No      617162       A
  account_type  BTREE  No      No      account_type   3            A
  prodCode      BTREE  No      No      prodCode       430          A
                                       tran_date      308581       A
  tran_date     BTREE  No      No      tran_date      205720       A
  cust_ID       BTREE  No      No      cust_ID        3265         A
                                       tran_date      308581       A
                                       account_type   308581       A
                                       points_earned  617162       A

Update :

Tried adding partitioning, which is not much more helpful in comparison with the non-partitioned version. Does replication help in this case for reading this table? There will be more grouping based on the dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is:

  SELECT SUM( count )
  FROM (
      SELECT COUNT( * ) AS count,
          CASE WHEN MONTH( tran_date ) >= 3
               THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
               ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
          END AS format_date
      FROM transactions1
      GROUP BY tran_date
  ) AS s
  GROUP BY format_date

  Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
  id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
  1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
  2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

  SELECT COUNT( * ) AS count,
      CASE WHEN MONTH( tran_date ) >= 3
           THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
           ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
      END AS format_date
  FROM transactions1
  GROUP BY tran_date

  Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

gives less time without using the SUM(count) in the derived table. Is there any other way to get the sum without using the subquery in MySQL, or can the subquery be optimized to use the index?

MySQL join for multiple child records in a single joined row

Posted: 16 Mar 2013 02:41 PM PDT

I have one master table (teacher) structured like

teacherId   Name        Class
1           Praveen     10
2           John        9

and having a child table (student) structured like

studentId   teacherId   Name
1           1           David
2           1           Creg
3           2           Mike
4           2           Steve
5           2           Jim

How I can get a result set like

teacherId  teacher  studentId_1  student_1  studentId_2  student_2  studentId_3  student_3
1          Praveen  1            David      2            Creg       null         null
2          John     3            Mike       4            Steve      5            Jim
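
If a delimited list per teacher is acceptable instead of a fixed set of columns, MySQL's GROUP_CONCAT gives a simpler shape (a sketch):

    SELECT t.teacherId,
           t.Name AS teacher,
           GROUP_CONCAT(s.Name ORDER BY s.studentId) AS students
    FROM teacher t
    LEFT JOIN student s ON s.teacherId = t.teacherId
    GROUP BY t.teacherId, t.Name;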

Thanks in advance..

Regards, Praveen

What's the difference between a temp table and table variable in SQL Server?

Posted: 16 Mar 2013 01:43 PM PDT

This seems to be an area with quite a few myths and conflicting views.

So what is the difference between a table variable and a local temporary table in SQL Server?

How can I change the default storage engine in phpmyadmin?

Posted: 16 Mar 2013 01:33 PM PDT

I use InnoDB almost exclusively in my applications. However, if I'm not careful when setting up the table, I forget to change it and phpmyadmin sticks me with MyISAM. Is there a way to change the default storage engine?
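
For reference, the default engine is a server setting rather than a phpMyAdmin one; a sketch, assuming MySQL 5.5+ where the variable is named default_storage_engine (older versions call it storage_engine). To persist it across restarts, set default-storage-engine = InnoDB under [mysqld] in my.cnf:

    -- Runtime change; lost on restart unless also set in my.cnf
    SET GLOBAL default_storage_engine = 'InnoDB';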

[MS SQL Server] Suggestions on TEMPDB Setup



Suggestions on TEMPDB Setup

Posted: 15 Mar 2013 05:46 AM PDT

I have a teradata environment with a database around 3 TB. This server has around 40 core processors. Microsoft suggested using only 32 processors instead of 40, as there was a bug. So, in order to build the temp database I am starting with 8 tempdb data files and 1 log file. I have two solid-state drives, each of size 290 GB, so the total of the two drives is 580 GB. I am dividing my files in the following way, and I need your suggestions on dividing the files. My environment uses tempdb a lot. Please comment on my ideas.

T1: 70 GB
T2: 70 GB
T3: 70 GB
T4: 70 GB
T5: 70 GB
T6: 70 GB
T7: 70 GB
T8: 70 GB
Tlog: 10 GB

By dividing the files this way, will I run across any problems?

access named instance from SSMS

Posted: 15 Mar 2013 05:46 AM PDT

To access a named instance from SSMS, is it SERVER\INSTANCE_NAME or SERVER$INSTANCE_NAME? I have tried both; for SERVER\INSTANCE_NAME I get an immediate "Login Failed" message, so I tend to think that is it and I just don't have access? But I need a way to confirm. I cannot open SQL Server Configuration Manager; I get a "Cannot connect to WMI provider, you don't have permission or the service is unreachable". I am trying to access a Reporting Services database that had a named instance of SQL Server installed for it, but whoever did this didn't add permissions for our sqlAdmins group... hmmm. Any advice on this? Thank you.

Log File Shrink

Posted: 15 Mar 2013 05:42 AM PDT

Now that we have the log file backups under control, it's time to shrink the log file. I'm going to start with the Test DB, which is only 56 gig; when they (consultants) created the log file initially, they set the log file size to 59 gig. See below the results of DBCC SQLPERF(logspace). It looks as though I can really shrink this sucker. What do you think would be an appropriate size, remembering this is a test DB with not a lot of transactions going through? [img]http://i1359.photobucket.com/albums/q791/midnight251/LogFileSize_zps62bd6b1a.jpg[/img]

[SQL 2012] Instance won't start after changing Processor Affinity



Instance won't start after changing Processor Affinity

Posted: 15 Mar 2013 04:55 AM PDT

I have a server which I'm trying to set up identically to our running production server, and I'm running into issues. The hardware is identical, as follows:

2x 8-Core Opteron 6212
32GB RAM
670GB of disk space, split 60 C: / 610 E:
OS is Windows Server 2008 R2 Enterprise
SQL Server is 2012 SP1 with rollup (11.0.3128)

The server has four instances installed, across which we're trying to split resources by setting Max RAM and Process/IO Affinity, like this:

\NCR - 8GB, Process on Cores 1,2,3, I/O on Core 4
\GTA - 8GB, Process on Cores 5,6,7, I/O on Core 8
\ENT - 8GB, Process on Cores 9,10,11, I/O on Core 12
\DOCINDEX - 4GB, Process on Core 13, I/O on Core 14

Leaving 4GB, and Cores 0 and 15, free for the OS. The production server is set up like this and running fine. The development server is fine right up until I change the Process and IO affinity. As soon as I do that and restart the SQL Server services or the whole box, GTA and DOCINDEX's services will start, then immediately stop. Since I can't get them started, I end up uninstalling and reinstalling the instance. Then they run fine with no process or IO affinity set. I can change the Max RAM, but if I change the Process or IO affinity again they do the same thing. I've tried stopping all the SQL services and just turning on DOCINDEX or just turning on GTA, thinking that it might be a problem with resource availability, but they still won't start. I've even gone as far as reformatting the entire box again from scratch, making sure I follow the procedure I wrote up when I built the production box, and run into the same issue.

The system event logs have no helpful information at all. The only events related to the services in question are a bunch of "This is informational" events describing the startup, then one saying "The service has entered the stopped state". The ERRORLOG has the same lack of information. I can post both if needed. Help!

Jon

[T-SQL] Backup large database to multiple files




Backup large database to multiple files

Posted: 15 Mar 2013 06:13 AM PDT

Hi everyone, I manage 100+ SQL servers, and some of them contain 120 – 800 GB+ databases. I googled for a backup script that backs up a database depending on its size and stripes the .bak file to multiple files. So far, no luck. The closest I got to is this script. Is there a way I can back up smaller databases to one .bak file and 60+ GB databases to multiple (striped) .bak files dynamically in the same backup script? That way I can standardize it across all the servers. Thanks in advance for your help.

DECLARE @Baksql VARCHAR(8000)
DECLARE @BackupFolder VARCHAR(100)
DECLARE @BackupFile VARCHAR(100)
DECLARE @BAK_PATH VARCHAR(4000)
DECLARE @BackupDate varchar(100)
DECLARE @Size varchar(100)

-- Setting value of backup date and folder of the backup
SET @BackupDate = REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR,GETDATE(),120),'-',''),':',''),' ','_') -- 20110517_182551
SET @BackupFolder = 'C:\temp\'
SET @Baksql = ''

-- Declaring cursor
DECLARE c_bakup CURSOR FAST_FORWARD READ_ONLY FOR
SELECT NAME FROM SYS.DATABASES
WHERE state_desc = 'ONLINE' -- Consider databases which are online
AND database_id > 4 -- Excluding system databases

-- Opening and fetching next values from cursor
OPEN c_bakup
FETCH NEXT FROM c_bakup INTO @BackupFile

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @BAK_PATH = @BackupFolder + @BackupFile
    -- Creating dynamic script for every database's backup
    SET @Baksql = 'BACKUP DATABASE ['+@BackupFile+'] TO DISK = '''+@BAK_PATH+'_FullBackup_'+@BackupDate+'.bak'' WITH INIT;'
    -- Executing dynamic query
    PRINT (@Baksql)
    EXEC(@Baksql)
    -- Fetching next value from cursor
    FETCH NEXT FROM c_bakup INTO @BackupFile
END

-- Closing and deallocating cursor
CLOSE c_bakup
DEALLOCATE c_bakup
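
For the striping half, the T-SQL is just multiple DISK clauses; a sketch of what the cursor above could emit when a database is large (the size threshold could come from sys.master_files, names assumed):

    BACKUP DATABASE [BigDb]
    TO DISK = N'C:\temp\BigDb_FullBackup_1.bak',
       DISK = N'C:\temp\BigDb_FullBackup_2.bak',
       DISK = N'C:\temp\BigDb_FullBackup_3.bak',
       DISK = N'C:\temp\BigDb_FullBackup_4.bak'
    WITH INIT;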

Custom sp_who

Posted: 15 Mar 2013 08:09 AM PDT

Hello All, couldn't find anything that addresses this specifically, so here goes. I have a group that I want to allow to view processes on their dev server, but I don't want to grant VIEW SERVER STATE permissions and all that it exposes, so I'm trying to write a proc that will execute sp_who as me (the DBA) for users, but it still only returns one row instead of all of them; what am I missing? MS's docs say this should work.

USE [master]
GO
/****** Object: StoredProcedure [dbo].[usp_who] Script Date: 03/14/2013 14:57:25 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================
-- Author:
-- Created: 03/14/13
-- Desc: sp_who for users
-- =============================
ALTER PROCEDURE [usp_who]
WITH EXECUTE AS SELF
AS
BEGIN
    SET NOCOUNT ON;
    EXEC sp_who;
END

Column update based on expiration criteria

Posted: 15 Mar 2013 06:52 AM PDT

I am trying to get a homegrown database to automatically update an 'IsActive' column. I have a merge statement for the update, but I have no idea how to get this statement to run when the record is supposed to go from active to inactive and vice versa. My merge statement uses a view like this.

[code="sql"]
SELECT T1.SerialNumber,
    CASE WHEN (DATEADD(MONTH, T2.Schedule, T2.Date1) >= CAST(CURRENT_TIMESTAMP AS DATE)) THEN 1
         WHEN (T2.Extension = 1 AND T2.DateExtended >= CAST(CURRENT_TIMESTAMP AS DATE)) THEN 1
         WHEN (DATEADD(MONTH, T2.Schedule, T2.Date1) < CAST(CURRENT_TIMESTAMP AS DATE)
               AND (T2.DateExtended IS NULL OR T2.DateExtended < CAST(CURRENT_TIMESTAMP AS DATE))) THEN 0
         ELSE CAST(NULL AS TINYINT)
    END AS DetermineIsActive,
    CAST(CURRENT_TIMESTAMP AS DATE) AS CurrentTimestamp
FROM dbo.Table1 AS T1
INNER JOIN dbo.Table2 AS T2 ON T1.SerialNumber = T2.SerialNumber
[/code]

My merge statement is

[code="sql"]
MERGE dbo.Table1 T1
USING (SELECT SerialNumber, DetermineIsActive, CurrentTimestamp FROM vIsActiveUpdate) AS vT2
    ON T1.SerialNumber = vT2.SerialNumber
WHEN MATCHED THEN
    UPDATE SET T1.IsActive = vT2.DetermineIsActive;
[/code]

This does the update I want, but I have no idea how to get it to execute when, for example, T2.DateExtended < CURRENT_TIMESTAMP. I looked at triggers, but I don't think I want to do that. Everyone says not to use triggers, plus I can't figure out how the trigger would know when to execute. The criteria isn't really updating, it is expiring. Hopefully I was able to explain this clearly enough. Thanks

SQL Query Help using XML

Posted: 15 Mar 2013 03:18 AM PDT

Looking for SQL query help on the following. I have a key-value pair, as below.

declare @t table (id int, [key] varchar(10), value varchar(10))
insert @t values (1, 'ColA', 'ABC123')
insert @t values (1, 'colB', 'DEF456')
insert @t values (2, 'colA', '2ColABC')
insert @t values (2, 'colC', '2ColDef')
insert @t values (2, 'colE', '2Colxyz')

Need to bundle the key-value combination into XML for each ID row. The expected results are 1, 'ABC123DEF456' and 2, '2ColABC2ColDef2Colxyz' (the XML markup did not survive posting). Tried a few combinations using FOR XML, but could not get the desired result at each row level.
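
One possible shape, sketched as a guess since the expected markup was lost above: a correlated FOR XML PATH subquery per id:

    SELECT t1.id,
           (SELECT t2.[key] AS "@name", t2.value AS "text()"
            FROM @t AS t2
            WHERE t2.id = t1.id
            FOR XML PATH('col')) AS xml_bundle
    FROM @t AS t1
    GROUP BY t1.id;
    -- yields e.g. <col name="ColA">ABC123</col><col name="colB">DEF456</col> for id 1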

[SQL Server 2008 issues] Read Committed_Snapshot



Read Committed_Snapshot

Posted: 15 Mar 2013 06:14 PM PDT

"Read Committed_Snapshot" isolation level in sql has any negatives or disadvantages?

Collation settings

Posted: 15 Mar 2013 05:53 PM PDT

Hi all, today I found that my reporting database and reporting tempdb are in a different collation from the normal databases. Can databases in an instance be in different collations?
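
For reference, databases in one instance can indeed have different collations; a quick way to see them all alongside the server default:

    SELECT name, collation_name FROM sys.databases;
    SELECT SERVERPROPERTY('Collation') AS server_default_collation;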

Charindex,substring.....

Posted: 15 Mar 2013 10:21 AM PDT

I have a very long string column with multiple underscores in between. My requirement is to select only those rows where the string between the second and third underscores matches a certain criterion. For instance, let's say the compare value is "RIC" and I have a row value DMV1004/343M.TR3432_PLC089_RIC_9843CL. Here I need to check if the string between the 2nd and 3rd underscores is RIC, which in this case is true. So I would need to compare and, if true, insert that value. How would I do that efficiently? Thanks.
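
A sketch of one way, using nested CHARINDEX calls to locate the second and third underscores; the same expression can sit in a WHERE clause to filter on the token:

    DECLARE @s varchar(200);
    SET @s = 'DMV1004/343M.TR3432_PLC089_RIC_9843CL';

    DECLARE @u2 int, @u3 int;
    SET @u2 = CHARINDEX('_', @s, CHARINDEX('_', @s) + 1);   -- 2nd underscore
    SET @u3 = CHARINDEX('_', @s, @u2 + 1);                  -- 3rd underscore

    SELECT SUBSTRING(@s, @u2 + 1, @u3 - @u2 - 1) AS token;  -- 'RIC'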

Database name is visible but not available.

Posted: 15 Mar 2013 05:02 AM PDT

When I reboot my server and log into SSMS, my database is listed but it cannot be expanded. The table is not available. If I right-click and take the DB offline, then bring it online, the database is then expandable and available. The server is a VM with the database on an iSCSI SAN volume. Suggestions please!

How to create this result set

Posted: 15 Mar 2013 09:43 AM PDT

Hi, I have these two tables.

Table Node:

[quote]
CREATE TABLE [dbo].[Node](
    [Id] [int] NOT NULL,
    [Name] [varchar](50) NULL,
    PRIMARY KEY CLUSTERED ( [Id] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
[/quote]

Table Edge:

[quote]
CREATE TABLE [dbo].[Edge](
    [FromNode] [int] NOT NULL,
    [ToNode] [int] NOT NULL,
    [Dist] [decimal](10, 3) NOT NULL,
    CONSTRAINT [PK__Edge__023D5A04] PRIMARY KEY CLUSTERED ( [FromNode] ASC, [ToNode] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Edge] WITH CHECK ADD CONSTRAINT [FK__Edge__FromNode__03317E3D] FOREIGN KEY([FromNode]) REFERENCES [dbo].[Node] ([Id])
GO
ALTER TABLE [dbo].[Edge] CHECK CONSTRAINT [FK__Edge__FromNode__03317E3D]
GO
ALTER TABLE [dbo].[Edge] WITH CHECK ADD CONSTRAINT [FK__Edge__ToNode__0425A276] FOREIGN KEY([ToNode]) REFERENCES [dbo].[Node] ([Id])
GO
ALTER TABLE [dbo].[Edge] CHECK CONSTRAINT [FK__Edge__ToNode__0425A276]
[/quote]

Data of table Node:

1 Buenos Aires
2 New York
3 Liverpool
4 Casa Blanca
5 Cape Town

Data of table Edge:

1 2 6.000
1 4 5.000
1 5 4.000
2 3 4.000
3 4 3.000
3 5 6.000
4 3 3.000
4 5 6.000
5 2 8.000

What I need is to create a procedure. The procedure receives a parameter with the start point and end point and then calculates the possible routes to the end point. For instance: if I create a procedure with the name "ReturnNodes" that receives parameter 1 (startpoint) and parameter 2 (endpoint), then it needs to show me all the ways that I have to go from the start point to the end point. ReturnNodes 1,3 should return the following result set:

1,2,3
1,5,3
1,4,2,3

Can someone help create this code? I have tried but I can't find a solution... Thank you
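
A sketch of a recursive CTE for enumerating paths; the LIKE-based cycle check is crude (it only works while node ids stay single-digit), so treat it as a starting point rather than a finished procedure:

    DECLARE @start int, @end int;
    SELECT @start = 1, @end = 3;

    ;WITH Paths AS (
        SELECT e.FromNode, e.ToNode,
               CAST(CAST(e.FromNode AS varchar(12)) + ',' +
                    CAST(e.ToNode AS varchar(12)) AS varchar(4000)) AS Path
        FROM dbo.Edge e
        WHERE e.FromNode = @start
        UNION ALL
        SELECT p.FromNode, e.ToNode,
               CAST(p.Path + ',' + CAST(e.ToNode AS varchar(12)) AS varchar(4000))
        FROM Paths p
        JOIN dbo.Edge e ON e.FromNode = p.ToNode
        WHERE p.Path NOT LIKE '%' + CAST(e.ToNode AS varchar(12)) + '%'  -- avoid cycles
    )
    SELECT Path FROM Paths WHERE ToNode = @end;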

Ways to replicate

Posted: 11 Mar 2013 09:39 PM PDT

Hello, I have a live DB that users need to access for reporting purposes. What is the best way to let them access it without affecting live? They need a recent copy of the DB every minute. Is there any way of replicating the data somewhere else and letting the users access that without issues? Your help is much appreciated. Riri

How to find where sp_spaceused is being executed from?

Posted: 15 Mar 2013 03:30 AM PDT

I inherited an environment where, a couple of times a day, a single spid starts blocking other spids, which causes errors in the application. In researching it using sp_who2, I see the command as DBCC, the program as OSQL-32 and the execution as sp_spaceused @updateusage = 'TRUE'. This runs for 15 minutes or so and then the errors go away because the spid is no longer blocking. I don't know where this is being executed from. I have researched the jobs that are scheduled in Windows during that time and don't see it. I have looked at SQL schedules and don't see it. Is there an easy way to track down where this is coming from?
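
One place to look (a sketch): the session DMVs expose the host and program behind each connection, so filtering on the OSQL client while the block is happening should reveal which machine is issuing it:

    SELECT session_id, host_name, program_name, login_name, login_time
    FROM sys.dm_exec_sessions
    WHERE program_name LIKE 'OSQL%';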

Setting rowcount also limits the available databases in the database drop down...

Posted: 15 Mar 2013 07:16 AM PDT

All, when setting the rowcount in a query in SQL Server Management Studio, the count of databases in the database drop-down in SSMS also shrinks to the number in the "SET ROWCOUNT" statement. For example: SET ROWCOUNT 0 shows all databases; SET ROWCOUNT 2 shows ONLY the top 2 databases on the server. It also seems that setting this in "Tools/Options/Query Execution" has the same behavior. Is this the desired behavior, since the drop-down is using the same session as the query, or just a bug in SSMS? Anton

how to combine and convert two integer columns to datetime

Posted: 15 Mar 2013 07:13 AM PDT

I have a date stored in a column as integer type and a time stored in a column as integer type. How do I combine and convert both columns into datetime type? For example: Col A has 20130314 (yyyymmdd) and Col B has 123000 (hhmmss). My output should be 2013-03-14 12:30:00.
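
A sketch of one way: cast the yyyymmdd integer through char(8) (an unseparated ISO date string converts directly to datetime), then add the time portion as seconds:

    DECLARE @d int, @t int;
    SELECT @d = 20130314, @t = 123000;

    SELECT DATEADD(second,
                   (@t / 10000) * 3600        -- hours
                 + ((@t / 100) % 100) * 60    -- minutes
                 + (@t % 100),                -- seconds
                   CAST(CAST(@d AS char(8)) AS datetime)) AS combined;
    -- 2013-03-14 12:30:00.000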

OLE DB provider 'VFPOLEDB' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode.

Posted: 24 Sep 2011 06:43 PM PDT

hello all. I want to use the vfpoledb provider in SQL Server 2008 on Windows Server 2008. I downloaded vfpoledb from http://www.microsoft.com/download/en/details.aspx?id=14839 and ran this query:

INSERT INTO OPENROWSET('VFPOLEDB', 'dBase IV;Database=C:\Temp;', 'select * from dskwor00')
select * from test

but get the error:

Cannot create an instance of OLE DB provider "VFPOLEDB" for linked server "(null)".

then exec this:

sp_configure 'show advanced options', 1
reconfigure
go
sp_configure 'Ad Hoc Distributed Queries', 1
reconfigure
go
sp_configure 'show advanced options', 0
reconfigure
go
USE [master]
GO
EXEC master.dbo.sp_MSset_oledb_prop N'VFPOLEDB', N'AllowInProcess', 1
GO
USE [master]
EXEC master.dbo.sp_MSset_oledb_prop N'VFPOLEDB', N'DynamicParameters', 1
GO

now get the error:

OLE DB provider 'VFPOLEDB' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode.

what do I do to solve this error? Is there a vfpoledb for Windows Server 2008? please help me.

Why is CDC returning Insert/Delete when I UPDATE a row in a table?

Posted: 15 Mar 2013 06:11 AM PDT

This is my second post asking this question. Hopefully someone will respond this time. I am experimenting with using CDC to track user changes in our application database. So far I've done the following:

[code="sql"]
-- ENABLE CDC ON DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_db
GO

-- ENABLE CDC TRACKING ON THE AVA TABLE IN DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'AVA',
    @role_name = NULL
GO

-- UPDATE A ROW IN THE AVA TABLE FROM SSMS
UPDATE AVA SET AvaDesc = 'Test3' WHERE AvaKey = 119

-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT * FROM cdc.dbo_AVA_CT
GO

-- RESULTS SHOW OPERATION 3 (BEFORE UPDATE) AND 4 (AFTER UPDATE) CORRECTLY
-- __$start_lsn __$end_lsn __$seqval __$operation __$update_mask AvaKey AvaDesc AvaArrKey AvaSAPAppellationID
-- 0x0031E84F000000740008 NULL 0x0031E84F000000740002 3 0x02 119 Test2 6 NULL
-- 0x0031E84F000000740008 NULL 0x0031E84F000000740002 4 0x02 119 Test3 6 NULL
[/code]

The results shown above are what I expect to see. My problem occurs when I use our application to update the same column in the same table. The vb.net application passes a Table Valued Parameter to a stored procedure which updates the table. Below is the creation script for the stored proc:

[code="sql"]
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
if exists (select * from sysobjects where id = object_id('dbo.spdv_AVAUpdate') and sysstat & 0xf = 4)
    drop procedure dbo.spdv_AVAUpdate
GO
CREATE PROCEDURE [dbo].[spdv_AVAUpdate]
    @AVA AS tvpAVA READONLY -- table valued parameter
AS
DECLARE @ErrLogID AS INTEGER
BEGIN TRY
    SET NOCOUNT ON
    BEGIN TRANSACTION

    UPDATE AVA
    SET AVA.AvaDesc = TVP.AvaDesc,
        AVA.AvaArrKey = TVP.AvaArrKey
    FROM @AVA TVP
    INNER JOIN AVA AVA ON (AVA.AvaKey = TVP.AvaKey)

    -- Commit the transaction
    COMMIT TRANSACTION
    -- Return '' for success
    SELECT '' AS ErrMessage
END TRY
BEGIN CATCH
    -- Roll back any active or uncommittable transactions
    IF XACT_STATE() <> 0
    BEGIN
        ROLLBACK TRANSACTION
    END
    -- Log the error into the ErrorLog table
    EXECUTE dbo.uspLogError @ErrLogID = @ErrLogID OUTPUT
    -- Retrieve logged error information.
    SELECT dbo.ErrorLog.*
    FROM dbo.ErrorLog
    WHERE ErrLogID = @ErrLogID
END CATCH
GO
GRANT EXECUTE on dbo.spdv_AVAUpdate TO public
GO
[/code]

When I look at the results of CDC, instead of operations 3 and 4, I see 1 (DELETE) and 2 (INSERT) for the change that was initiated from the stored procedure:

[code="sql"]
-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT * FROM cdc.dbo_AVA_CT
GO

-- RESULTS SHOW OPERATION 1 (DELETE) AND 2 (INSERT) INSTEAD OF 3 AND 4
-- __$start_lsn __$end_lsn __$seqval __$operation __$update_mask AvaKey AvaDesc AvaArrKey AvaSAPAppellationID
-- 0x0031E84F000000740008 NULL 0x0031E84F000000740002 3 0x02 119 Test2 6 NULL
-- 0x0031E84F000000740008 NULL 0x0031E84F000000740002 4 0x02 119 Test3 6 NULL
-- 0x0031E84F00000098000A NULL 0x0031E84F000000980003 1 0x0F 119 Test3 6 NULL
-- 0x0031E84F00000098000A NULL 0x0031E84F000000980004 2 0x0F 119 Test4 6 NULL
[/code]

Does anyone know why this might be happening, and if so, what can be done to correct it? Also, is there any way to get the user id associated with the CDC?

Thanks,
Gina

BULK Question

Posted: 15 Mar 2013 12:34 AM PDT

I export some data from a query into a text file. Then I create a temporary table and bulk insert the data from the text file. Most of it is inserted properly; just a few rows aren't. For example, (×Ϲ⻪ÓîÆ´ÒôÊäÈë) gets messed up during the bulk insert. It comes out like (+ù+Å-¦+ó-+-¬+ô+«+å-¦+Æ+¦+è+ñ+ê+½). (×Ϲ⻪ÓîÆ´ÒôÊäÈë) exists in the text file and in the database, but when I check the temporary table it isn't there.

declare @file sysname;
SET @file = 'C:\Table.txt'
SET nocount ON
CREATE TABLE #tempfile (Column1 varchar(600))
EXEC ('bulk INSERT #tempfile FROM "' + @file + '"')
SELECT DISTINCT *
FROM #tempfile
WHERE Column1 = '×Ϲ⻪ÓîÆ´ÒôÊäÈë'
ORDER BY DisplayName0 ASC
DROP TABLE #tempfile
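
One thing that often matters here, offered as a hedge since the file's actual encoding is unknown: BULK INSERT assumes the server code page unless told otherwise, so declaring the file's encoding, and using an N-typed column, may preserve those characters. A sketch:

    CREATE TABLE #tempfile (Column1 nvarchar(600));

    -- DATAFILETYPE = 'widechar' reads UTF-16 files; a CODEPAGE option
    -- exists for files in other single-byte encodings
    BULK INSERT #tempfile
    FROM 'C:\Table.txt'
    WITH (DATAFILETYPE = 'widechar');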

Extended Events

Posted: 13 Mar 2013 03:03 AM PDT

I only recently discovered Extended Events. Why? I don't know! I jumped in and learned how to use it, and then I let the rest of our SQL team in on the discovery. One of the team members quickly stated that it's worthless and too difficult to use and read, and that about the only thing it might be good for is locating long-running queries. I thought it was easy to create scripts using 2012, and also easy to use SSMS to read the .xel in table format. Am I just dealing with a case of Debbie Downer here, or am I just too excited with the shiny toy? Perfmon seems to be the tool of choice for Debbie. Anyone else think that this is a great tool? What types of things do you prefer to use it for? Thanks!!

SQL 2008 - Min and Max server memory

Posted: 15 Mar 2013 12:06 AM PDT

For SQL 2008, the min and max server memory settings are limited by total memory. If my server has 4 GB memory and 12 processors, what's the max value I can give?

Xe not getting data for sql_text (inputbuffer)

Posted: 15 Mar 2013 03:09 AM PDT

We have an event session in which we are trying to capture the stored proc executed along with the parameters passed. Session here:

[code="sql"]
--Using a comparator predicate
IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='InputbufferTrace')
    DROP EVENT SESSION AuctionTrace ON SERVER;

CREATE EVENT SESSION AuctionTrace
ON SERVER
ADD EVENT sqlserver.sp_statement_completed(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.client_pid,
           sqlserver.session_id, sqlserver.sql_text, sqlserver.tsql_stack)
    WHERE (sqlserver.client_hostname = 'HostName')
),
--ADD EVENT sqlserver.rpc_starting(
--    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.client_pid,
--           sqlserver.session_id, sqlserver.sql_text, sqlserver.tsql_stack)
--    WHERE (sqlserver.client_hostname = 'hostname')
--),
ADD EVENT sqlserver.module_end (
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.client_pid,
           sqlserver.session_id, sqlserver.sql_text, sqlserver.tsql_stack)
    WHERE (sqlserver.client_hostname = 'HostName')
)
--,
--ADD EVENT sqlserver.sp_statement_completed(
--    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.client_pid,
--           sqlserver.session_id, sqlserver.sql_text, sqlserver.tsql_stack)
--    WHERE (sqlserver.client_hostname = 'hostname')
--)
ADD TARGET package0.ring_buffer( SET max_memory=4096 )
WITH (track_causality = on, MAX_DISPATCH_LATENCY = 1 SECONDS)
GO
[/code]

We are confident through testing that we are getting the event information for the proper requests, but it always says that the sql text is unavailable. Through testing, we can get this in profiler. Any thoughts on how to really get this inputbuffer information through extended events? Is it a 2008 R2 issue?

Dynamic Bulk Insert executing as background task?

Posted: 15 Mar 2013 01:43 AM PDT

Question... the following code:

[code="sql"]
set @strSQL = 'bulk insert MYServer..[' + @tablename + '] from ''\\mylocation\subfolder\' + @tableName + '.dat'' with (fieldterminator = ''|'');'
exec sp_executesql @strSQL
print @tableName
[/code]

will run for a while and then spit out 20 or 30 table names... then run for a while and then spit out 20 or 30 more names... as it runs through my list of files to import. Can someone tell me why it's not operating in a sequential order? i.e., 1. bulk insert statement created, 2. run bulk insert statement, 3. PRINT THE DAMN TABLE NAME before going back to step one. Is this a dynamic sql issue, or a bulk insert issue, or a completely unrelated issue?

Can't import Excel xml format

Posted: 15 Mar 2013 02:03 AM PDT

I'm trying to select from an Excel XML file but my code doesn't work. The file I'm trying to import looks like this (the markup did not survive posting):

1 Unit 1 2 Unit 2 3 Unit 3

My code looks like this:

select @xmlData = cast(c1 as xml)
from OPENROWSET (BULK 'C:\Test2.xml', SINGLE_BLOB) as T1(c1)

SELECT ref.value('Cell[1]', 'nvarchar(255)') AS UnitID,
       ref.value('Cell[2]', 'nvarchar(255)') AS UnitName
FROM @xmlData.nodes('/Workbook/Worksheet/Table/Row') xmlData( ref )

It works fine if I strip it down to this (again, markup lost):

1 Unit 1 2 Unit 2 3 Unit 3

Is there any way to get it to work, or is it because Excel's XML format is invalid?
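
One likely wrinkle, noted for reference: Excel's SpreadsheetML declares a default namespace (urn:schemas-microsoft-com:office:spreadsheet), so an unprefixed path matches nothing; binding the namespace is usually the fix. A sketch:

    WITH XMLNAMESPACES (DEFAULT 'urn:schemas-microsoft-com:office:spreadsheet')
    SELECT ref.value('Cell[1]/Data[1]', 'nvarchar(255)') AS UnitID,
           ref.value('Cell[2]/Data[1]', 'nvarchar(255)') AS UnitName
    FROM @xmlData.nodes('/Workbook/Worksheet/Table/Row') AS xmlData(ref);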

secondary database is sink

Posted: 14 Mar 2013 11:02 PM PDT

The secondary database in log shipping is sink; how can I resume?

Security Problems after SSRS Domain Migration

Posted: 14 Mar 2013 09:41 PM PDT

Morning all, I recently carried out an SSRS migration from one domain to another. I used the backup and restore method, and all reports are generating fine and all access for existing users appears to be fine.

However, there is one (fairly significant) problem: some users appear to have been duplicated (it seems there was a problem with new SIDs being generated/mapped for each domain user affected), and this has resulted in me being unable to edit the security on the site levels/folders where this is a problem. I can neither add, amend nor delete users, and was advised to delete the first instance of a user to correct this, but it seems this has to be ruled out, due to the fact that even the top-level folder contains duplicate users, and an error occurs when I attempt to delete the user via the site security.

The original error was: "Response is not well-formed XML"

The error when I try to delete a user now in one of the affected folders is: "role assignment is not valid. The role assignment is either empty or it specifies a user or group name that is already used in an existing role assignment for the current item. (rsInvalidPolicyDefinition)"

I tried amending the ownership and modifiedby entries in the database for one of the specific users to myself, and then deleted the user and policies assigned to him, but unfortunately it never resolved the problem (the user still remains in place with security entries on the site, even if deleted within the database).

Has anyone else ever encountered this problem, and is there a way of resolving this which doesn't mean having to resort to backing up the original DB with the affected users stripped out, in order for a restore to be carried out? A lot of further configuration work and report imports have occurred since this was put in place. Thanks in advance for any responses.

update a column based on idle time

Posted: 14 Mar 2013 08:26 PM PDT

Hi, when a table is idle (not accessed) in SQL Server for some time, say for 3 hours, I need to update one column in that table to flag 1. I update my last_modified_date whenever there is a SELECT/UPDATE/INSERT/DELETE, so based on last_modified_date I need to maintain the session for 3 hours and close it when it passes 3 hours.

Cmd Shell

Posted: 14 Mar 2013 09:58 PM PDT

Hi Team, is there any query to find whether cmd shell (xp_cmdshell) is enabled or disabled? I want to know the current status. Please help...
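
For reference, sys.configurations reports it directly:

    SELECT name, value, value_in_use   -- value_in_use = 1 means enabled
    FROM sys.configurations
    WHERE name = 'xp_cmdshell';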

restore backup

Posted: 14 Mar 2013 08:14 PM PDT

How do I restore a database of size 30 GB from a backup of size 10 GB when there is only 15 GB of disk space, in SQL Server?

downgrade service pack

Posted: 14 Mar 2013 09:17 PM PDT

Is it possible to downgrade SP4 to SP3 in SQL 2005? What is the procedure for this?

EncryptByPassPhrase depends on datatype?

Posted: 14 Mar 2013 07:58 PM PDT

Can I use EncryptByPassPhrase for a column of any datatype? I tried it using integer, varchar and nvarchar columns and it worked properly. But when I use it for a column of type nchar(10) I get the error:

Msg 8152, Level 16, State 5, Line 1
String or binary data would be truncated.
The statement has been terminated.

In the article http://technet.microsoft.com/en-us/library/ms190357(v=sql.105).aspx it is used only for varbinary. Can it be used for any type of column, given that it is working for integer, varchar and nvarchar? Please clarify my doubts. Thank you.

endpoints

Posted: 14 Mar 2013 08:16 PM PDT

What are endpoints in mirroring?
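
For context: a mirroring endpoint is the dedicated TCP listener each instance uses for mirroring traffic between the partners (and witness). A minimal sketch, with the port and role as assumptions:

    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = PARTNER);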
