Tuesday, March 5, 2013

[how to] How to properly kill MySQL?



How to properly kill MySQL?

Posted: 05 Mar 2013 09:27 PM PST

I have CentOS 64bit with CPanel installed and I use

service mysql stop  

However, it just keeps printing dots for minutes and never finishes. It used to be instant. Any idea why it does that and how to fix it? Right now I have to run killall -9 mysql, but is there a better way?

The server is also very active.

RSExecRole missing

Posted: 05 Mar 2013 09:23 PM PST

I have multiple servers and 4 of them have reporting services installed with the ReportServer & ReportServerTempDB databases.

A few days ago I was setting up security for a new login and I found out that the RSExecRole role is missing in my dev server (in the reportserver database). It exists on the ReportServerTempDB, master & msdb databases.

I found a way on MSDN to create it on the master & msdb databases, but it didn't help me create it on ReportServer with all the securables & properties similar to the other environments I run.

Has anyone encountered this problem before? Can anyone help me with a script and an explanation about this role?

Web based tool to backup PostgreSQL

Posted: 05 Mar 2013 09:14 PM PST

Please suggest a web-based tool to take an hourly backup of a PostgreSQL database.

Run job in agent based on table data

Posted: 05 Mar 2013 09:19 PM PST

Is there a way to start a SQL Server agent job if a table is not empty?

Basically, first I want to check a table, and if the table is not empty then start the job. If it is empty, then check every 1 hour to see whether it has the required information.

I want to run this job once a day once I have data in the table.
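A hedged sketch of one way to wire this up, assuming a hypothetical staging table dbo.StagingData and an existing job named 'Load Staging': schedule a small wrapper job hourly whose only step checks the table and starts the real job when rows are present.

-- Hourly wrapper job step (T-SQL): start the real job only when the table has rows.
-- dbo.StagingData and 'Load Staging' are placeholder names.
IF EXISTS (SELECT 1 FROM dbo.StagingData)
BEGIN
    EXEC msdb.dbo.sp_start_job @job_name = N'Load Staging';
END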

Error while connecting SQL Server management studio 2008

Posted: 05 Mar 2013 09:12 PM PST

I have installed SQL Server 2008 Management Studio. When I try to connect, it shows an error. Also, when I go to SQL Server Configuration Manager and click on SQL Server Services, this message is shown:

There are no items in list to view

No connections are found in the SQL Server configuration manager while my local network is connected.

How can I fix this?

What does the bit data type map to in a Java Domain Object

Posted: 05 Mar 2013 07:25 PM PST

When you have a column of the bit data type, what class would be used to represent it in a Java POJO that maps the table to an object using javax.persistence annotations?

I tried Boolean, which didn't work and produced the following errors:

org.hibernate.exception.SQLGrammarException: could not insert: [com.DomainClassName]
java.sql.SQLException: Incorrect syntax near '-'.

EDIT: a Java boolean does map to the bit data type in SQL Server. I had a column name with a dash in it that I had to change.

MySQL Fulltext match with forward slashes

Posted: 05 Mar 2013 06:20 PM PST

I'm trying to run a fulltext match against a product database, where the product names have forward slashes in them. These are staple sizes, so it's quite important that they get matched.

However, for every query I execute with a forward slash, I get 0 results.

The query I'm using is:

SELECT `product`.*  FROM `product`  WHERE MATCH (`product`.`name`) AGAINST('26/6')  

I have also tried

SELECT `product`.*  FROM `product`  WHERE MATCH (`product`.`name`) AGAINST('\"26/6\"')  

However, running the following returns 6 results, as expected:

SELECT `product`.*  FROM (`product`)  WHERE `product`.`name` LIKE '%26/6%'  

The table is MyISAM and the product.name is varchar(255), with a FULLTEXT index on that column alone. If I search using fulltext for something else, like 'tool', I get the results as expected, and the server is configured for 4 character or more matching.

What do I need to do to make this fulltext match work?
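One hedged explanation: the MyISAM full-text parser treats the slash as a word separator, so '26/6' is indexed as the two words '26' and '6', and both fall below the 4-character minimum mentioned above, which is why MATCH finds nothing while LIKE does. A sketch of checking and changing that limit (changing it requires a server restart and an index rebuild):

-- Check the minimum indexed word length (ft_min_word_len defaults to 4 for MyISAM):
SHOW VARIABLES LIKE 'ft_min_word_len';

-- After lowering ft_min_word_len (e.g. to 2) in my.cnf and restarting the server,
-- rebuild the full-text index so the shorter tokens get indexed:
REPAIR TABLE `product` QUICK;

Even then the slash itself is not indexed, so '26/6' is still matched as the two words 26 and 6; combining MATCH for candidate filtering with LIKE '%26/6%' for the exact form is another option.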

MySQL Table not repairing

Posted: 05 Mar 2013 09:06 PM PST

Table info:

Database name: user_motiva
Table: wp_options (files: wp_options.frm, wp_options.MYD, wp_options.MYI, wp_options.TMD)

When I run mysqlcheck -r --all-databases, it gets hung on that table even if I let it sit all day.

Is there another way to fix/repair/recover that table?

Should I use myisamchk? I saw something like:

shell> myisamchk --recover City  

My config on a 16GB ram box

cat /etc/my.cnf
[mysqld]
default-storage-engine=MyISAM
local-infile=0
symbolic-links=0
skip-networking
max_connections = 500
max_user_connections = 20
key_buffer = 512M
myisam_sort_buffer_size = 64M
join_buffer_size = 64M
read_buffer_size = 12M
sort_buffer_size = 12M
read_rnd_buffer_size = 12M
table_cache = 2048
thread_cache_size = 16K
wait_timeout = 30
connect_timeout = 15
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
max_connect_errors = 10
query_cache_limit = 1M
query_cache_size = 64M
query_cache_type = 1
low_priority_updates=1
concurrent_insert=ALWAYS
log-error=/var/log/mysql/error.log
tmpdir=/home/mysqltmp
myisam_repair_threads=4

[mysqld_safe]
open_files_limit = 8192
log-error=/var/log/mysql/error.log

[mysqldump]
quick
max_allowed_packet = 512M

[myisamchk]
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M

Could this have happened because of a table crash from doing killall -9 mysqld because it would not shut down and restart?

EDIT:

root@server [/var/lib/mysql/user_motiva]# myisamchk -e *.MYI
Checking MyISAM file: wp_options.MYI
Data records:    1827   Deleted blocks:       3
myisamchk: warning: 3 clients are using or haven't closed the table properly
- check file-size
- check record delete-chain
- check key delete-chain
- check index reference
- check data record references index: 1
- check data record references index: 2
- check records and index references
MyISAM-table 'wp_options.MYI' is usable but should be fixed
root@server [/var/lib/mysql/user_motiva]# myisamchk --safe-recover wp_options.MYI
- recovering (with keycache) MyISAM-table 'wp_options.MYI'
Data records: 1827
myisamchk: error: Can't create new tempfile: 'wp_options.TMD'
MyISAM-table 'wp_options.MYI' is not fixed because of errors
Try fixing it by using the --safe-recover (-o), the --force (-f) option or by not using the --quick (-q) flag
root@ns2 [/var/lib/mysql/user_motiva]# myisamchk -o -f wp_options.MYI
- recovering (with keycache) MyISAM-table 'wp_options.MYI'
Data records: 1827

Does this mean it is now fixed? If so, how do I move it back? (This was done on a different server.) Is there a way to bring MySQL down on the main server and run a command to fix all the files?
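A hedged sketch for the main server, assuming the stale wp_options.TMD file (which blocked myisamchk from creating a new temp file) is removed from the data directory first; the server can then be asked to repair the table itself:

-- Repair the single crashed table from inside MySQL (a sketch, not a guaranteed fix):
REPAIR TABLE user_motiva.wp_options;

-- Verify afterwards:
CHECK TABLE user_motiva.wp_options;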

How do I get more details on a full text catalog error?

Posted: 05 Mar 2013 11:55 AM PST

I have an error that is repeatedly generated on a SQL Server 2005 instance running the databases for MS Team Foundation Server.

 'Severity 16' occurred on INSTANCENAME  

Looking in the SQL Server logs gives me

 Errors were encountered during full-text index population for table or indexed view '[TfsWorkItemTracking].[dbo].[WorkItemLongTexts]', database 'TfsWorkItemTracking' (table or indexed view ID '1810105489', database ID '14'). Please see full-text crawl logs for details.  

And the actual full-text crawl log (\MSSQL\LOG\SQLFT0001300005.LOG) gives me:

 Error '0x800b0100' occurred during full-text index population for table or indexed view      The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.3\MSSQL\Binn\MSFTE.DLL'  

If I rebuild the full-text catalog like so...

ALTER FULLTEXT CATALOG [TeamFoundationServer10FullTextCatalog] REBUILD

The error will stop happening for a while, but it will start up again days later. I have searched for hours and cannot figure out what is causing this. Any suggestions?

Optimizing an audit table for inserts and deletes

Posted: 05 Mar 2013 12:15 PM PST

I'm trying to identify the best way to design a table for audit purposes. Basically I log the last several events for many users in one table. There's a limit on the number of records for each user; as new records come in, the oldest go out.

Something like this:

CREATE TABLE Audit
(
  UserId INT NOT NULL,
  EventId INT NOT NULL,
  CreationDate DATETIME NOT NULL,
  UpdateDate DATETIME NULL,
  -- Other data fields
)

The question is what to do with indexes. I'm thinking of creating the clustered index on (UserId, EventId). But since user activity happens independently it would mean inserts in the middle of the table and deletes in the middle of the table. Probably not good.

Another thought is to add an artificial AuditId field just to have new records get increasing numbers. Like this:

CREATE TABLE Audit
(
  Id INT, -- Becomes the clustered index
  -- The same as above
)

This way new audit entries will be appended to the end but deletes will still happen in the middle of the table. It's probably better than the first option but I'm not sure.

This table will be used frequently, basically every user activity is going to be logged (1 insert) and the oldest activity gets deleted in the same transaction (1 delete). This needs to be fast. Well, I want it to be instantaneous ideally and not noticeable performance-wise.

I also need to be able to retrieve the set of records for a particular user quickly. It can probably be covered by a non-clustered index.

I'm asking for advice in designing this table for optimal performance.
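A minimal sketch of the second option in T-SQL, with a surrogate clustered key so inserts append to the end, plus a nonclustered index to serve the per-user lookups (names, types, and the INCLUDE list are placeholders to adapt):

CREATE TABLE dbo.Audit
(
    AuditId      INT IDENTITY(1,1) NOT NULL,
    UserId       INT NOT NULL,
    EventId      INT NOT NULL,
    CreationDate DATETIME NOT NULL,
    UpdateDate   DATETIME NULL,
    -- Other data fields
    CONSTRAINT PK_Audit PRIMARY KEY CLUSTERED (AuditId)
);

-- Covers "all recent events for one user" and lets the oldest row per user be found cheaply.
CREATE NONCLUSTERED INDEX IX_Audit_User
    ON dbo.Audit (UserId, CreationDate)
    INCLUDE (EventId, UpdateDate);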

EDIT: I think I have missed something important to mention.

What I'm trying to track is not an instant in time but rather a period in time. There are several places in the system where I need this. Consider the user performing an activity of some sort that may span some period of time. If certain conditions are met, an existing activity is reused (refreshed, updated). I only wish to delete older, essentially abandoned activities. For example, within 2 weeks one user may have issued 50 activities, while another user may take over a year to produce that many. That's why I don't want a single, globally ordered log for all users together.

It is also not clear how I should cluster by datetime (as suggested). Do I do it on the initial creation event or on the update event?

Error 1044 Access denied to user

Posted: 05 Mar 2013 01:50 PM PST

This is driving me crazy.

When I try to create a few tables from my Workbench model I get this error 1044.

I've been trying to find a solution but nothing works for me.

Curiously, when I run SELECT USER(), CURRENT_USER(); I get two versions of the same user: one is techboy@(an IP address) and the other is techboy@%.

Access is denied to both at one point or another.

The MySQL server is a remotely hosted server with the user permissions correctly set.

How do I understand this deadlock?

Posted: 05 Mar 2013 11:43 AM PST

I have two threads doing inserts into a table at the same time, causing a deadlock. The interaction with the table is in a new transaction, so I'm pretty confident there is nothing else going on here.

What is the issue?

The following is the table (the primary key is the only index):

CREATE TABLE [dbo].[ImageCache](
    [ImageStoreKey] [nvarchar](255) NOT NULL,
    [ImageData] [varbinary](max) NULL,
    [LastModified] [datetime] NULL,
    [StoredInRemote] [bit] NOT NULL,
 CONSTRAINT [PK_ImageCache] PRIMARY KEY CLUSTERED
(
    [ImageStoreKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
       IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = OFF,
       ALLOW_PAGE_LOCKS = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

I have a deadlock that looks like two statements doing this at the same time on the same table, and then deadlocking each other (I'm taking a sample here of a different execution because the deadlock XML doesn't have the full declaration of the prepared statement):

declare @p1 int
set @p1=2218
exec sp_prepexec @p1 output,
    N'@P0 varbinary(max),@P1 datetime2,@P2 bit,@P3 nvarchar(4000)',
    N'insert into ImageCache (imageData, lastModified, storedInRemote, imageStoreKey)
      values (@P0, @P1, @P2, @P3)',
    [binary data],'2013-03-05 10:44:53.6050000',0,N'257-27c440c1980070224a79'
select @p1

Multiple threads inserting into this table in the same way cause the following deadlock:

<deadlock-list>
 <deadlock victim="processf1d828">
  <process-list>
   <process id="processc3eb08" taskpriority="0" logused="0"
     waitresource="OBJECT: 7:1282220464:0 " waittime="218" ownerId="5521931466"
     transactionname="implicit_transaction" lasttranstarted="2013-03-04T15:54:48.543"
     XDES="0x7498c0700" lockMode="X" schedulerid="1" kpid="7288" status="suspended"
     spid="145" sbid="0" ecid="0" priority="0" transcount="2"
     lastbatchstarted="2013-03-04T15:54:48.543"
     lastbatchcompleted="2013-03-04T15:54:48.497"
     clientapp="Microsoft JDBC Driver for SQL Server" hostname="newappserver"
     hostpid="0" loginname="User" isolationlevel="read committed (2)"
     xactid="5521931466" currentdb="7" lockTimeout="4294967295"
     clientoption1="671088672" clientoption2="128058">
    <executionStack>
     <frame procname="adhoc" line="1" stmtstart="120"
       sqlhandle="0x02000000732b8e307aef74c20d8606c2b827936fe195eee9">
      insert into ImageCache (imageData, lastModified, storedInRemote, imageStoreKey)
      values (@P0, @P1, @P2, @P3)
     </frame>
     <frame procname="unknown" line="1"
       sqlhandle="0x000000000000000000000000000000000000000000000000">
      unknown
     </frame>
    </executionStack>
    <inputbuf>
     Select 1
    </inputbuf>
   </process>
   <process id="processf1d828" taskpriority="0" logused="0"
     waitresource="OBJECT: 7:1282220464:0 " waittime="218"
     ownerId="5522008674" transactionname="implicit_transaction"
     lasttranstarted="2013-03-04T15:54:54.843" XDES="0x66e46ca90"
     lockMode="X" schedulerid="6" kpid="11456" status="suspended"
     spid="316" sbid="0" ecid="0" priority="0" transcount="2"
     lastbatchstarted="2013-03-04T15:54:54.843"
     lastbatchcompleted="2013-03-04T15:54:54.843"
     clientapp="Microsoft JDBC Driver for SQL Server"
     hostname="newappserver" hostpid="0" loginname="User"
     isolationlevel="read committed (2)" xactid="5522008674"
     currentdb="7" lockTimeout="4294967295" clientoption1="671088672"
     clientoption2="128058">
    <executionStack>
     <frame procname="adhoc" line="1" stmtstart="120"
       sqlhandle="0x02000000732b8e307aef74c20d8606c2b827936fe195eee9">
      insert into ImageCache (imageData, lastModified, storedInRemote, imageStoreKey)
      values (@P0, @P1, @P2, @P3)
     </frame>
     <frame procname="unknown" line="1"
       sqlhandle="0x000000000000000000000000000000000000000000000000">
      unknown
     </frame>
    </executionStack>
    <inputbuf>
     Select 1
    </inputbuf>
   </process>
  </process-list>
  <resource-list>
   <objectlock lockPartition="0" objid="1282220464" subresource="FULL"
     dbid="7" objectname="Database.dbo.ImageCache" id="locke394cf80"
     mode="IX" associatedObjectId="1282220464">
    <owner-list>
     <owner id="processc3eb08" mode="IX"/>
     <owner id="processf1d828" mode="IX"/>
    </owner-list>
    <waiter-list>
     <waiter id="processf1d828" mode="X" requestType="convert"/>
     <waiter id="processc3eb08" mode="X" requestType="convert"/>
    </waiter-list>
   </objectlock>
  </resource-list>
 </deadlock>
</deadlock-list>
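One detail that stands out (a hedged observation, not a confirmed root cause): the primary key was created with ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF, so every insert must lock at the object level, and the graph above shows both sessions holding IX on the table and waiting to convert to X. A sketch of re-enabling the finer-grained locks:

-- Let inserts take row/page locks again instead of competing for object-level X locks.
-- Test in a non-production environment first.
ALTER INDEX [PK_ImageCache] ON [dbo].[ImageCache]
SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);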

Slow Queries Not Logging

Posted: 05 Mar 2013 01:37 PM PST

I am attempting to enable slow query logging on our server in order to identify any queries that could use optimization. Sounds simple enough; however, my file is not being written to. I get no errors or anything of the like; it just doesn't seem to be logging the slow queries. I have remembered to restart mysql after my config changes.

I am using MySQL 5.1.61. Here is what I have in my.cnf:

slow-query-log=1
slow-query-log-file=/var/logs/my.slow.log
long_query_time=1

The file /var/logs/my.slow.log has mysql as the owner; in the interest of debugging I also gave read/write permissions to all users on the log file.

I have the long_query_time set to 1 in the above as I just want to see if it is working. I have tried setting it lower (e.g. 0.3) but I'm still not getting anything logged. I know that the queries my app runs take longer than 1 second, and I've also run intentionally long queries (SELECT SLEEP(10);) in the terminal for testing, and the log is still empty.

I've looked through the docs, from what I can see this should be working. Anyone have any suggestions as to what I am doing wrong? Any advice would be appreciated, thanks much!

Edit: As asked in comments I ran the query:

SELECT variable_value FROM information_schema.global_variables WHERE variable_name IN ('slow_query_log','slow_query_log_file','long_query_time');

The result:

10.0000000
/var/run/mysqld/mysqld-slow.log
OFF

Obviously my configuration changes are not being taken into account, as I believe these are the defaults. I am quite certain the my.cnf file I am altering is being parsed, since if I put in an invalid value MySQL errors on restart. What could be going on here?

Another Edit:

After taking @RolandoMySQLDBA's advice and moving my slow query config lines under [mysqld], my settings seem to be saving. Now the result of the above variable_value query is:

1.0000000
/var/logs/my.slow.log
ON

However I am still not seeing the file my.slow.log being written to. I don't think it is a permissions issue as the file is owned by mysql and I've added all permissions for all users on the file. Could anyone think of a reason why this would not be working?

Edit: Solved! The path to the slow query log was incorrect; it should have been /var/log/my.slow.log instead of /var/logs/my.slow.log. Thanks to all for the help, I've learned a lot!
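For reference, on MySQL 5.1 the same settings can also be changed at runtime without a restart; a sketch that mirrors the working values above:

SET GLOBAL slow_query_log      = 1;
SET GLOBAL slow_query_log_file = '/var/log/my.slow.log';
SET GLOBAL long_query_time     = 1;

-- Confirm what the server actually sees:
SHOW GLOBAL VARIABLES LIKE 'slow_query%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';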

Sleep Processes are Running Longer Than Allowed

Posted: 05 Mar 2013 04:02 PM PST

From time to time my MySQL server will run into a "too many connections" error. I believe part of the problem is the large number of sleeping processes that are running, some for over 1000 seconds.

When looking into the issue, I saw this and investigated. I checked my timeout values with

show variables like "%timeout";  

And both interactive_timeout and wait_timeout are set to 600.

How/Why would these sleep processes be running so long, if they're supposed to stop at 600?

Multiple database servers for performance vs failover

Posted: 05 Mar 2013 11:47 AM PST

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two db servers, will I be able to have both db servers active with synced data, with web1 to db1, web2 to db2 setup? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Join to a @Table Variable is running inefficiently

Posted: 05 Mar 2013 11:47 AM PST

Alright, so I have a report sproc that was running incredibly slowly. Customers were complaining that the report would not run, so I started investigating exactly where in the sproc the problem was, and I found this portion taking up 99.8% of the time.

DECLARE @xmlTemp TABLE (
  CompanyID  INT,
  StoreID    INT,
  StartDate  DATETIME,
  DateStaID  INT,
  EndDate    DATETIME,
  DateEndID  INT,
  LastUpdate DATETIME)

INSERT INTO @xmlTemp
VALUES (50,
        2,
        '3/3/2013',
        0,
        '3/3/2013',
        0,
        '3/3/2013')

SELECT DISTINCT T.CompanyID,
                CompanyName,
                Z.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Total - Isnull((SELECT Sum(DISTINCT PaymentAmount)
                                FROM   vPullDrawerPayments
                                WHERE  CompanyID = T.CompanyID
                                       AND StoreID = T.StoreID
                                       AND TransactionID = T.TransactionID
                                       AND Isnull(PaymentType, 1) <> 1), 0) AS PaymentAmount,
                'Cash'                                                      AS PaymentDesc,
                CASE
                  WHEN Z.EndDate >= Z.LastUpdate THEN 1
                  ELSE 0
                END                                                         AS MissingData
FROM   vPullDrawerPayments AS T
       INNER JOIN @xmlTemp AS Z
         ON T.CompanyID = Z.CompanyID
            AND T.StoreID = Z.StoreID
WHERE  BusinessDate BETWEEN Z.StartDate AND Z.EndDate
UNION ALL
SELECT DISTINCT NC.CompanyID,
                CompanyName,
                Z.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                PaymentAmount,
                PaymentDesc,
                CASE
                  WHEN Z.EndDate >= Z.LastUpdate THEN 1
                  ELSE 0
                END AS MissingData
FROM   vPullDrawerPayments AS NC
       INNER JOIN @xmlTemp AS Z
         ON NC.CompanyID = Z.CompanyID
            AND NC.StoreID = Z.StoreID
WHERE  BusinessDate BETWEEN Z.StartDate AND Z.EndDate
       AND Isnull(PaymentType, 1) <> 1
UNION ALL
SELECT DISTINCT C.CompanyID,
                CompanyName,
                Z.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Sum(Abs(LineAmount)) AS PaymentAmount,
                'Coupons'            AS PaymentDesc,
                CASE
                  WHEN Max(Z.EndDate) >= Max(Z.LastUpdate) THEN 1
                  ELSE 0
                END                  AS MissingData
FROM   vPullDrawerPayments AS C
       INNER JOIN @xmlTemp AS Z
         ON C.CompanyID = Z.CompanyID
            AND C.StoreID = Z.StoreID
WHERE  BusinessDate BETWEEN Z.StartDate AND Z.EndDate
GROUP  BY C.CompanyID,
          CompanyName,
          Z.StoreID,
          StoreName,
          CashedOutBy,
          TransactionID,
          RegisterID,
          BusinessDate,
          CashedOut

The @xmlTemp portion of this query is normally used to take parameters from our web application and turn them into parameters the report can actually use. For the sake of testing this I am just inserting values that run this for one store for one day. Running this portion can take upwards of 20 minutes.

So I ran this query plan through PlanExplorer and saw it was pulling all of the data from two of my fact tables instead of filtering out just that store and that day. As seen in the picture below.

[QueryPlan image] Obviously this is bad. So the next step I took was to cut out the join to @xmlTemp and just manually put the values in the query's WHERE clause to see how well that worked.

SELECT DISTINCT T.CompanyID,
                CompanyName,
                T.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Total - Isnull((SELECT Sum(DISTINCT PaymentAmount)
                                FROM   vPullDrawerPayments
                                WHERE  CompanyID = T.CompanyID
                                       AND StoreID = T.StoreID
                                       AND TransactionID = T.TransactionID
                                       AND Isnull(PaymentType, 1) <> 1), 0) AS PaymentAmount,
                'Cash'                                                      AS PaymentDesc
--CASE WHEN Z.'3/3/2013' >= Z.LastUpdate THEN 1 ELSE 0 END AS MissingData
FROM   vPullDrawerPayments AS T
WHERE  CompanyID = 50
       AND StoreID = 1
       AND BusinessDate BETWEEN '3/3/2013' AND '3/3/2013'
UNION ALL
SELECT DISTINCT NC.CompanyID,
                CompanyName,
                NC.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                PaymentAmount,
                PaymentDesc
--CASE WHEN Z.'3/3/2013' >= Z.LastUpdate THEN 1 ELSE 0 END AS MissingData
FROM   vPullDrawerPayments AS NC
WHERE  CompanyID = 50
       AND StoreID = 1
       AND BusinessDate BETWEEN '3/3/2013' AND '3/3/2013'
       AND Isnull(PaymentType, 1) <> 1
UNION ALL
SELECT DISTINCT C.CompanyID,
                CompanyName,
                C.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Sum(Abs(LineAmount)) AS PaymentAmount,
                'Coupons'            AS PaymentDesc
--CASE WHEN MAX(Z.'3/3/2013') >= MAX(Z.LastUpdate) THEN 1 ELSE 0 END AS MissingData
FROM   vPullDrawerPayments AS C
WHERE  CompanyID = 50
       AND StoreID = 1
       AND BusinessDate BETWEEN '3/3/2013' AND '3/3/2013'
GROUP  BY C.CompanyID,
          CompanyName,
          C.StoreID,
          StoreName,
          CashedOutBy,
          TransactionID,
          RegisterID,
          BusinessDate,
          CashedOut

By changing this to a simple WHERE clause it runs in 4 seconds instead of 20 minutes and up, and the query plan now looks correct. Is there any reason I should be seeing this behavior?

Edit: here is the full link to the query plan.
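One plausible explanation (a hedged guess based on the plans described above, not a confirmed diagnosis): a table variable carries no statistics, so the optimizer tends to assume it holds about one row and can pick a plan that scans the fact views instead of filtering on the company, store, and dates. Two common workarounds are adding OPTION (RECOMPILE) to the statement that joins @xmlTemp, so the real row count is visible at execution time, or loading the parameters into a temporary table, which does get statistics. A minimal sketch of the second option:

-- Load the report parameters into a #temp table instead of a table variable;
-- the report query then joins to #xmlTemp exactly as it joined to @xmlTemp.
CREATE TABLE #xmlTemp
(
    CompanyID  INT,
    StoreID    INT,
    StartDate  DATETIME,
    DateStaID  INT,
    EndDate    DATETIME,
    DateEndID  INT,
    LastUpdate DATETIME
);

INSERT INTO #xmlTemp (CompanyID, StoreID, StartDate, DateStaID, EndDate, DateEndID, LastUpdate)
VALUES (50, 2, '3/3/2013', 0, '3/3/2013', 0, '3/3/2013');

-- Alternatively, keep @xmlTemp and append OPTION (RECOMPILE) to the big SELECT ... UNION ALL ... statement.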

How can I configure SQL Server 2012 to use multiple NICs?

Posted: 05 Mar 2013 09:16 PM PST

How can I configure SQL Server 2012 to use multiple NICs?

I've a server with 4 NICs. I want to use 2 NICs to query my databases, and 2 NICs to retrieve results from them.

I want them to work in load-balancing mode and still continue to work if one NIC is down.

Is it possible?

Two dichotomy views of a table, and means to express this as a stored procedure?

Posted: 05 Mar 2013 07:58 PM PST

There is a table events. Some relatively complex logic determines for each event whether it is archived. Then I need two more views: All archived events and all not archived events.

How do I express this? Should I create a view, a procedure, or a function which tells for every event ID whether it is archived (in order to use it in the two above-mentioned views)?

MySQL.
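A hedged sketch of one arrangement in MySQL: put the complex logic in a single stored function and build both views on top of it, so the rule lives in one place. The function body and the events.id column are placeholders.

DELIMITER //
CREATE FUNCTION event_is_archived(p_event_id INT) RETURNS TINYINT(1)
    READS SQL DATA
BEGIN
    DECLARE v_archived TINYINT(1) DEFAULT 0;
    -- ... the "relatively complex logic" goes here, setting v_archived to 0 or 1 ...
    RETURN v_archived;
END//
DELIMITER ;

CREATE VIEW archived_events   AS SELECT e.* FROM events AS e WHERE event_is_archived(e.id) = 1;
CREATE VIEW unarchived_events AS SELECT e.* FROM events AS e WHERE event_is_archived(e.id) = 0;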

How to restore mysql dump file?

Posted: 05 Mar 2013 09:47 PM PST

We have created a dump file (.sql) using the mysqldump command in MySQL 5.5. Now we want to restore it. We are using the command:

-u[username] -p[pwd] [dbname] < [.sql file]

But we are not able to restore it. We are using Windows 7 SP1 32-bit.

How to avoid deadlock while updating 2 rows in MySQL

Posted: 05 Mar 2013 07:45 PM PST

This is an interview problem:

There is a Perl program that updates the database, and it could run in different processes.

One process may execute a transaction like:

update row A -> update row B -> commit  

The other process may execute a transaction like:

update row B -> update row A -> commit  

The rows that need to be updated are selected in the program before the transaction.

I was asked how to avoid deadlocks without changing the transaction logic (I cannot commit after updating A and commit again after updating B).

They want me to propose at least 3 different methods. What I know is to use SELECT ... FOR UPDATE when selecting rows A and B. Can anyone suggest some other methods?
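For reference, one of the standard answers is to acquire the row locks in a consistent order before updating, so no two transactions can ever hold them in opposite order; a minimal sketch against a hypothetical table t keyed by id:

-- Lock both rows in a fixed order (by primary key) up front, then update in any order.
START TRANSACTION;

SELECT id
FROM   t
WHERE  id IN (1, 2)
ORDER  BY id
FOR UPDATE;          -- both row locks are now held by this transaction

UPDATE t SET val = val + 1 WHERE id = 2;
UPDATE t SET val = val + 1 WHERE id = 1;

COMMIT;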

Restore SQL Server database using Windows Powershell 3.0

Posted: 05 Mar 2013 06:32 PM PST

I'm trying to restore a SQL Server database with a PowerShell script, but I'm having problems.

Here is the error I'm getting:

Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server 'WUSFK250042-OLU\SQLSERVER2008R2'. " At line:48 char:1 + $smoRestore.SqlRestore($server)

Here is my code:

#clear screen
cls

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
#Need SmoExtended for backup
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

$backupFile = "C:\SafewayRT\SafewayRTFUll.bak"

#we will query the database name from the backup header later
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "WUSFK250042-OLU\SQLSERVER2008R2"
$backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backupFile, "File")
$smoRestore = new-object("Microsoft.SqlServer.Management.Smo.Restore")

#restore settings
$smoRestore.NoRecovery = $false;
$smoRestore.ReplaceDatabase = $true;
$smoRestore.Action = "Database"
$smoRestorePercentCompleteNotification = 10;
$smoRestore.Devices.Add($backupDevice)

#get database name from backup file
$smoRestoreDetails = $smoRestore.ReadFileList($server)

#display database name
"Database Name from Backup Header : " +$smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#give a new database name
$smoRestore.Database =$smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#specify new data and log files (mdf and ldf)
$smoRestoreFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$smoRestoreLog = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")

#the logical file names should be the logical filename stored in the backup media
$smoRestoreFile.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]
$smoRestoreFile.PhysicalFileName = $server.Information.MasterDBPath + "\" + $smoRestore.Database + "_Data.mdf"
$smoRestoreLog.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"] + "_Log"
$smoRestoreLog.PhysicalFileName = $server.Information.MasterDBLogPath + "\" + $smoRestore.Database + "_Log.ldf"
$smoRestore.RelocateFiles.Add($smoRestoreFile)
$smoRestore.RelocateFiles.Add($smoRestoreLog)

#restore database
$smoRestore.SqlRestore($server)

How can I set a default session sql_mode for a given user?

Posted: 05 Mar 2013 02:32 PM PST

I want to have a default session sql_mode that is different from the global sql_mode.

I can set it manually, but I'd like to do it automatically.

I thought I could add it to .my.cnf, but that doesn't work. I tried adding it to the [mysql] group in .my.cnf, and it does not throw an error.

Yet, when I connect and check my session sql_mode, the session still inherits the global sql_mode.

I'm using MySQL 5.5.16.
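One hedged workaround: have the server run a SET statement for every new connection via init_connect. It is not truly per-user, and it is skipped for accounts that have the SUPER privilege, but it does give non-privileged sessions a different default than the global sql_mode. A sketch:

-- Executed by the server for each new (non-SUPER) connection:
SET GLOBAL init_connect = 'SET SESSION sql_mode = ''STRICT_ALL_TABLES''';

-- Verify from a fresh connection:
SELECT @@SESSION.sql_mode, @@GLOBAL.sql_mode;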

Which RDBMS has the most advanced "table-valued" functions?

Posted: 05 Mar 2013 04:32 PM PST

Every database management system focuses on specific features and applications. I am using MySQL, probably because it is popular and satisfies common needs. However, I came across a problem that needs advanced string functions (probably at the level of a scripting language), posted as a question with a solution in MySQL, though NOT a straightforward one (How to split a string and get info from another table in MySQL).

Thus, I thought that there should be other RDBMSes with functions suitable for such tasks. Can you suggest alternative RDBMSes for splitting a cell (string) and connecting the sub-cells to other tables? It is not even necessary to build a virtual table via JOIN, just to fetch the appropriate values from another table.

MySQL generic trigger to save identification data for later accessing of the changed row

Posted: 05 Mar 2013 04:32 PM PST

I am pretty inexperienced with this.

I need a generic trigger, able to create and save in a fixed table some sort of identification data for a changed row from generic (any) table. The identification data should be used later to SELECT the changed item in the given table.

Can this be done without previously knowing the table structure?

The only idea I had (which is way too inefficient in my opinion, and also requires previous knowledge of the table column names) is to save a hash:

MD5(concat(NEW.field1, NEW.field2, NEW.field3, ...))  

then

SELECT * FROM chaged_table WHERE hash = MD5(concat(field1, field2, field3, ...))  

to identify the changed row in the table which triggered the change.

I would greatly appreciate any help or suggestions!

How do I subtract a number of days from a prepared statement parameter in DB2?

Posted: 05 Mar 2013 01:46 PM PST

I have a C# application using OLEDB to connect to a DB2 database. Basically, I want to do this:

SELECT CALC.WEEK, COUNT(CALC.ID) AS TRANSACTIONS,
    SUM(CALC.SUBTOTAL) AS REVENUE
FROM (
        SELECT
            CASE
                WHEN O.DATE < ? - 21 DAYS THEN 1
                WHEN O.DATE >= ? - 21 DAYS AND O.DATE < ? - 14 DAYS THEN 2
                WHEN O.DATE >= ? - 14 DAYS AND O.DATE < ? - 7 DAYS THEN 3
                WHEN O.DATE >= ? - 7 DAYS THEN 4
            END AS WEEK,
            O.ID, O.SUBTOTAL
        FROM COMPANY.ORDERS AS O INNER JOIN
             COMPANY.LOCATIONS AS L ON O.LOCATION_ID = L.ID INNER JOIN
             COMPANY.STORES AS S ON O.STORE_ID = S.ID
        WHERE O.DATE >= ? - 28 DAYS
            AND O.DATE <= ?
            AND S.NUMBER = ?
    ) AS CALC
GROUP BY CALC.WEEK;

However, when I run that, I get this:

An expression with a datetime value or a labeled duration is not valid.. SQLCODE=-182, SQLSTATE=42816, DRIVER=3.63.108  

And if I change it to wrap the ? parameter with a TIMESTAMP function, thusly:

SELECT CALC.WEEK, COUNT(CALC.ID) AS TRANSACTIONS,
    SUM(CALC.SUBTOTAL) AS REVENUE
FROM (
        SELECT
            CASE
                WHEN O.DATE < TIMESTAMP(?) - 21 DAYS THEN 1
                WHEN O.DATE >= TIMESTAMP(?) - 21 DAYS AND O.DATE < TIMESTAMP(?) - 14 DAYS THEN 2
                WHEN O.DATE >= TIMESTAMP(?) - 14 DAYS AND O.DATE < TIMESTAMP(?) - 7 DAYS THEN 3
                WHEN O.DATE >= TIMESTAMP(?) - 7 DAYS THEN 4
            END AS WEEK,
            O.ID, O.SUBTOTAL
        FROM COMPANY.ORDERS AS O INNER JOIN
             COMPANY.LOCATIONS AS L ON O.LOCATION_ID = L.ID INNER JOIN
             COMPANY.STORES AS S ON O.STORE_ID = S.ID
        WHERE O.DATE >= TIMESTAMP(?) - 28 DAYS
            AND O.DATE <= TIMESTAMP(?)
            AND S.NUMBER = ?
    ) AS CALC
GROUP BY CALC.WEEK

I get this error:

The invocation of routine "TIMESTAMP" is ambiguous. The argument in position "1" does not have a best fit.. SQLCODE=-245, SQLSTATE=428F5, DRIVER=3.63.108  

As a bonus question, short of creating a PL/SQL function, is there a way to use the same parameter for all the timestamp locations instead of supplying the same value 8 times?
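A hedged suggestion for the typing error: DB2 usually cannot infer a type for a bare parameter marker inside a labeled-duration expression, so wrapping each marker in an explicit CAST (rather than the TIMESTAMP() function) tends to resolve both the -182 and -245 errors. The same query with that pattern:

SELECT CALC.WEEK, COUNT(CALC.ID) AS TRANSACTIONS,
    SUM(CALC.SUBTOTAL) AS REVENUE
FROM (
        SELECT
            CASE
                WHEN O.DATE < CAST(? AS TIMESTAMP) - 21 DAYS THEN 1
                WHEN O.DATE >= CAST(? AS TIMESTAMP) - 21 DAYS AND O.DATE < CAST(? AS TIMESTAMP) - 14 DAYS THEN 2
                WHEN O.DATE >= CAST(? AS TIMESTAMP) - 14 DAYS AND O.DATE < CAST(? AS TIMESTAMP) - 7 DAYS THEN 3
                WHEN O.DATE >= CAST(? AS TIMESTAMP) - 7 DAYS THEN 4
            END AS WEEK,
            O.ID, O.SUBTOTAL
        FROM COMPANY.ORDERS AS O INNER JOIN
             COMPANY.LOCATIONS AS L ON O.LOCATION_ID = L.ID INNER JOIN
             COMPANY.STORES AS S ON O.STORE_ID = S.ID
        WHERE O.DATE >= CAST(? AS TIMESTAMP) - 28 DAYS
            AND O.DATE <= CAST(? AS TIMESTAMP)
            AND S.NUMBER = ?
    ) AS CALC
GROUP BY CALC.WEEK;

For the bonus question, one option worth testing is binding the value once in a nested table expression, e.g. CROSS JOIN (VALUES CAST(? AS TIMESTAMP)) AS P(END_TS), and then referring to P.END_TS everywhere instead of repeating the marker.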

Determining the optimal Column Widths from a .csv File

Posted: 05 Mar 2013 08:32 PM PST

I want to import a .CSV file into a MySQL Table. I would like to determine the optimal column widths to use based on the actual data contained in the CSV file. Are there any scripts or utilities to accomplish this?
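A hedged approach that stays inside MySQL: load the file into an oversized staging table first, then let the server report how wide each column really needs to be. The table and column names below are placeholders.

-- 1. Load the CSV into a staging table whose columns are generously sized (e.g. VARCHAR(255) or TEXT).
-- 2. Ask MySQL to propose minimal column definitions from the actual data:
SELECT * FROM staging_import PROCEDURE ANALYSE();

-- 3. Or measure the longest value per column directly:
SELECT MAX(CHAR_LENGTH(col1)) AS col1_max,
       MAX(CHAR_LENGTH(col2)) AS col2_max
FROM   staging_import;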

MySQL GRANT control

Posted: 05 Mar 2013 07:32 PM PST

How can I write a script to automate MySQL database grants in a very easy way?

I have a config file that contains the grant changes I want to make. I want the script to read the config file, and make any necessary changes to the access rights as specified in the config file. Essentially, I want the target rights to match the rights in the config file.

Postgres PgAdmin III Query Builder crash

Posted: 05 Mar 2013 12:07 PM PST

I've been using Postgres and pgAdmin III for some years, but this morning I ran into a very BIG problem. I'm using pgAdmin III 9.0 on Windows XP, on my Dell Latitude E4300 laptop.

As of this morning, the Query Builder ("SQL Editor") option simply stopped working. When I open pgAdmin, everything is fine and working until I click on the SQL Query Builder button. When I do that, nothing appears and pgAdmin crashes without giving any information.

I tried uninstalling and reinstalling pgAdmin III 9.1, but the SQL Query Builder still did not work.

What can I do?

[SQL Server] Really confused newbie - Perhaps a Temp Table query?



Really confused newbie - Perhaps a Temp Table query?

Posted: 05 Mar 2013 12:58 AM PST

Hi guys,

I'll do my best to describe what I'm trying to accomplish and give you the query I have so far (which isn't correct). I have several different tables; in this case, three: ERTutAccounts, ERTutPositions, and ERTutSecMast.

What I'm trying to accomplish is a single result set. Within it I'd ideally like to have:

Portfolio | PortfolioID | # Of Securities | Total Market Value of Portfolio

I've gotten as far as Column 1, Column 2, and Column 4, but am unable to add the 3rd column (counting the # of securities in each portfolio). My query is as follows:

SELECT ERTutAccounts.Portfolio, ERTutPositions.PortfolioID, SUM(ERTutPositions.MarketValue) AS SumOfMV
FROM ERTutAccounts, ERTutPositions
WHERE ERTutPositions.PortfolioID = ERTutAccounts.PortfolioID
GROUP BY ERTutAccounts.Portfolio, ERTutPositions.PortfolioID

This query does the job, but when I add in the securities...

SELECT ERTutAccounts.Portfolio, ERTutPositions.PortfolioID, SUM(ERTutPositions.MarketValue) AS SumOfMV, ERTutSecMast.SecID AS '# of Securities'
FROM ERTutAccounts, ERTutPositions, ERTutSecMast
WHERE ERTutPositions.PortfolioID = ERTutAccounts.PortfolioID AND ERTutPositions.SecID = ERTutSecMast.SecID
GROUP BY ERTutAccounts.Portfolio, ERTutPositions.PortfolioID, ERTutSecMast.SecID

...it lists each SecID individually and repeats the PortfolioID over and over again.

I understand this is confusing, and I have tried to make it as concise as possible. As you can tell, I'm an extreme newbie who has been using SQL for about three days now. Any input is appreciated!
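A hedged sketch of what the missing column could look like: count SecID from the positions table and group only by portfolio (this mirrors the query the same poster arrives at in the follow-up post below):

SELECT A.Portfolio,
       B.PortfolioID,
       COUNT(B.SecID)     AS [# of Securities],
       SUM(B.MarketValue) AS SumOfMV
FROM   ERTutAccounts A
       INNER JOIN ERTutPositions B
               ON A.PortfolioID = B.PortfolioID
GROUP  BY A.Portfolio, B.PortfolioID;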

Performing calculations on subtotals

Posted: 05 Mar 2013 04:47 AM PST

Hello: I'm a new user of SQL Server 2012 Standard. I have a table with information similar to what's found in the attached spreadsheet. I would like to get the total Pay calculated not just by department (e.g. the total for CNA is 70.31), but then also have a total of the subtotals of some departments (but not all). For example, I'd like the total of LPN and RN together (both of which are types of nurses, so I want a 'nurse total') but not CNA.

Is this possible to do through a SQL query, or another way in SSMS? I've read something about calculated members and cubes; is that the direction I have to head in, or can it be handled in SSMS? For the record, I'm using Yellowfin to run reports off of the data in SQL Server 2012 and I can't find a way to create these calculations in YF, so I'm hoping to do it on the server end. I hope this is clear. Thank you in advance.
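A hedged sketch of one query-only way to get both the per-department totals and a combined 'nurse total'; the table and column names (dbo.Payroll, Department, Pay) are placeholders for whatever the real schema uses:

SELECT Department, SUM(Pay) AS TotalPay
FROM   dbo.Payroll
GROUP  BY Department

UNION ALL

SELECT 'Nurse total' AS Department, SUM(Pay) AS TotalPay
FROM   dbo.Payroll
WHERE  Department IN ('LPN', 'RN');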

Creating an SQL Temp Table?

Posted: 05 Mar 2013 05:04 AM PST

Hi guys,

I reached out to everyone earlier today to ask for help, and I was able to take a lot away from that, so thank you. I'm now looking to turn the query that I created into a "temp table". I've searched far and wide on the internet and can find little to no literature on it. Can anyone help me get started with creating a temp table? The previous query was as follows:

SELECT A.Portfolio, B.PortfolioID, SUM(B.MarketValue) AS SumOfMV, COUNT(B.SecID) AS [# of Securities]
FROM ERTutAccounts A
INNER JOIN ERTutPositions B ON A.PortfolioID = B.PortfolioID
GROUP BY A.Portfolio, B.PortfolioID
HAVING COUNT(B.SecID) > 70 OR SUM(B.MarketValue) > 30000000

Before attempting to build the above query into a temp table, I simply attempted a really trivial table, to no avail. Can someone tell me what I'm doing wrong and the right place to begin?

CREATE TABLE #Test (PortfolioID int, Portfolio varchar(200))
INSERT INTO #Test (PortfolioID, Portfolio)
SELECT PortfolioID, Portfolio
FROM ERTutAccounts
WHERE PortfolioID = 1
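A hedged sketch of one direct route: SELECT ... INTO both creates and fills the temporary table in a single statement, using the same query from the post (#PortfolioSummary is just a placeholder name):

SELECT A.Portfolio,
       B.PortfolioID,
       SUM(B.MarketValue) AS SumOfMV,
       COUNT(B.SecID)     AS [# of Securities]
INTO   #PortfolioSummary
FROM   ERTutAccounts A
       INNER JOIN ERTutPositions B
               ON A.PortfolioID = B.PortfolioID
GROUP  BY A.Portfolio, B.PortfolioID
HAVING COUNT(B.SecID) > 70
    OR SUM(B.MarketValue) > 30000000;

SELECT * FROM #PortfolioSummary;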

Move Database From One Server to Other

Posted: 04 Mar 2013 07:32 PM PST

Hi all,

What I am trying to do is copy a database from another server. I am copying the database from the source server and attaching it to my server. Everything works fine, but at the end it shows me an error like the one below:

SqlServer Agent is not currently running so it cannot be notified of this action. (Microsoft SQL Server, Error: 22022)

Could you please let me know the remedy?

Thanks in advance!

View results into string

Posted: 05 Mar 2013 02:15 AM PST

I have a table where I need to take any number of financial distributions and stack the accounts into a string that can be used as an array.

SELECT POPRCTNM, ACTINDX FROM POP30390

This would, for example, return say 3 to 6 rows of data per POPRCTNM, each with different ACTINDX numbers and related values (in the database but not requested). So for example:

RCT0117382 3061
RCT0117382 3061
RCT0117382 46

What I want would be:

RCT0117382 3061 3061 46

I would then be able to look at ITEMS on the receipt and, if they are of a certain class, check whether the expected ACTINDX is in the list used (so I can identify problems from improper human processes).

So, how would I create a table that groups on the PO Receipt Number and has a string field that I can then use Crystal to convert into a searchable array of specific account indexes?
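A hedged sketch of the usual SQL Server 2008 string-aggregation pattern (a correlated subquery with FOR XML PATH), using the table and columns from the post; the result could be fed into a SELECT ... INTO if a physical table is needed for Crystal:

SELECT p.POPRCTNM,
       STUFF((SELECT ' ' + CAST(p2.ACTINDX AS varchar(20))
              FROM   POP30390 p2
              WHERE  p2.POPRCTNM = p.POPRCTNM
              FOR XML PATH('')), 1, 1, '') AS AccountList
FROM   POP30390 p
GROUP  BY p.POPRCTNM;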

Extract only the last three days

Posted: 04 Mar 2013 06:20 PM PST

Hi all, I hope you can help. This is my procedure for exporting the values of a table to a txt file in SQL Server 2008. The table is on a remote server, the field [myDateString] is nvarchar(255), and I am not an admin. I need to extract only the last three days from the table; at the moment it extracts the whole current year. Can you help me? Thank you.

[code]
EXEC master.dbo.sp_configure 'show advanced options', 1
RECONFIGURE
EXEC master.dbo.sp_configure 'xp_cmdshell', 1
RECONFIGURE
EXEC xp_cmdshell 'bcp "SELECT * FROM tbl WHERE YEAR(CAST(SUBSTRING([myDateString], 7, 4) AS DATETIME)) = 2013;" queryout "\\myserver\public\tkt.txt" -T -c -t;'
[/code]

Create Insert data script for views

Posted: 04 Mar 2013 03:23 PM PST

Dear all,

For one table I need to create INSERT statements with data. For this I used SSMS - Tasks - Generate Scripts - Data only, and it creates inserts for all the rows. But I need only a few selected rows. Hence I created a view and tried the same thing, but it does not generate an insert script with data. Am I doing something wrong?

Regards

Trying to make my lookup table unique

Posted: 04 Mar 2013 01:50 PM PST

I am running the following query. What I am trying to do is keep in my reference table only the first 3 unique columns, without losing the other column values. I am not interested in the other columns being grouped, but SQL Server is forcing me to group by them all, with the following error:

Msg 8120, Level 16, State 1, Line 6
Column 'lookuptable.Software_Subcategory' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

So I have had to group by all columns. Any ideas?

drop table lookuptable1

select Software_Name_Raw, Software_Version_Raw, Software_Publisher_Raw, Software_Category, Software_Subcategory, MSDN_Flag, CDL_Flag, Source, Pending_Classification_Flag, Auto_Classification_Flag, Software_Classification_Version, Manual_Deletion, Load_Date
into lookuptable1
from lookuptable
group by Software_Name_Raw, Software_Version_Raw, Software_Publisher_Raw, Software_Category, Software_Subcategory, MSDN_Flag, CDL_Flag, Source, Pending_Classification_Flag, Auto_Classification_Flag, Software_Classification_Version, Manual_Deletion, Load_Date

As you can see from my results below, I only need the first 3 columns for the reference, but I still need all the data; so basically one of the first two rows could be deleted, as the first 3 columns match.

select * from lookuptable1 where software_name_raw = 'Acrobat' order by software_name_raw

ACROBAT,6.X,Adobe Systems, Inc.,Licensable,Edition Unknown,FALSE,FALSE,TAD4D-Matched-DRN,No,Yes,0023,NULL,2012-09-08 16:41:38.000
ACROBAT,6.X,Adobe Systems, Inc.,Licensable,Edition Unknown,FALSE,FALSE,TAD5A-matched,DRN,No,Yes,0023,NULL,2012-09-08 16:41:38.000
ACROBAT,7.x,Adobe Systems, Inc.,Licensable,Edition Unknown,FALSE,FALSE,TAD4D-Matched-DRN,No,Yes,0023,NULL,2012-09-08 16:41:38.000
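A hedged sketch of one way to keep exactly one row per (name, version, publisher) combination while preserving the other columns: number the rows in each group with ROW_NUMBER() and keep only the first, instead of grouping by everything. The ORDER BY inside the window is arbitrary here; adjust it to control which duplicate survives.

;WITH ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Software_Name_Raw, Software_Version_Raw, Software_Publisher_Raw
                              ORDER BY Load_Date DESC) AS rn
    FROM   lookuptable
)
SELECT Software_Name_Raw, Software_Version_Raw, Software_Publisher_Raw, Software_Category,
       Software_Subcategory, MSDN_Flag, CDL_Flag, Source, Pending_Classification_Flag,
       Auto_Classification_Flag, Software_Classification_Version, Manual_Deletion, Load_Date
INTO   lookuptable1
FROM   ranked
WHERE  rn = 1;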

[MS SQL Server] Dr Server Implementation in sql server 2008



DR server implementation in SQL Server 2008

Posted: 04 Mar 2013 06:20 PM PST

Hi, I have more than 100 databases in a single instance, and I now plan to set up a DR site server. Is it possible to cover all the databases on the DR site? How can I do it?

Mirror failover

Posted: 05 Mar 2013 02:03 AM PST

Hi all,

I have an issue with mirror failover. I configured mirroring on 3 databases. If one of the 3 databases fails over, the other two also need to fail over to the mirror server. Does anyone know the T-SQL command? Please post it here. I have a rough idea about a query where the mirroring state is 7 or 8. :-)
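For reference, a hedged sketch of the manual failover command; it must be run on the current principal, once per database (names are placeholders), and the current role/state can be checked from sys.database_mirroring:

-- Run on the principal server for each mirrored database:
ALTER DATABASE [Db1] SET PARTNER FAILOVER;
ALTER DATABASE [Db2] SET PARTNER FAILOVER;
ALTER DATABASE [Db3] SET PARTNER FAILOVER;

-- Inspect the current mirroring role and state:
SELECT DB_NAME(database_id) AS database_name, mirroring_role_desc, mirroring_state_desc
FROM   sys.database_mirroring
WHERE  mirroring_guid IS NOT NULL;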

Database Owners

Posted: 05 Mar 2013 02:03 AM PST

It recently came to my attention that the DB owner on many of our databases is the user who created them, rather than sa or a service account. Assuming it is OK for that person to have full rights on the database, how much of a problem is this? Does anything break if the person leaves the organization and the account is disabled? (I believe Agent jobs owned by disabled Active Directory accounts DO stop working.) What are the pros & cons of having databases owned by sa vs. domain service accounts?
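For reference, a hedged sketch of listing the current owners and switching one database to sa (the database name is a placeholder):

-- List databases and their current owners:
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM   sys.databases;

-- Change the owner of one database to sa:
ALTER AUTHORIZATION ON DATABASE::[SomeDatabase] TO [sa];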

rename SQL server

Posted: 04 Mar 2013 04:48 AM PST

After we changed the computer name, I renamed the SQL Server name too, following MS articles, using sp_dropserver and sp_addserver. But I see there are still some places on the server with the old computer name's footprint. For example, there are some Windows groups (see attached) that still have the old computer name in them. Should I rename the groups? I am afraid that if I rename them to include the new computer name, it will break something in the server or the registry. Or is it safe to rename?

[img]http://www.sqlservercentral.com/Forums/Attachment13286.aspx[/img]

Thanks

Data Issue

Posted: 04 Mar 2013 11:30 PM PST

Hi experts,

In one of our databases, developers found an issue with data. The data is 4F, which is returned correctly when using this value, and the same is returned when given 4F; that is, the symbol is replaced with 'F'. We have to use the N'' (Unicode) prefix when searching with both the symbol and F. Any idea why this happens? The column data type is NVARCHAR.

Thanks in advance

WMI event for mirroring

Posted: 04 Mar 2013 07:16 AM PST

Hi friends,

I have to implement mirroring on 4 databases. How can I write code to fail over all the databases? If one DB fails over, I want it to fail over all the databases. For this I want to create a WMI event for each DB. If anyone knows the script, please reply.

Thank you in advance :-)

[Articles] Data We Don't Want



Data We Don't Want

Posted: 04 Mar 2013 11:00 PM PST

There's potentially an exploit that can download lots of data to a machine. This shouldn't be a concern for servers, but you never know.

[SQL 2012] Failing to deploy MDS package with MDSModelDeploy.exe



Failing to deploy MDS package with MDSModelDeploy.exe

Posted: 25 Nov 2012 08:54 PM PST

Heya guys, I am deploying a .pkg for MDS using MDSModelDeploy.exe. I am doing exactly what MS says, but I get an error. This is my CMD output:

[code]
c:\Program Files\Microsoft SQL Server\110\Master Data Services\Configuration>mdsmodeldeploy.exe listservices
MDS services (Service, Website, Virtual Path):
MDS1, Default Web Site, MDS
MDS2, Default Web Site, MDS1
MDS3, ISPMasterDataService,
MDSModelDeploy operation completed successfully. Elapsed time: 00:00:00.3989524

c:\Program Files\Microsoft SQL Server\110\Master Data Services\Configuration> MDSModelDeploy.exe deploynew -package "C:\Programles\Microsoft SQL Server\110\Master Data Services\Configuration\Finance_data.pkg" -model Finance -service ISPMasterDataService
MDSModelDeploy operation failed. Elapsed time: 00:00:00.0634293
Error:
The DeployNew command failed because either there is a missing - option or the name is not in quotes. All names that contain spas must be enclosed in quotation marks.
[/code]

Any idea why?

Thanks

Migrating from 2008R2 to 2012... any considerations?

Posted: 04 Mar 2013 07:04 PM PST

Hi,

This week we will start migrating from SQL Server 2008 R2 to SQL Server 2012. Are there any known issues to consider? Or is it just a matter of restoring the databases from a backup and continuing?

Thanks,

Can't find options to retain partitions in SSAS Tabular Deployment?

Posted: 04 Mar 2013 11:05 AM PST

I have created a new Tabular cube on 2012 and had it deployed for a while, and I created a few new partitions via SSMS on the server to cover all the data. Now I have made some changes in the project and plan to deploy it to the server, but I can NOT find an option to ignore existing partitions (ones that are not defined in the project itself). I have tried it on a test db, and no matter what I do, the project deploys its own definition and overwrites what's on the server, meaning all the partitions I created after the original deploy are WIPED! I have billions of rows of data, so reprocessing all the missing partitions is really not preferable, and managing the partitions in the project but not in SSMS is also not preferable, as we typically use scripts to add and manage partitions after it goes live.

We have found some blog posts about changing the .deploymentoptions file, but we don't know what value we should change it to (it is not in BOL). We changed it to "RetainPartitions" as a test, but that doesn't work.

Has anyone been through this? Thanks

Error for Reinitialize Subscription in Snapshot Replication

Posted: 04 Mar 2013 06:37 AM PST

Hi all, I have a snapshot replication setup. I added a new article to my publication. I then went to the Replication Monitor, right-clicked, and chose Reinitialize All Subscriptions. I get the error below. But when I manually run the SQL job which creates the snapshot and then run the job for the subscriber, everything runs fine. I don't understand the Reinitialize All Subscriptions error message. I first thought this was a permissions issue, but I have played around with the permissions in the distribution database and nothing has resolved the error.

[i]Replication Monitor could not reinitialize one or more subscriptions that support automatic initialization. Additional information: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) The subscription status of the object could not be changed. Could not update the distribution database subscription table. The subscription status could not be changed. Changed database context to "TestDatabase". (Microsoft SQL Server: Error 14068)[/i]
