Tuesday, March 5, 2013

[how to] How to properly kill MySQL?

How to properly kill MySQL?

Posted: 05 Mar 2013 09:27 PM PST

I have 64-bit CentOS with cPanel installed, and I use

service mysql stop  

However, it just keeps printing ..... for minutes and never finishes; it used to be instant. Any idea why it does that and how to fix it? Right now I have to do killall -9 mysql, but is there a better way?

The server is also very very active.
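For what it's worth, a gentler sequence than killall -9 is usually possible; a minimal sketch (the error-log path and service name vary by distro and cPanel build, so treat them as assumptions):

# ask mysqld to shut down cleanly instead of killing it
mysqladmin -u root -p shutdown

# meanwhile, watch the error log: a very busy server can spend minutes
# flushing dirty buffers and closing tables before it exits
tail -f /var/lib/mysql/`hostname`.err

# if you must force it, send SIGTERM first; SIGKILL (-9) risks table corruption
killall mysqld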

RSExecRole missing

Posted: 05 Mar 2013 09:23 PM PST

I have multiple servers and 4 of them have reporting services installed with the ReportServer & ReportServerTempDB databases.

A few days ago I was setting up security for a new login and found that the RSExecRole role is missing on my dev server (in the ReportServer database). It exists in the ReportServerTempDB, master & msdb databases.

I found a way on MSDN to create it in the master & msdb databases, but that didn't help me create it in ReportServer with all the securables & properties matching the other environments I run.

Has anyone encountered this problem before? Can anyone help me with a script and an explanation about this role?
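Not an authoritative answer, but one way to rebuild the role is to script it from a healthy server and replay it on the broken one; a rough sketch (the grants below are placeholders for whatever the scripted role actually contains, and the service account name is environment-specific):

USE ReportServer;
GO
CREATE ROLE RSExecRole AUTHORIZATION dbo;
GO
-- on a working server, right-click the role in SSMS and use
-- "Script Database Role as > CREATE To" to capture the real permissions;
-- the grant below is only a placeholder for that output
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON SCHEMA::dbo TO RSExecRole;
GO
-- re-add the Reporting Services service account (hypothetical name)
EXEC sp_addrolemember N'RSExecRole', N'DOMAIN\ReportServiceAccount';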

Web based tool to backup PostgreSQL

Posted: 05 Mar 2013 09:14 PM PST

Please suggest a web-based tool to take an hourly backup of a PostgreSQL database.
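If a plain cron job would also be acceptable, something like the following gives hourly dumps (a sketch: it assumes a .pgpass file or trust authentication, and mydb / the backup path are placeholders):

# crontab -e: dump 'mydb' at the top of every hour in custom format
# (% must be escaped inside a crontab line)
0 * * * * pg_dump -U postgres -Fc mydb > /backups/mydb_$(date +\%Y\%m\%d\%H).dump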

Run job in agent based on table data

Posted: 05 Mar 2013 09:19 PM PST

Is there a way to start a SQL Server agent job if a table is not empty?

Basically, first I want to check a table, and if the table is not empty then start the job. If it is empty, then check every 1 hour to see whether it has the required information.

I want this job to run once a day, once there is data in the table.
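One common pattern is a second "watcher" job scheduled hourly whose only step checks the table and starts the real job; a sketch (table and job names are placeholders):

-- single T-SQL step of a watcher job scheduled every hour
IF EXISTS (SELECT 1 FROM dbo.StagingTable)
BEGIN
    EXEC msdb.dbo.sp_start_job @job_name = N'ProcessStagingData';
END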

Error while connecting SQL Server Management Studio 2008

Posted: 05 Mar 2013 09:12 PM PST

I have installed SQL Server 2008 Management Studio. When I connect, it shows an error. Also, when I go to SQL Server Configuration Manager and click on SQL Server Services, this message is shown:

There are no items in list to view

No connections are found in SQL Server Configuration Manager even though my local network is connected.

How can I fix this?
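An empty service list in Configuration Manager is often a broken WMI provider rather than missing services; a commonly suggested repair is recompiling the provider MOF file (the path varies by edition and bitness, so verify it on your machine first):

REM run from an elevated command prompt; adjust the path for your install
mofcomp "C:\Program Files (x86)\Microsoft SQL Server\100\Shared\sqlmgmproviderxpsp2up.mof"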

What does the bit data type map to in a Java Domain Object

Posted: 05 Mar 2013 07:25 PM PST

When you have a column of the bit data type, what class would be used to represent it in a Java POJO that maps the table to an object using javax.persistence annotations?

I tried Boolean, which didn't work and produced the following errors:

org.hibernate.exception.SQLGrammarException: could not insert: [com.DomainClassName]
java.sql.SQLException: Incorrect syntax near '-'.

EDIT: a Java boolean does map to the bit data type in SQL Server. I had a column name with a dash in it that I had to change.

MySQL Fulltext match with forward slashes

Posted: 05 Mar 2013 06:20 PM PST

I'm trying to run a fulltext match against a product database, where the product names have forward slashes in them. These are staple sizes, so it's quite important that they get matched.

However, for every query I execute with a forward slash, I get 0 results.

The query I'm using is:

SELECT `product`.*
FROM `product`
WHERE MATCH (`product`.`name`) AGAINST ('26/6')

I have also tried

SELECT `product`.*
FROM `product`
WHERE MATCH (`product`.`name`) AGAINST ('\"26/6\"')

However, running the following returns 6 results, as expected:

SELECT `product`.*
FROM (`product`)
WHERE `product`.`name` LIKE '%26/6%'

The table is MyISAM and product.name is varchar(255), with a FULLTEXT index on that column alone. If I search for something else using fulltext, like 'tool', I get the results as expected, and the server is configured to match words of 4 characters or more.

What do I need to do to make this fulltext match work?
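A likely explanation: MySQL's full-text parser treats / as a word separator, so '26/6' is tokenized into '26' and '6', both shorter than the default 4-character ft_min_word_len, and therefore never indexed at all. A sketch of two workarounds (lowering ft_min_word_len requires a server restart and an index rebuild):

-- Option 1: fall back to LIKE for short, punctuated terms
SELECT `product`.*
FROM `product`
WHERE `product`.`name` LIKE '%26/6%';

-- Option 2: lower the minimum indexed word length in my.cnf
-- [mysqld]
-- ft_min_word_len = 2
-- then restart mysqld and rebuild the MyISAM index:
REPAIR TABLE `product` QUICK;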

MySQL Table not repairing

Posted: 05 Mar 2013 09:06 PM PST

Table info:

Database name: user_motiva
Table name: wp_options
Files: wp_options.frm, wp_options.MYD, wp_options.MYI, wp_options.TMD

When I do a mysqlcheck -r --all-databases it gets hung on that table, even if I let it sit all day.

Is there another way to fix/repair/recover that table?

Should I use myisamchk? I saw something like:

shell> myisamchk --recover City  

My config, on a box with 16GB of RAM:

cat /etc/my.cnf
[mysqld]
default-storage-engine=MyISAM
local-infile=0
symbolic-links=0
skip-networking
max_connections = 500
max_user_connections = 20
key_buffer = 512M
myisam_sort_buffer_size = 64M
join_buffer_size = 64M
read_buffer_size = 12M
sort_buffer_size = 12M
read_rnd_buffer_size = 12M
table_cache = 2048
thread_cache_size = 16K
wait_timeout = 30
connect_timeout = 15
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
max_connect_errors = 10
query_cache_limit = 1M
query_cache_size = 64M
query_cache_type = 1
low_priority_updates=1
concurrent_insert=ALWAYS
log-error=/var/log/mysql/error.log
tmpdir=/home/mysqltmp
myisam_repair_threads=4

[mysqld_safe]
open_files_limit = 8192
log-error=/var/log/mysql/error.log

[mysqldump]
quick
max_allowed_packet = 512M

[myisamchk]
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M

And could this have happened because of a crashed table from doing killall -9 mysqld when it would not shut down and restart?

EDIT:

root@server [/var/lib/mysql/user_motiva]# myisamchk -e *.MYI
Checking MyISAM file: wp_options.MYI
Data records:    1827   Deleted blocks:       3
myisamchk: warning: 3 clients are using or haven't closed the table properly
- check file-size
- check record delete-chain
- check key delete-chain
- check index reference
- check data record references index: 1
- check data record references index: 2
- check records and index references
MyISAM-table 'wp_options.MYI' is usable but should be fixed

root@server [/var/lib/mysql/user_motiva]# myisamchk --safe-recover wp_options.MYI
- recovering (with keycache) MyISAM-table 'wp_options.MYI'
Data records: 1827
myisamchk: error: Can't create new tempfile: 'wp_options.TMD'
MyISAM-table 'wp_options.MYI' is not fixed because of errors
Try fixing it by using the --safe-recover (-o), the --force (-f) option or by not using the --quick (-q) flag

root@ns2 [/var/lib/mysql/user_motiva]# myisamchk -o -f wp_options.MYI
- recovering (with keycache) MyISAM-table 'wp_options.MYI'
Data records: 1827

Does this mean that it is now fixed? If so, how do I move it back? (This was done on a different server.) Is there a way to bring MySQL down on the main server and run a command to fix all the files?
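A note on what the output shows: the "Can't create new tempfile: 'wp_options.TMD'" error usually means a stale .TMD file was left behind by an earlier interrupted repair, and the -f flag you then used overwrote it; the final run reporting "Data records: 1827" with no errors suggests the repair completed. On the main server, the usual whole-directory procedure is something like this sketch (paths assume the default datadir):

# stop mysqld first so no clients hold the tables open
service mysql stop

# remove any leftover repair temp file, then repair every MyISAM index file
rm /var/lib/mysql/user_motiva/wp_options.TMD
myisamchk --recover /var/lib/mysql/user_motiva/*.MYI

# files must still belong to the mysql user afterwards
chown -R mysql:mysql /var/lib/mysql/user_motiva
service mysql start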

How do I get more details on a full text catalog error?

Posted: 05 Mar 2013 11:55 AM PST

I have an error that is repeatedly generated on a SQL Server 2005 instance running the databases for MS Team Foundation Server.

 'Severity 16' occurred on INSTANCENAME  

Looking in the SQL Server logs gives me

 Errors were encountered during full-text index population for table or indexed view '[TfsWorkItemTracking].[dbo].[WorkItemLongTexts]', database 'TfsWorkItemTracking' (table or indexed view ID '1810105489', database ID '14'). Please see full-text crawl logs for details.  

And the actual full-text crawl log (\MSSQL\LOG\SQLFT0001300005.LOG) gives me:

Error '0x800b0100' occurred during full-text index population for table or indexed view
The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.3\MSSQL\Binn\MSFTE.DLL'

If I rebuild the full-text catalog like so...

ALTER FULLTEXT CATALOG [TeamFoundationServer10FullTextCatalog] REBUILD

The error will stop happening for a while, but it will start up again days later. I have searched for hours and cannot figure out what is causing this. Any suggestions?
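Not a root-cause fix for the MSFTE.DLL error, but while diagnosing it can help to poll the catalog's crawl status after a rebuild, so you know when the population has actually finished; a small sketch:

ALTER FULLTEXT CATALOG [TeamFoundationServer10FullTextCatalog] REBUILD;

-- 0 means idle; any other value means a crawl is still in progress
SELECT FULLTEXTCATALOGPROPERTY('TeamFoundationServer10FullTextCatalog',
                               'PopulateStatus') AS PopulateStatus;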

Optimizing an audit table for inserts and deletes

Posted: 05 Mar 2013 12:15 PM PST

I'm trying to identify the best way to design a table for audit purposes. Basically I log the last several events for many users in one table. There's a limit on the number of records per user: as new records come in, the oldest go.

Something like this:

CREATE TABLE Audit
(
  UserId INT NOT NULL,
  EventId INT NOT NULL,
  CreationDate DATETIME NOT NULL,
  UpdateDate DATETIME NULL,
  -- Other data fields
)

The question is what to do with indexes. I'm thinking of creating the clustered index on (UserId, EventId). But since user activity happens independently it would mean inserts in the middle of the table and deletes in the middle of the table. Probably not good.

Another thought is to add an artificial AuditId field just to have new records get increasing numbers. Like this:

CREATE TABLE Audit
(
  Id INT, -- Becomes the clustered index
  -- The same as above
)

This way new audit entries will be appended to the end but deletes will still happen in the middle of the table. It's probably better than the first option but I'm not sure.

This table will be used frequently: basically every user activity is logged (1 insert) and the oldest activity is deleted in the same transaction (1 delete). This needs to be fast; ideally I want it to be instantaneous and not noticeable performance-wise.

I also need to be able to retrieve the set of records for a particular user quickly. It can probably be covered by a non-clustered index.

I'm asking for advice in designing this table for optimal performance.
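For reference, a minimal sketch of the second option above: an ever-increasing surrogate clustered key plus a nonclustered index to cover per-user lookups and to locate each user's oldest row (all names are illustrative):

CREATE TABLE dbo.Audit
(
    AuditId      INT IDENTITY(1,1) NOT NULL,
    UserId       INT NOT NULL,
    EventId      INT NOT NULL,
    CreationDate DATETIME NOT NULL,
    UpdateDate   DATETIME NULL,
    CONSTRAINT PK_Audit PRIMARY KEY CLUSTERED (AuditId)
);

-- covers "all records for a user" and "oldest record for a user"
CREATE NONCLUSTERED INDEX IX_Audit_UserId ON dbo.Audit (UserId, AuditId);

-- trim step of the insert+trim transaction (@UserId supplied by the caller,
-- after checking the per-user cap)
DELETE A
FROM dbo.Audit AS A
WHERE A.AuditId = (SELECT MIN(AuditId) FROM dbo.Audit WHERE UserId = @UserId);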

EDIT: I think I have missed something important to mention.

What I'm trying to track is not an instant in time but rather a period in time. There are several places in the system where I need this. Consider that the user is doing an activity of some sort that may span some period of time. If certain conditions are met, then an existing activity is reused (refreshed, updated). I only wish to delete older, effectively abandoned activities. For example, within 2 weeks one user may have issued some 50 activities, while another user may take over a year to produce that many. That's why I don't want a single log ordered across all users together.

It is also not clear how I should cluster by datetime (as suggested). Do I cluster on the initial creation event or on the update event?

Error 1044 Access denied to user

Posted: 05 Mar 2013 01:50 PM PST

This is driving me crazy.

When I try to create a few tables from my Workbench model I get this error 1044.

I've been trying to find a solution but nothing works for me.

Curiously, when I run SELECT USER(), CURRENT_USER(); I get two versions of the same user: one is techboy@(an IP address) and the other is techboy@%.

Access is denied to both at one point or another.

The MySQL server is remotely hosted, with the user permissions correctly set.
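When debugging a 1044, it usually helps to compare what each of those two account variants is actually allowed to do; a sketch (target_db is a placeholder for the schema you are creating tables in):

-- run as the connected user: which account did the server match you to?
SELECT USER(), CURRENT_USER();

-- run as an admin: compare the grants of both variants
SHOW GRANTS FOR 'techboy'@'%';
-- SHOW GRANTS FOR 'techboy'@'the.actual.ip';  -- if such an account exists

-- one possible fix, if the matched account lacks rights on the target schema
GRANT ALL PRIVILEGES ON target_db.* TO 'techboy'@'%';
FLUSH PRIVILEGES;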

How do I understand this deadlock?

Posted: 05 Mar 2013 11:43 AM PST

I have two threads doing inserts into a table at the same time, causing a deadlock. The interaction with the table is in a new transaction, so I'm pretty confident nothing else is going on here.

What is the issue?

The following is the table (the primary key is the only index):

CREATE TABLE [dbo].[ImageCache](
    [ImageStoreKey] [nvarchar](255) NOT NULL,
    [ImageData] [varbinary](max) NULL,
    [LastModified] [datetime] NULL,
    [StoredInRemote] [bit] NOT NULL,
 CONSTRAINT [PK_ImageCache] PRIMARY KEY CLUSTERED
(
    [ImageStoreKey] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
    IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = OFF,
    ALLOW_PAGE_LOCKS = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

I have a deadlock that looks like two statements doing this at the same time on the same table and deadlocking each other (I'm taking a sample here from a different execution, because the deadlock XML doesn't have the full declaration of the prepared statement):

declare @p1 int
set @p1=2218
exec sp_prepexec @p1 output,
    N'@P0 varbinary(max),@P1 datetime2,@P2 bit,@P3 nvarchar(4000)',
    N'insert into ImageCache (imageData, lastModified, storedInRemote, imageStoreKey)
      values (@P0, @P1, @P2, @P3)',
    [binary data],'2013-03-05 10:44:53.6050000',0,N'257-27c440c1980070224a79'
select @p1

Multiple threads inserting into this table in the same way cause the following deadlock:

<deadlock-list>
 <deadlock victim="processf1d828">
  <process-list>
   <process id="processc3eb08" taskpriority="0" logused="0"
     waitresource="OBJECT: 7:1282220464:0 " waittime="218" ownerId="5521931466"
     transactionname="implicit_transaction" lasttranstarted="2013-03-04T15:54:48.543"
     XDES="0x7498c0700" lockMode="X" schedulerid="1" kpid="7288" status="suspended"
     spid="145" sbid="0" ecid="0" priority="0" transcount="2"
     lastbatchstarted="2013-03-04T15:54:48.543"
     lastbatchcompleted="2013-03-04T15:54:48.497"
     clientapp="Microsoft JDBC Driver for SQL Server" hostname="newappserver"
     hostpid="0" loginname="User" isolationlevel="read committed (2)"
     xactid="5521931466" currentdb="7" lockTimeout="4294967295"
     clientoption1="671088672" clientoption2="128058">
    <executionStack>
     <frame procname="adhoc" line="1" stmtstart="120"
       sqlhandle="0x02000000732b8e307aef74c20d8606c2b827936fe195eee9">
      insert into ImageCache (imageData, lastModified, storedInRemote, imageStoreKey)
      values (@P0, @P1, @P2, @P3)
     </frame>
     <frame procname="unknown" line="1"
       sqlhandle="0x000000000000000000000000000000000000000000000000">
      unknown
     </frame>
    </executionStack>
    <inputbuf>
     Select 1
    </inputbuf>
   </process>
   <process id="processf1d828" taskpriority="0" logused="0"
     waitresource="OBJECT: 7:1282220464:0 " waittime="218" ownerId="5522008674"
     transactionname="implicit_transaction" lasttranstarted="2013-03-04T15:54:54.843"
     XDES="0x66e46ca90" lockMode="X" schedulerid="6" kpid="11456" status="suspended"
     spid="316" sbid="0" ecid="0" priority="0" transcount="2"
     lastbatchstarted="2013-03-04T15:54:54.843"
     lastbatchcompleted="2013-03-04T15:54:54.843"
     clientapp="Microsoft JDBC Driver for SQL Server" hostname="newappserver"
     hostpid="0" loginname="User" isolationlevel="read committed (2)"
     xactid="5522008674" currentdb="7" lockTimeout="4294967295"
     clientoption1="671088672" clientoption2="128058">
    <executionStack>
     <frame procname="adhoc" line="1" stmtstart="120"
       sqlhandle="0x02000000732b8e307aef74c20d8606c2b827936fe195eee9">
      insert into ImageCache (imageData, lastModified, storedInRemote, imageStoreKey)
      values (@P0, @P1, @P2, @P3)
     </frame>
     <frame procname="unknown" line="1"
       sqlhandle="0x000000000000000000000000000000000000000000000000">
      unknown
     </frame>
    </executionStack>
    <inputbuf>
     Select 1
    </inputbuf>
   </process>
  </process-list>
  <resource-list>
   <objectlock lockPartition="0" objid="1282220464" subresource="FULL"
     dbid="7" objectname="Database.dbo.ImageCache" id="locke394cf80"
     mode="IX" associatedObjectId="1282220464">
    <owner-list>
     <owner id="processc3eb08" mode="IX"/>
     <owner id="processf1d828" mode="IX"/>
    </owner-list>
    <waiter-list>
     <waiter id="processf1d828" mode="X" requestType="convert"/>
     <waiter id="processc3eb08" mode="X" requestType="convert"/>
    </waiter-list>
   </objectlock>
  </resource-list>
 </deadlock>
</deadlock-list>
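One detail worth noticing in the table definition above: ALLOW_ROW_LOCKS and ALLOW_PAGE_LOCKS are both OFF, which forces every insert to take object-level locks. The graph shows exactly that pattern: both sessions hold IX on the table and both wait to convert to X, a classic lock-conversion deadlock. A sketch of the likely fix, re-enabling finer-grained locking on the clustered index:

ALTER INDEX PK_ImageCache ON dbo.ImageCache
SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);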

Slow Queries Not Logging

Posted: 05 Mar 2013 01:37 PM PST

I am attempting to enable slow query logging on our server in order to identify any queries that could use optimization. Sounds simple enough; however, my file is not being written to. I get no errors or anything of the like; it just doesn't seem to be logging the slow queries. I have remembered to restart MySQL after my config changes.

I am using MySQL Ver 5.1.61. Here is what I have in my.cnf:

slow-query-log=1
slow-query-log-file=/var/logs/my.slow.log
long_query_time=1

The file /var/logs/my.slow.log has mysql as the owner; also, in the interest of debugging, I gave read/write to all on the log file.

I have long_query_time set to 1 above because I just want to see if it is working. I have tried setting it lower (e.g. 0.3) but I'm still not getting anything logged. I know that the queries my app runs take longer than 1 second, and I've also run intentionally long queries (SELECT sleep(10);) in the terminal for testing, and the log is still empty.

I've looked through the docs, from what I can see this should be working. Anyone have any suggestions as to what I am doing wrong? Any advice would be appreciated, thanks much!
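While debugging config-file issues like this, the same settings can be applied at runtime, which separates "wrong config file" from "wrong settings"; a sketch:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/logs/my.slow.log';
SET GLOBAL long_query_time = 1;
-- note: the new long_query_time applies to new sessions,
-- so reconnect before testing with SELECT SLEEP(2);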

Edit: As asked in comments I ran the query:

SELECT variable_value
FROM information_schema.global_variables
WHERE variable_name IN ('slow_query_log','slow_query_log_file','long_query_time');

The result:

10.0000000
/var/run/mysqld/mysqld-slow.log
OFF

Obviously my configuration changes are not being taken into account, as I believe these are the defaults. I am quite certain the my.cnf file I am altering is being parsed, because if I put in an invalid value MySQL errors on restart. What could be going on here?

Another Edit:

After taking @RolandoMySQLDBA's advice and moving my slow query config lines under [mysqld], my settings are now being picked up. The result of the above variable_value query is now:

1.0000000
/var/logs/my.slow.log
ON

However, I am still not seeing the file my.slow.log being written to. I don't think it is a permissions issue, as the file is owned by mysql and I've added all permissions for all users on the file. Can anyone think of a reason why this would not be working?

Edit: Solved! The path to the slow query log was incorrect: it should have been /var/log/my.slow.log instead of /var/logs/my.slow.log. Thanks to all for the help; I've learned a lot!

Sleep Processes are Running Longer Than Allowed

Posted: 05 Mar 2013 04:02 PM PST

From time to time my MySQL server runs into a "too many connections" error. I believe part of the problem is the large number of sleeping processes, some of which have been running for over 1000 seconds.

When looking into the issue, I saw this and investigated. I checked my timeout values with

show variables like "%timeout";  

And both interactive_timeout and wait_timeout are set to 600.

How/why would these sleep processes run so long if they're supposed to be killed at 600 seconds?
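One thing worth ruling out: wait_timeout is read into each session at connect time, so connections opened before the value was lowered (or sessions that set their own value) keep the old timeout until they reconnect. Comparing scopes makes this visible; a sketch:

-- what new sessions will get vs. what this session actually has
SHOW GLOBAL VARIABLES LIKE '%timeout';
SHOW SESSION VARIABLES LIKE '%timeout';

-- inspect the long sleepers and which user/host they belong to
SHOW FULL PROCESSLIST;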

Multiple database servers for performance vs failover

Posted: 05 Mar 2013 11:47 AM PST

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two DB servers, will I be able to have both DB servers active with synced data, with web1 talking to db1 and web2 to db2? Is this active/active?

I'm also aware that the two DB servers can each have their own schema to manually 'split' the DB needs of the app. In that case daily backups would be fine; we don't have mission-critical data.

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Join to a @Table Variable is running inefficiently

Posted: 05 Mar 2013 11:47 AM PST

Alright, so I have a report sproc that was running incredibly slowly. Customers were complaining that the report would not run, so I started investigating exactly where in the sproc the problem was, and I found this portion taking up 99.8% of the time.

DECLARE @xmlTemp TABLE (
  CompanyID  INT,
  StoreID    INT,
  StartDate  DATETIME,
  DateStaID  INT,
  EndDate    DATETIME,
  DateEndID  INT,
  LastUpdate DATETIME)

INSERT INTO @xmlTemp
VALUES      (50, 2, '3/3/2013', 0, '3/3/2013', 0, '3/3/2013')

SELECT DISTINCT T.CompanyID,
                CompanyName,
                Z.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Total - Isnull((SELECT Sum(DISTINCT PaymentAmount)
                                FROM   vPullDrawerPayments
                                WHERE  CompanyID = T.CompanyID
                                       AND StoreID = T.StoreID
                                       AND TransactionID = T.TransactionID
                                       AND Isnull(PaymentType, 1) <> 1), 0) AS PaymentAmount,
                'Cash' AS PaymentDesc,
                CASE
                  WHEN Z.EndDate >= Z.LastUpdate THEN 1
                  ELSE 0
                END AS MissingData
FROM   vPullDrawerPayments AS T
       INNER JOIN @xmlTemp AS Z
         ON T.CompanyID = Z.CompanyID
            AND T.StoreID = Z.StoreID
WHERE  BusinessDate BETWEEN Z.StartDate AND Z.EndDate
UNION ALL
SELECT DISTINCT NC.CompanyID,
                CompanyName,
                Z.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                PaymentAmount,
                PaymentDesc,
                CASE
                  WHEN Z.EndDate >= Z.LastUpdate THEN 1
                  ELSE 0
                END AS MissingData
FROM   vPullDrawerPayments AS NC
       INNER JOIN @xmlTemp AS Z
         ON NC.CompanyID = Z.CompanyID
            AND NC.StoreID = Z.StoreID
WHERE  BusinessDate BETWEEN Z.StartDate AND Z.EndDate
       AND Isnull(PaymentType, 1) <> 1
UNION ALL
SELECT DISTINCT C.CompanyID,
                CompanyName,
                Z.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Sum(Abs(LineAmount)) AS PaymentAmount,
                'Coupons' AS PaymentDesc,
                CASE
                  WHEN Max(Z.EndDate) >= Max(Z.LastUpdate) THEN 1
                  ELSE 0
                END AS MissingData
FROM   vPullDrawerPayments AS C
       INNER JOIN @xmlTemp AS Z
         ON C.CompanyID = Z.CompanyID
            AND C.StoreID = Z.StoreID
WHERE  BusinessDate BETWEEN Z.StartDate AND Z.EndDate
GROUP  BY C.CompanyID,
          CompanyName,
          Z.StoreID,
          StoreName,
          CashedOutBy,
          TransactionID,
          RegisterID,
          BusinessDate,
          CashedOut

The @xmlTemp portion of this query is normally used to take parameters from our web application and turn them into parameters the report can actually use. For the sake of testing, I am just inserting values that run this for one store for one day. Running this portion can take upwards of 20 minutes.

So I ran this query through Plan Explorer and saw it was pulling all of the data from two of my fact tables instead of filtering out just that store and that day, as seen in the picture below.

[Query plan screenshot]

Obviously this is bad. So the next step I took was to cut out the join to @xmlTemp and manually put the values into the query's WHERE clause to see how well that worked.

SELECT DISTINCT T.CompanyID,
                CompanyName,
                T.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Total - Isnull((SELECT Sum(DISTINCT PaymentAmount)
                                FROM   vPullDrawerPayments
                                WHERE  CompanyID = T.CompanyID
                                       AND StoreID = T.StoreID
                                       AND TransactionID = T.TransactionID
                                       AND Isnull(PaymentType, 1) <> 1), 0) AS PaymentAmount,
                'Cash' AS PaymentDesc
                --CASE WHEN Z.'3/3/2013' >= Z.LastUpdate THEN 1 ELSE 0 END AS MissingData
FROM   vPullDrawerPayments AS T
WHERE  CompanyID = 50
       AND StoreID = 1
       AND BusinessDate BETWEEN '3/3/2013' AND '3/3/2013'
UNION ALL
SELECT DISTINCT NC.CompanyID,
                CompanyName,
                NC.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                PaymentAmount,
                PaymentDesc
                --CASE WHEN Z.'3/3/2013' >= Z.LastUpdate THEN 1 ELSE 0 END AS MissingData
FROM   vPullDrawerPayments AS NC
WHERE  CompanyID = 50
       AND StoreID = 1
       AND BusinessDate BETWEEN '3/3/2013' AND '3/3/2013'
       AND Isnull(PaymentType, 1) <> 1
UNION ALL
SELECT DISTINCT C.CompanyID,
                CompanyName,
                C.StoreID,
                StoreName,
                CashedOutBy,
                TransactionID,
                RegisterID,
                BusinessDate,
                CashedOut,
                Sum(Abs(LineAmount)) AS PaymentAmount,
                'Coupons' AS PaymentDesc
                --CASE WHEN MAX(Z.'3/3/2013') >= MAX(Z.LastUpdate) THEN 1 ELSE 0 END AS MissingData
FROM   vPullDrawerPayments AS C
WHERE  CompanyID = 50
       AND StoreID = 1
       AND BusinessDate BETWEEN '3/3/2013' AND '3/3/2013'
GROUP  BY C.CompanyID,
          CompanyName,
          C.StoreID,
          StoreName,
          CashedOutBy,
          TransactionID,
          RegisterID,
          BusinessDate,
          CashedOut

By changing this to a simple WHERE clause, it runs in 4 seconds instead of 20 minutes and up. The query plan also shows the correct filtering now. Is there any reason I should be seeing this behavior?
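A plausible cause, offered as a hypothesis: table variables carry no statistics, so the optimizer assumes @xmlTemp holds one row and can choose a plan that scans the fact tables behind the view. Two common mitigations, sketched:

-- 1) Recompile so the optimizer sees the actual row count in @xmlTemp
SELECT T.CompanyID, T.StoreID, T.TransactionID
FROM   vPullDrawerPayments AS T
       INNER JOIN @xmlTemp AS Z
         ON T.CompanyID = Z.CompanyID AND T.StoreID = Z.StoreID
WHERE  T.BusinessDate BETWEEN Z.StartDate AND Z.EndDate
OPTION (RECOMPILE);

-- 2) Or switch to a temp table, which does have statistics
CREATE TABLE #xmlTemp
(
    CompanyID INT, StoreID INT,
    StartDate DATETIME, EndDate DATETIME, LastUpdate DATETIME
);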

Edit: here is the full link to the query plan.

How can I configure SQL Server 2012 to use multiple NICs?

Posted: 05 Mar 2013 09:16 PM PST

How can I configure SQL Server 2012 to use multiple NICs?

I have a server with 4 NICs. I want to use 2 NICs to query my databases and 2 NICs to retrieve results from them.

I want them to work in load-balancing mode and still continue to work if one NIC is down.

Is this possible?

Two dichotomous views of a table, and a means to express this as a stored procedure?

Posted: 05 Mar 2013 07:58 PM PST

There is a table, events. Some relatively complex logic determines, for each event, whether it is archived. Then I need two more views: all archived events and all non-archived events.

How do I express this? Should I create a view, a procedure, or a function that tells for every event ID whether it is archived (in order to use it in the two views mentioned above)?

MySQL.
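A sketch of the view-on-view approach in MySQL, with the archival rule kept in exactly one place (the condition and the closed_at column are placeholders for the real logic):

-- single home for the archival rule
CREATE VIEW events_with_status AS
SELECT e.*,
       (CASE WHEN e.closed_at < NOW() - INTERVAL 30 DAY  -- placeholder logic
             THEN 1 ELSE 0 END) AS is_archived
FROM events AS e;

CREATE VIEW archived_events AS
SELECT * FROM events_with_status WHERE is_archived = 1;

CREATE VIEW live_events AS
SELECT * FROM events_with_status WHERE is_archived = 0;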

How to restore mysql dump file?

Posted: 05 Mar 2013 09:47 PM PST

We have created a dump file (.sql) using the mysqldump command in MySQL 5.5. Now we want to restore it. We are using the command:

mysql -u[username] -p[pwd] [dbname] < [.sql file]

But we are not able to restore it. We are using Windows 7 SP1 32-bit.
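For reference, the full round trip on Windows, assuming mysql.exe is on the PATH and using placeholder names (the target database must exist before the restore):

mysqladmin -u root -p create mydb
mysql -u root -p mydb < C:\dumps\mydb.sql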

How to avoid deadlock while updating 2 rows in MySQL

Posted: 05 Mar 2013 07:45 PM PST

This is an interview problem:

There is a perl program that updates the database, and it could run in different processes.

One process may execute a transaction like:

update row A -> update row B -> commit  

The other process may execute a transaction like:

update row B -> update row A -> commit  

The rows that need to be updated are selected in the program before the transaction.

I was asked how to avoid deadlocks without changing the transaction logic (I cannot commit after updating A and commit again after updating B).

They want me to propose at least 3 different methods. What I know is to use SELECT ... FOR UPDATE when selecting rows A and B. Can anyone suggest some other methods?
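Two other commonly cited methods, sketched: lock the rows in one consistent global order regardless of which order the caller names them, or catch the deadlock error (1213 in MySQL) in the Perl program and retry the whole transaction. The ordering method looks roughly like this (table and column names are placeholders):

-- both processes lock in ascending key order first, so neither
-- can hold B while waiting for A
SELECT id FROM t WHERE id IN (@idA, @idB) ORDER BY id FOR UPDATE;
UPDATE t SET col = col + 1 WHERE id = @idA;
UPDATE t SET col = col + 1 WHERE id = @idB;
COMMIT;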

Restore SQL Server database using Windows Powershell 3.0

Posted: 05 Mar 2013 06:32 PM PST

I'm trying to restore a SQL Server database with a PowerShell script, but I'm having problems.

Here is the error I'm getting:

Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server 'WUSFK250042-OLU\SQLSERVER2008R2'. "
At line:48 char:1
+ $smoRestore.SqlRestore($server)

Here is my code:

#clear screen
cls

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
#Need SmoExtended for backup
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

$backupFile = "C:\SafewayRT\SafewayRTFUll.bak"

#we will query the database name from the backup header later
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "WUSFK250042-OLU\SQLSERVER2008R2"
$backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backupFile, "File")
$smoRestore = new-object("Microsoft.SqlServer.Management.Smo.Restore")

#restore settings
$smoRestore.NoRecovery = $false;
$smoRestore.ReplaceDatabase = $true;
$smoRestore.Action = "Database"
$smoRestorePercentCompleteNotification = 10;
$smoRestore.Devices.Add($backupDevice)

#get database name from backup file
$smoRestoreDetails = $smoRestore.ReadFileList($server)

#display database name
"Database Name from Backup Header : " + $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#give a new database name
$smoRestore.Database = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#specify new data and log files (mdf and ldf)
$smoRestoreFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$smoRestoreLog = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")

#the logical file names should be the logical filename stored in the backup media
$smoRestoreFile.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]
$smoRestoreFile.PhysicalFileName = $server.Information.MasterDBPath + "\" + $smoRestore.Database + "_Data.mdf"
$smoRestoreLog.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"] + "_Log"
$smoRestoreLog.PhysicalFileName = $server.Information.MasterDBLogPath + "\" + $smoRestore.Database + "_Log.ldf"
$smoRestore.RelocateFiles.Add($smoRestoreFile)
$smoRestore.RelocateFiles.Add($smoRestoreLog)

#restore database
$smoRestore.SqlRestore($server)
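The SMO exception shown above is only the outer wrapper; the actual reason (file in use, access denied, wrong logical name, and so on) usually lives in the inner exception chain. A small diagnostic sketch:

try {
    $smoRestore.SqlRestore($server)
}
catch {
    # walk to the root cause of the SMO failure
    $_.Exception.GetBaseException().Message
}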

How can I set a default session sql_mode for a given user?

Posted: 05 Mar 2013 02:32 PM PST

I want to have a default session sql_mode that is different from the global sql_mode.

I can set it manually, but I'd like to do it automatically.

I thought I could add it to .my.cnf, but that doesn't work. I tried adding it under the [mysql] group in .my.cnf, and it does not throw an error.

Yet when I connect, my session's sql_mode still inherits the global sql_mode.

I'm using MySQL 5.5.16.
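One server-side approach, assuming the affected users do not have the SUPER privilege (init_connect is skipped for SUPER users): run a SET SESSION statement for every new connection. A sketch for my.cnf, with a placeholder mode:

[mysqld]
init_connect='SET SESSION sql_mode="STRICT_ALL_TABLES"'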

Which RDBMS has the most advanced "table-valued" functions?

Posted: 05 Mar 2013 04:32 PM PST

Every database management system focuses on specific features and applications. I am using MySQL, probably because it is popular and satisfies common needs. However, I came across a problem which needs advanced string functions (probably at the level of a scripting language), posted as a question with a solution in MySQL, though not a straightforward one (How to split a string to and get info from another table in mysql).

Thus, I thought there should be other RDBMSes with functions suitable for such tasks. Can you suggest alternative RDBMSes for splitting a cell (string) and connecting the sub-cells to other tables? It is not even necessary to build a virtual table with a JOIN; just fetching the appropriate values from another table would do.

MySQL generic trigger to save identification data for later accessing of the changed row

Posted: 05 Mar 2013 04:32 PM PST

I am pretty inexperienced with this.

I need a generic trigger able to create and save, in a fixed table, some sort of identification data for a changed row from any table. The identification data should be used later to SELECT the changed row in the given table.

Can this be done without previously knowing the table structure?

The only idea I had, which is way too inefficient in my opinion and also requires previous knowledge of the column names, is to save a hash:

MD5(concat(NEW.field1, NEW.field2, NEW.field3, ...))  

then

SELECT * FROM changed_table
WHERE hash = MD5(concat(field1, field2, field3, ...))

to identify the changed row in the table which triggered the change.

I would greatly appreciate any help or suggestions!
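For what it's worth, MySQL triggers are per-table and their bodies cannot inspect arbitrary tables generically, but the same trigger text can be stamped out for each table if every table exposes a primary key. A sketch under that assumption (an integer id column; all names illustrative):

CREATE TABLE change_log (
  table_name VARCHAR(64) NOT NULL,
  row_id     INT         NOT NULL,
  changed_at DATETIME    NOT NULL
);

DELIMITER $$
CREATE TRIGGER some_table_au AFTER UPDATE ON some_table
FOR EACH ROW
BEGIN
  -- the (table_name, row_id) pair identifies the changed row directly,
  -- with no hashing of column values required
  INSERT INTO change_log (table_name, row_id, changed_at)
  VALUES ('some_table', NEW.id, NOW());
END$$
DELIMITER ;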

How do I subtract a number of days from a prepared statement parameter in DB2?

Posted: 05 Mar 2013 01:46 PM PST

I have a C# application using OLE DB to connect to a DB2 database. Basically, I want to do this:

SELECT CALC.WEEK, COUNT(CALC.ID) AS TRANSACTIONS,
    SUM(CALC.SUBTOTAL) AS REVENUE
FROM (
        SELECT
            CASE
                WHEN O.DATE < ? - 21 DAYS THEN 1
                WHEN O.DATE >= ? - 21 DAYS AND O.DATE < ? - 14 DAYS THEN 2
                WHEN O.DATE >= ? - 14 DAYS AND O.DATE < ? - 7 DAYS THEN 3
                WHEN O.DATE >= ? - 7 DAYS THEN 4
            END AS WEEK,
            O.ID, O.SUBTOTAL
        FROM COMPANY.ORDERS AS O INNER JOIN
             COMPANY.LOCATIONS AS L ON O.LOCATION_ID = L.ID INNER JOIN
             COMPANY.STORES AS S ON O.STORE_ID = S.ID
        WHERE O.DATE >= ? - 28 DAYS
            AND O.DATE <= ?
            AND S.NUMBER = ?
   ) AS CALC
GROUP BY CALC.WEEK;

However, when I run that, I get this:

An expression with a datetime value or a labeled duration is not valid.. SQLCODE=-182, SQLSTATE=42816, DRIVER=3.63.108  

And if I change it to wrap each ? parameter with the TIMESTAMP function, thusly:

SELECT CALC.WEEK, COUNT(CALC.ID) AS TRANSACTIONS,
    SUM(CALC.SUBTOTAL) AS REVENUE
FROM (
        SELECT
            CASE
                WHEN O.DATE < TIMESTAMP(?) - 21 DAYS THEN 1
                WHEN O.DATE >= TIMESTAMP(?) - 21 DAYS AND O.DATE < TIMESTAMP(?) - 14 DAYS THEN 2
                WHEN O.DATE >= TIMESTAMP(?) - 14 DAYS AND O.DATE < TIMESTAMP(?) - 7 DAYS THEN 3
                WHEN O.DATE >= TIMESTAMP(?) - 7 DAYS THEN 4
            END AS WEEK,
            O.ID, O.SUBTOTAL
        FROM COMPANY.ORDERS AS O INNER JOIN
             COMPANY.LOCATIONS AS L ON O.LOCATION_ID = L.ID INNER JOIN
             COMPANY.STORES AS S ON O.STORE_ID = S.ID
        WHERE O.DATE >= TIMESTAMP(?) - 28 DAYS
            AND O.DATE <= TIMESTAMP(?)
            AND S.NUMBER = ?
   ) AS CALC
GROUP BY CALC.WEEK

I get this error:

The invocation of routine "TIMESTAMP" is ambiguous. The argument in position "1" does not have a best fit.. SQLCODE=-245, SQLSTATE=428F5, DRIVER=3.63.108  

As a bonus question, short of creating a PL/SQL function, is there a way to use the same parameter for all the timestamp locations instead of supplying the same value 8 times?
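Untyped parameter markers are the usual culprit for both errors: DB2 cannot apply a labeled duration to a marker of unknown type (-182), and TIMESTAMP(?) has no "best fit" overload (-245). Giving each marker an explicit type with CAST resolves both, and a one-row table expression can expose a single bound value to every branch; a sketch of the idea (untested, assumes a DB2 LUW version that accepts VALUES in FROM):

SELECT O.ID, O.SUBTOTAL,
       CASE
           WHEN O.DATE < P.REF_DATE - 21 DAYS THEN 1
           WHEN O.DATE < P.REF_DATE - 14 DAYS THEN 2
           WHEN O.DATE < P.REF_DATE -  7 DAYS THEN 3
           ELSE 4
       END AS WEEK
FROM (VALUES (CAST(? AS TIMESTAMP))) AS P (REF_DATE),
     COMPANY.ORDERS AS O
WHERE O.DATE >= P.REF_DATE - 28 DAYS
  AND O.DATE <= P.REF_DATE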

Determining the optimal Column Widths from a .csv File

Posted: 05 Mar 2013 08:32 PM PST

I want to import a .csv file into a MySQL table. I would like to determine the optimal column widths to use, based on the actual data contained in the CSV file. Are there any scripts or utilities to accomplish this?
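If a one-liner is acceptable, awk can report the longest value seen in each column (a naive sketch: it breaks on quoted fields containing commas, so a real CSV parser is safer for messy data):

awk -F',' '{ for (i = 1; i <= NF; i++)
               if (length($i) > w[i]) w[i] = length($i) }
           END { for (i = 1; i <= NF; i++)
                   print "column", i, "max width:", w[i] }' data.csv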

MySQL GRANT control

Posted: 05 Mar 2013 07:32 PM PST

How can I write a script to automate MySQL database grants in a very easy way?

I have a config file that contains the grant changes I want to make. I want the script to read the config file, and make any necessary changes to the access rights as specified in the config file. Essentially, I want the target rights to match the rights in the config file.
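Not a complete answer, but Percona Toolkit's pt-show-grants dumps the current rights in a canonical, diff-able form, which makes the reconcile step of such a script much easier; a sketch (file names are placeholders):

# snapshot current grants, diff against the desired state from the config,
# then apply the difference with generated GRANT/REVOKE statements
pt-show-grants --host=localhost --user=root --ask-pass > current_grants.sql
diff current_grants.sql desired_grants.sql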

Postgres PgAdmin III Query Builder crash

Posted: 05 Mar 2013 12:07 PM PST

I've been using Postgres and pgAdmin III for some years, but this morning I ran into a very big problem. I'm using pgAdmin III 9.0 on Windows XP, on my Dell Latitude E4300 laptop.

As of this morning, the Query Builder ("SQL Editor") option has simply stopped working. When I open pgAdmin, everything is fine and working until I click on the SQL Query Builder button. When I do, nothing appears and pgAdmin crashes without giving any information.

I tried uninstalling and reinstalling pgAdmin III 9.1, but the SQL Query Builder still did not work.

What can I do?
