Sunday, May 19, 2013

[how to] MySQL binary log failed to generate a new one

Posted: 19 May 2013 08:55 PM PDT

I have set up MySQL on a standalone Linux machine.

In my.cnf, I have the following configuration:

[mysqld]
log-bin=mysql-bin
binlog_format=mixed
expire_logs_days = 10
max_binlog_size = 100M

I found that mysql-bin.000001 is created.

However, MySQL does not create a new binary log (e.g. mysql-bin.000002) after I restart the mysql service or run the following command to take a full backup.

mysqldump --user [username] --password=[password] --single-transaction --flush-logs --master-data=2 [Database Name] > /tmp/backupFull.sql

I also have a Windows server with MySQL, and there I do see a new binary log (e.g. mysql-bin.000002) after restarting the mysql service with the same config mentioned above.

Is there anything I missed in the configuration?

Thanks

Database running out of space

Posted: 19 May 2013 07:22 PM PDT

My database has 16MB of space left.

I used to just truncate (shrink) the database files as I was taught, but I found these links that advise against shrinking:

http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/

http://blog.sqlauthority.com/2011/01/19/sql-server-shrinking-database-is-bad-increases-fragmentation-reduces-performance/

Is there anything else I can do to reduce the size of my database other than deleting table records? I am new to the DBA forum and probably should have looked around for other questions before posting, but I am desperate: I am worried about my database going down.

How do you create a composite foreign key when key being referenced is also a foreign key

Posted: 19 May 2013 08:38 PM PDT

If you have table a (column x), which is 1-to-many with table b (columns x, y), which is 1-to-many with table c (columns y, z), it's pretty straightforward to define the foreign keys.

If, for performance reasons, table c were altered to be (columns x, y, z), is there a way to specify that table c (columns x, y) has a foreign key relationship to table b (columns x, y)?

Previously table c contained two single-column foreign keys: x --> table a and y --> table b. While this is correct, it produces erroneous results when commercial tools are used to create database diagrams.
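For illustration, here is a minimal sketch of the composite-key approach in Python with sqlite3 (an invented single-letter schema matching the description, not any real system): table c gets one composite foreign key (x, y) referencing table b's composite primary key, which also makes the diagramming relationship explicit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE a (x INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE b (
    x INTEGER REFERENCES a(x),
    y INTEGER,
    PRIMARY KEY (x, y))""")

# c carries x redundantly for performance; a composite FK (x, y) -> b(x, y)
# guarantees c's x always agrees with the x that b pairs with y.
conn.execute("""CREATE TABLE c (
    x INTEGER,
    y INTEGER,
    z INTEGER,
    FOREIGN KEY (x, y) REFERENCES b(x, y))""")

conn.execute("INSERT INTO a VALUES (1)")
conn.execute("INSERT INTO b VALUES (1, 10)")
conn.execute("INSERT INTO c VALUES (1, 10, 100)")  # consistent pair: accepted

rejected = False
try:
    conn.execute("INSERT INTO c VALUES (2, 10, 101)")  # (2, 10) is not in b
except sqlite3.IntegrityError:
    rejected = True
print("inconsistent row rejected:", rejected)
```

The same composite FOREIGN KEY (x, y) REFERENCES b (x, y) syntax works in most SQL dialects, provided b has a primary key or unique constraint on (x, y).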

format of mysql query log

Posted: 19 May 2013 07:10 PM PDT

What is the format of the mysql query log? In particular, for lines like:

133 Query     commit
133 Query     rollback

What does '133' stand for? And is it true that each line represents a round-trip communication to the database (i.e., statements are not batched)?

Match two large tables finding similar data [closed]

Posted: 19 May 2013 06:36 PM PDT

I have two tables in MySQL. Table 1 contains a lot of data, but Table 2 contains far more.

Here's the code I implemented using Python:

import MySQLdb

db = MySQLdb.connect(host="localhost", user="root", passwd="", db="fak")
cursor = db.cursor()

# Execute SQL statement:
cursor.execute("""SELECT invention_title FROM auip_wipo_sample
    INNER JOIN us_pat_2005_to_2012
    ON auip_wipo_sample.invention_title = us_pat_2005_to_2012.invention_title""")

# Get the result set as a tuple:
result = cursor.fetchall()

# Iterate through results and print:
for record in result:
    print record

print "Finish."

# Finish dealing with the database and close it
db.commit()
db.close()

But it doesn't work. It says:

OperationalError: (1052, "Column 'invention_title' in field list is ambiguous")
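The error means both joined tables have an invention_title column, so the bare column name in the SELECT list must be qualified with a table name (or alias). A minimal reproduction and fix, using sqlite3 so it is self-contained (same table names as the question, invented data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auip_wipo_sample (invention_title TEXT)")
conn.execute("CREATE TABLE us_pat_2005_to_2012 (invention_title TEXT)")
conn.execute("INSERT INTO auip_wipo_sample VALUES ('widget')")
conn.execute("INSERT INTO us_pat_2005_to_2012 VALUES ('widget')")

# The bare column name fails: both joined tables have invention_title.
ambiguous = False
try:
    conn.execute("""SELECT invention_title
                    FROM auip_wipo_sample
                    INNER JOIN us_pat_2005_to_2012
                    ON auip_wipo_sample.invention_title
                       = us_pat_2005_to_2012.invention_title""")
except sqlite3.OperationalError:
    ambiguous = True

# Qualifying the column with its table resolves the ambiguity.
rows = conn.execute("""SELECT auip_wipo_sample.invention_title
                       FROM auip_wipo_sample
                       INNER JOIN us_pat_2005_to_2012
                       ON auip_wipo_sample.invention_title
                          = us_pat_2005_to_2012.invention_title""").fetchall()
print(ambiguous, rows)
```

The same fix applies verbatim in MySQL: SELECT auip_wipo_sample.invention_title (or an alias) instead of the bare column.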

Missing or lost transaction and no errors to show what went wrong

Posted: 19 May 2013 06:18 PM PDT

I have 2 missing records in SQL Server and I am trying to identify the problem that caused this issue.

New records are inserted using a stored procedure, which will return an error code to the application if something goes wrong.

The application, SQL Server, and system logs do not show any errors during the time when these 2 transactions should have been inserted.

How do I go about identifying the issue here? My current idea is to find a way to read the Transaction Log (from a backup restore) to see if the transactions are even in the log and if they got rolled back.

Can anyone recommend anything else I can do to troubleshoot this? Are there DMVs or other (SQL Server hidden) logs?

I am worried about this issue and what SQL Server did in this situation. (I have experienced timeout issues recently, but since a recent reboot that problem has been resolved...)

Directory lookup failed during log shipping

Posted: 19 May 2013 06:21 PM PDT

I have configured log shipping for all our SharePoint 2010 databases. All worked well since Friday, and today (Monday) I ran this query:

SELECT *
FROM [msdb].[dbo].[sysjobhistory]
WHERE [message] like '%Operating system error%'

On the secondary server, and got the error below. Basically it is SharePoint's WebAnalyticsServiceApplication_ReportingDB which creates an extra database weekly, and it seems this latest copy could not be found. There are two things I'm not sure of:

Why does this database, when viewed/backed up on the primary server, show as one db, but when copied/restored to another server it shows up with its weekly breakdown?

My default SQL installation and data folder is on the H: drive; why is the .Net SqlClient Data Provider looking on the C: drive for this one newly created SharePoint file?

2013-05-13 11:45:57.91 * Error: Could not apply log backup file 'H:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\WebAnalyticsServiceApplication_ReportingDB##..._20130513061518.trn' to secondary database 'WebAnalyticsServiceApplication_ReportingDB##...'. (Microsoft.SqlServer.Management.LogShipping)
2013-05-13 11:45:57.91 Error: Directory lookup for the file "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\WebAnalyticsServiceApplication_ReportingDB##...Aggregation20130519.ndf" failed with the operating system error 3 (The system cannot find the path specified.).
File 'WebAnalyticsServiceApplication_ReportingDB##...Aggregation20130519' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\WebAnalyticsServiceApplication_ReportingDB##...Aggregation20130519.ndf'.
Use WITH MOVE to identify a valid location for the file.
Problems were identified while planning for the RESTORE statement. Previous messages provide details.
RESTORE LOG is terminating abnormally. (.Net SqlClient Data Provider) *

Other than this one error, my log shipping works well. Any help?

How to do division from select statements in sql server?

Posted: 19 May 2013 06:22 PM PDT

I am trying to create a select statement that returns a table of one column. Basically, I am looping through all the records of NewHire, and then using each id to select rows from another table. I then do a count to get the number of rows, divide to get a percentage value, and insert that into this temp table t; then I select from t. But this isn't working; I am getting syntax errors.

Does anyone know how to fix this?

declare @t table (Percentage int)
DECLARE @acc INT
SET @acc = 1
DECLARE @max INT
select @max = max(HireID) from NewHire
WHILE (@acc <= @max)
BEGIN
  IF (@acc in (select HireID from NewHire))
  BEGIN
    insert into @t
      (select COUNT(*) from Hire_Response WHERE HireID = @acc) /
      (select COUNT(*) from Hire_Response WHERE HireID = @acc
        AND (HireResponse = 0 OR HireResponse = 1))
  END
  set @acc = @acc + 1
END
select * from @t
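A set-based query can replace the whole loop. Two caveats the loop shares: integer division truncates (multiply by 100.0 first), and a zero denominator needs NULLIF. Note the question's code divides the total count by the 0/1 count; the sketch below reports 0/1 responses as a percentage of all responses, so swap the operands if the original ratio was intended. Illustrated in Python with sqlite3, using the question's table names but invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Hire_Response (HireID INTEGER, HireResponse INTEGER)")
conn.executemany("INSERT INTO Hire_Response VALUES (?, ?)",
                 [(1, 0), (1, 1), (1, 2), (1, 2),  # 2 of 4 responses are 0/1
                  (2, 1), (2, 1)])                 # 2 of 2 responses are 0/1

# One grouped query replaces the WHILE loop.  Multiplying by 100.0 first
# forces float arithmetic (bare COUNT/COUNT is integer division in T-SQL);
# NULLIF guards the denominator (needed if you keep the question's ratio,
# whose denominator can be zero).
rows = conn.execute("""
    SELECT HireID,
           100.0 * SUM(CASE WHEN HireResponse IN (0, 1) THEN 1 ELSE 0 END)
                 / NULLIF(COUNT(*), 0) AS Percentage
    FROM Hire_Response
    GROUP BY HireID
    ORDER BY HireID
""").fetchall()
print(rows)
```

The identical SELECT (minus the sqlite wrapper) runs on SQL Server; the original syntax error comes from INSERT INTO @t being followed by a bare arithmetic expression rather than a SELECT or VALUES clause.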

Real-time merge of two databases on different servers

Posted: 19 May 2013 07:35 PM PDT

One of the requirements our customer has given is to merge two databases running on two different machines in real time. This means that we have to build some sync policy (or use SQL Server tools to manage that). But I have some doubts:

  • How can I be sure that my databases are fully merged? I mean, we are in a real-time environment with n web and local requests per second; it's impossible to propagate table updates/merges at the same speed.

  • Is this a good architecture compared with a master/slave (single-DB) topology? I'm pretty sure syncing the two databases with each other will cost too much, since our tables store a lot of data in terms of rows and information.

  • Can anyone provide a good solution to build the best architecture possible?

Is it possible to backup a database that is not ONLINE?

Posted: 19 May 2013 07:38 PM PDT

We have a job which backs up all ONLINE databases nightly, but for some reason it didn't back up msdb. The issue is that at that time msdb was in some status other than 0 (ONLINE). So my question is: is it possible to back up a database if its status is not ONLINE (i.e., RESTORING, RECOVERING, RECOVERY_PENDING, SUSPECT, or EMERGENCY)?

The multi-part identifier "xxx" could not be bound

Posted: 19 May 2013 07:39 PM PDT

The following is a HAVING clause from a small part of a very large stored procedure which I inherited from some other (external) devs:

HAVING (SELECT COUNT(*) FROM
(
    SELECT *
    FROM dbo.ContractDailyRoomAllocation da2
    WHERE da2.ContractId = DA.ContractId
    AND da2.RoomTypeId = DA.RoomTypeId
    AND da2.Date >= @FromDate AND da2.Date < @ToDate
    AND da2.IsSold = 0
    AND da2.ReleaseDay <= DATEDIFF("d", @TodayDate, da2.Date)) T) = @StayDates
AND ( (@AllowOnRequestBookings = 'False'
    OR dbo.GetAvailablePackageRoomCount(da2.ContractId, da2.RoomTypeId, @FromDate, @ToDate) >= 0) )
AND ( (@AllowOnRequestBookings = 'True'
    OR dbo.GetAvailablePackageRoomCount(da2.ContractId, da2.RoomTypeId, @FromDate, @ToDate) > 0) )

The last two ANDs give me the following error messages:

Msg 4104, Level 16, State 1, Procedure GetAvailableHotelRooms, Line 176
The multi-part identifier "da2.ContractId" could not be bound.
Msg 4104, Level 16, State 1, Procedure GetAvailableHotelRooms, Line 176
The multi-part identifier "da2.RoomTypeId" could not be bound.
Msg 4104, Level 16, State 1, Procedure GetAvailableHotelRooms, Line 177
The multi-part identifier "da2.ContractId" could not be bound.
Msg 4104, Level 16, State 1, Procedure GetAvailableHotelRooms, Line 177
The multi-part identifier "da2.RoomTypeId" could not be bound.

Any ideas?
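The error is a scoping issue: da2 is declared inside the derived table, so the last two ANDs, which sit in the outer query, cannot see it; referencing the outer alias DA there (or moving those predicates inside the subquery) is the usual fix. A minimal illustration of subquery alias scope, sketched in Python with sqlite3 (table and alias names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, v INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 10)")

# "inner_t" is only visible inside the derived table, just as da2 is only
# visible inside its subquery in the question's HAVING clause.
out_of_scope = False
try:
    conn.execute("""SELECT * FROM (SELECT id FROM t AS inner_t) AS d
                    WHERE inner_t.v = 10""")
except sqlite3.OperationalError:
    out_of_scope = True

# Referencing an alias that is in scope works fine.
rows = conn.execute("""SELECT * FROM (SELECT id, v FROM t) AS d
                       WHERE d.v = 10""").fetchall()
print(out_of_scope, rows)
```

SQL Server reports the same condition as "The multi-part identifier ... could not be bound" rather than "no such column", but the cause is identical.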

Auto-generate scripts to create tables, indexes, etc

Posted: 19 May 2013 07:43 PM PDT

In SSMS, we can right-click a database/table/index/... and select the "Script As" option to generate a drop-and-create script.

Is there a way to automate this script generation and drop the scripts into a temp location, rather than manually right-clicking each object and generating them one by one?
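One common route is the SSMS Generate Scripts wizard (Tasks > Generate Scripts), and for full automation the SMO Scripter class can be driven from PowerShell. As a hedged, language-neutral sketch of the idea, here is Python with sqlite3, whose catalog (sqlite_master) stores each object's DDL much as SMO can emit DDL per object; the object names are invented:

```python
import os
import sqlite3
import tempfile

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE INDEX ix_orders_total ON orders(total)")

# Pull each object's CREATE statement out of the catalog and write one
# script file per object into a temp directory.
outdir = tempfile.mkdtemp()
written = []
for name, sql in conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE sql IS NOT NULL"):
    path = os.path.join(outdir, name + ".sql")
    with open(path, "w") as f:
        f.write(sql + ";\n")
    written.append(path)
print(written)
```

On SQL Server the equivalent loop would walk Database.Tables (and so on) via SMO and call Scripter.Script on each object, with ScriptDrops set for the drop half of the script.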

Trying to write a function that doesn't require dbo. prefix

Posted: 19 May 2013 07:45 PM PDT

I am trying to write a simple system function so that the dbo. prefix is NOT required to call the function. I found an example of this for SQL Server 2000, but when I run

EXEC SP_CONFIGURE 'ALLOW UPDATES', 1
RECONFIGURE WITH OVERRIDE
go

create function system_function_schema.trim
    (@input varchar(max))
    returns varchar(max)
as
begin
    declare @result varchar(max)

    select @result = rtrim(ltrim(@input))

    return @result
end
go

EXEC SP_CONFIGURE 'ALLOW UPDATES', 0
RECONFIGURE WITH OVERRIDE

This is the error thrown.

The specified schema name "system_function_schema" either does not exist or you do not have permission to use it. Severity 16

The expected usage would be

select trim('   padded value   ')  

Does anyone know the approach I should be taking, or do I HAVE to create it as a regular user-defined function that must be called with the dbo. prefix, like:

select dbo.trim('   padded value   ')  

Force sql server to run query conditions as written?

Posted: 19 May 2013 07:47 PM PDT

I'm using SQL Server 2008 R2 and I have this pseudo-query (SP):

select ...
from ...
WHERE @LinkMode IS NULL
    AND (myColumn IN (...very long-running query...))
    ...
    ...

The problem is that the query still takes a very long time to execute, even if I execute the SP with @LinkMode = 2.

As you can see, the long-running query should be executed only if @LinkMode is null (which is not the case here; in my case @LinkMode = 2!).

However, if I change it to:

select ...
from ...
WHERE 1 = 2
    AND (myColumn IN (...very long-running query...))
    ...
    ...

The SP runs fast.

I've heard before that the optimizer can sometimes reorder the criteria.

So I ask :

  • Even if the optimizer chooses a different route, what can be faster than checking for null? I mean, checking whether a value is null should be much faster than running the other long query...

  • How can I force SQL Server to run the query as I've written it (in the same order)?
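SQL Server makes no guarantee about the order in which WHERE predicates are evaluated, so there is no supported way to force it; the usual fixes are an IF @LinkMode IS NULL ... ELSE ... branch inside the procedure, or OPTION (RECOMPILE) so the plan is compiled for the actual parameter value. A minimal sketch of the branching idea, written client-side in Python with sqlite3 (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (myColumn INTEGER)")
conn.executemany("INSERT INTO items VALUES (?)", [(1,), (2,), (3,)])

def fetch(link_mode):
    # Branching up front guarantees the expensive subquery is never even
    # sent unless link_mode is None; nothing depends on the engine's
    # predicate evaluation order.
    if link_mode is None:
        sql = """SELECT * FROM items
                 WHERE myColumn IN (SELECT myColumn FROM items)"""  # stand-in for the slow query
    else:
        sql = "SELECT * FROM items"
    return conn.execute(sql).fetchall()

cheap = fetch(2)        # fast branch, no subquery involved
expensive = fetch(None)  # only this call runs the slow subquery
print(len(cheap), len(expensive))
```

Inside the stored procedure the same split becomes two separate SELECT statements under IF/ELSE, each of which also gets its own, better-fitting query plan.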

Why does this 120 GB insert grow the log by over 780 GB?

Posted: 19 May 2013 07:49 PM PDT

A table [tbl_old] has about 3 billion rows and uses about 120 GB of disk space. I created [tbl_new] with the same schema (actually, I just scripted out [tbl_old], renamed the table to [tbl_new], and ran the script).

But when running this SQL script:

insert [tbl_new]
    select * from [tbl_old]

the transaction log file grew to about 780 GB and the insert hung.

My questions are:

  • Is it reasonable that, for an insert of about 120 GB of data, more than 780 GB of transaction log is not enough?

  • What is in the log file?

Background:

  1. the DB is in simple recovery mode, and there are no indexes on [tbl_old] or [tbl_new]
  2. SQL Server 2008 R2 Enterprise on Windows Server 2008 64-bit.
  3. Intel Xeon CPU E5645 @ 2.4 GHz (2 processors), 24 CPUs, 64 GB memory
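One likely factor: even in simple recovery, a single INSERT ... SELECT is one transaction, so the log must hold every row plus page-allocation records until commit, which can far exceed the table's data size. Copying in keyed batches lets the log truncate between commits. A minimal sketch of the batching pattern in Python with sqlite3 (schema invented; in T-SQL this would be a WHILE loop over key ranges):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_old (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE tbl_new (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO tbl_old VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1000)])
conn.commit()

# Copy in keyed batches and commit each one.  Each batch is its own
# transaction, so the "active" log portion stays one batch in size
# instead of holding the entire copy.
BATCH = 100
max_id = conn.execute("SELECT MAX(id) FROM tbl_old").fetchone()[0]
for lo in range(0, max_id + 1, BATCH):
    conn.execute("""INSERT INTO tbl_new
                    SELECT * FROM tbl_old WHERE id >= ? AND id < ?""",
                 (lo, lo + BATCH))
    conn.commit()

copied = conn.execute("SELECT COUNT(*) FROM tbl_new").fetchone()[0]
print(copied)
```

In SQL Server, minimally logged alternatives (e.g. INSERT ... WITH (TABLOCK) under simple recovery on a heap) are also worth investigating for a copy this large.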

Disabled trigger still fired

Posted: 19 May 2013 07:50 PM PDT

I've just run the following T-SQL to disable a trigger (with a deploy-script wrapper around osql):

disable trigger dbo.trExampleTableInsert ON dbo.ExampleTable  

In SSMS I could see the trigger was disabled (downward pointing red arrow). When I inserted some rows into the table, the trigger still fired (I have an audit table that logs when the trigger fires).

I enabled and disabled the trigger this time using SSMS UI. When I inserted rows the trigger didn't fire.

What does the UI do differently from the T-SQL command? Did I need to issue a commit?

Indexes file damaged, was on ramdisk

Posted: 19 May 2013 07:55 PM PDT

I have put some of my indexes into a filegroup that contains one file, and that file is on a ramdisk. The performance is great, but the file was deleted incorrectly (the file contains only index information). I tried to delete that file from the database properties, but an error occurred:

Alter failed for Database 'xxxxxx'. (Microsoft.SqlServer.Smo)

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

The operating system returned error 21(failed to retrieve text for this error. Reason: 15105) to SQL Server during a write at offset 0x0000002c668000 in file 'R:\DBIndexes.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online. (Microsoft SQL Server, Error: 823)

I have no problem rebuilding the indexes, but I need to fix the database!

I deleted all of the indexes stored in that file and tried to delete the filegroup, but the same error occurred.


*UPDATE*

When I detached the database, I couldn't re-attach it. It now says: Unable to open the physical file "R:\DBIndexes.mdf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".

It was working fine before I detached it; the only problem was deleting that orphan file!

Performance issues moving 1 million rows between tables

Posted: 19 May 2013 07:58 PM PDT

I want to move 1 million rows from table1 to table2 every hour. I have written a job with a query like this:

delete from table1 OUTPUT deleted.* into table2

I am executing this job every hour, but it takes a long time.

Another problem is that my log file size is continuously increasing. How can I fix this?
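Moving a million rows in one statement makes one huge transaction, which is slow and is exactly what grows the log. A common alternative is to move the rows in small batches, committing each one (in T-SQL, DELETE TOP (N) FROM table1 OUTPUT deleted.* INTO table2 inside a WHILE loop). A minimal sketch of the batch-move pattern in Python with sqlite3 (invented schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("CREATE TABLE table2 (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(i, "x") for i in range(500)])
conn.commit()

# Move rows in small batches: copy a chunk, delete the same chunk, commit.
# Each iteration is a short transaction, so the log never has to hold the
# whole million-row move at once.
BATCH = 100
while True:
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM table1 ORDER BY id LIMIT ?", (BATCH,))]
    if not ids:
        break
    marks = ",".join("?" * len(ids))
    conn.execute("INSERT INTO table2 SELECT * FROM table1 WHERE id IN (%s)" % marks, ids)
    conn.execute("DELETE FROM table1 WHERE id IN (%s)" % marks, ids)
    conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0]
moved = conn.execute("SELECT COUNT(*) FROM table2").fetchone()[0]
print(remaining, moved)
```

Short transactions also let a log in simple recovery truncate between batches, which addresses the continuously growing log file.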

Speed up large clustered index creation?

Posted: 19 May 2013 08:02 PM PDT

I have a large table: the row count is more than 3 billion, and the data space for the table is about 120 GB.

Hardware: Intel Xeon CPU E5645 @ 2.4 GHz (2 processors), 24 CPUs, 64 GB memory, 64-bit Windows Server 2008 R2 Enterprise.

I run

create unique clustered index MyTable_IXC on tblFactFoo(barKey) on [PRIMARY]  

But it took more than 6 hours (actually, it reported a duplicate-key error after 6 hours).

While it was running, CPU usage was less than 10% and disk IO was less than 20 MB/s (normally about 15 MB/s). I wonder how to improve the performance of creating a clustered index with such powerful hardware.

Visualiser for spatial query results

Posted: 19 May 2013 08:03 PM PDT

The question "What's the easiest way to get SQL 08 spatial data onto a map?" (and many other articles) recommends Craig Dunn's Geoquery, but all the download links appear to be dead.

SQL Server Management Studio's spatial results tab will show a single dataset, and this MSDN article, Spatial Data Support In SQL Server 2008, mentions a workaround using UNION ALL to display point data overlaid on a map:

SELECT geog, name FROM Mondial.dbo.city WHERE geog IS NOT NULL
UNION ALL
SELECT geog, cntry_name FROM SpatialSamples.dbo.cntry00

Whenever I try something similar, I only ever get a single result set displayed.

Is there something built into either SQL Server 2008 or 2012 to do this?

Can we Configure Change Data Capture in another Database?

Posted: 19 May 2013 08:06 PM PDT

I want to apply Change Data Capture (CDC) to database DB1's tables and log everything in DB2.

So, is it possible to capture DB1's table changes into DB2?

Recover data from accidental update

Posted: 19 May 2013 08:08 PM PDT

I ran an incorrect update and then restored the DB from the previous day's backup. I did not take a copy of the corrupted data before restoring the backup. Is there any way to get back the data changed between the creation of the backup file and the restore?

Thanks for saving my job

Running a Job from a Stored Procedure in another server?

Posted: 19 May 2013 08:10 PM PDT

How can I run a job from another server using a stored procedure?

Let's say

  • server 1 : db1
  • server 2 : db2
  • username : testssis
  • password : testssispass
  • sqljob found in db2: job1

Here is the part I'm lost with. How can I make this work?

xp_cmdshell + osql command -U -P + msdb.dbo.sp_start_job

INSERT INTO #temp taking 1000 times longer than SELECT INTO #temp

Posted: 19 May 2013 08:11 PM PDT

I am shredding an XML blob that is being passed to me and loading a temp table for further processing. One thing I noticed while working through the process is that the INSERT INTO takes about 20 minutes while a SELECT INTO takes seconds. The DB statistics are up to date and we run daily maintenance plans, so I'm not sure what is going on. I know that SELECT INTO is minimally logged, which would make it faster, but I was not expecting INSERT INTO to be so much worse.

EDIT: Based on comments adding some more details

Why would the INSERT INTO take so much longer than the SELECT INTO for a temp table?

Query example below. I am shredding XML and inserting into a temp table; I have both the INSERT INTO and the SELECT INTO commented out. The INSERT INTO for just 100 records shows 382,000+ logical reads while the SELECT INTO shows none.

;with cteScen AS (
    SELECT  DENSE_RANK() OVER ( ORDER BY scenario_node ) AS scenario ,
            RANK() OVER ( PARTITION BY scenario_node ORDER BY scen_val ) AS level_order ,
            scen_val.value('text-value[1]', 'varchar(100)') AS 'scen_value' ,
            scen_val.value('(distribution/@id)[1]', 'int') AS 'dist'
    FROM    @scenarioXML.nodes('/scenarios/scenario') AS scenario ( scenario_node )
            CROSS APPLY scenario.scenario_node.nodes('./values/value') AS x ( scen_val )
)
--INSERT INTO #tmpTbl(Scenario, Level_Order, Scenario_Value, Distribution_ID)
Select top 100 *
--INTO #tmpTbl
From cteScen

Any thoughts?

Error while taking backup using SSMS

Posted: 19 May 2013 08:16 PM PDT

I am using SSMS. I have about 5 logins. A particular login has all server roles other than sysadmin. It is a SQL login (not Windows authentication). In user roles it has all permissions for almost all databases; for some databases it has only datareader.

Now when I try to take a backup of a database using SSMS, and try to select the backup destination by clicking the browse button, I get the following error:

"C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Backup

Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists.

If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box."

Even though I have db_owner and all permissions, I get this error. But if I type the path manually in the file name field, it allows the backup without any error.

So why does this happen? I want this user to be able to take backups without these error messages (but I can't give the user sysadmin). How can I solve this?

I have already seen articles suggesting a SQL query with sqlcmd to take the backup, but I want it to work using SSMS.

Key lookup partition pruning

Posted: 19 May 2013 08:19 PM PDT

I have a query inner-joining multiple tables, all of which are partitioned and have 10 partitions each:

SELECT A.COL1, B.COL2, C.COL3
FROM A
INNER JOIN B ON A.ID = B.ID
INNER JOIN C ON A.ID = C.ID
WHERE COL20 < 10000  ---- COL20 IS NOT THE PARTITION KEY COLUMN

In the actual query execution plan, for one of the tables, there is a non-clustered index scan with a key lookup.

When I look at the properties for Key lookup in the actual execution plan, it looks like it's getting partition pruned.

I'm confused as to why that would happen; does this have a negative impact on the system? I understand that key lookups themselves are bad, but why does it show that only 7 partitions were accessed? The properties say:

Key Lookup (Clustered)
----------------------------------
Actual number of rows:      215805
Actual Partition Count:     7
Actual Partitions Accessed: 3..9

Dictate edition during unattended install?

Posted: 19 May 2013 08:45 PM PDT

I am creating an automated SQL Server installation that runs from a batch file. Is there any way (changing the registry, or some other method) to decide which SQL Server edition (either Enterprise or Standard) will be installed, or are the two editions still separate builds?

If they are separate builds, I guess I could get around it by having a "master" batch file call an appropriate "child" batch file, but this would mean having a folder containing both sets of binaries, which I would rather avoid.

Deleted rows but database size is still large

Posted: 19 May 2013 08:46 PM PDT

I am using SQL Server 2008 Express, which has a maximum database size limit of 10 GB. Every month I hit the limit and the application can't store more data.

To overcome this problem we run a job every month which backs up the last 30 days of data and then deletes that data from the respective tables. Even after the delete, the size remains high.

Things I have considered: shrinking the database, truncating the tables, cleaning up tables, shrinking the database and rebuilding the indexes, and dropping and recreating the tables. Shrinking the database every time is not good, so I have kept it as a last option.

Apart from this is there anything else that I can try?
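Deleting rows frees pages inside the data file but does not return space to the operating system; in SQL Server only DBCC SHRINKFILE does that (ideally followed by index rebuilds, given the fragmentation caveats). SQLite shows the same effect compactly: the file keeps its size after DELETE until VACUUM rewrites it. A small Python illustration:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (blob TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("x" * 1000,) for _ in range(2000)])
size_full = os.path.getsize(path)

conn.execute("DELETE FROM t")   # frees pages inside the file...
size_after_delete = os.path.getsize(path)

conn.execute("VACUUM")          # ...but only VACUUM returns space to the OS
size_after_vacuum = os.path.getsize(path)
print(size_full, size_after_delete, size_after_vacuum)
```

The parallel in the question's setup: after the monthly delete, the 10 GB file still counts against the Express limit until the file itself is shrunk, which is why the size "remains high".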

Partition of tablespace

Posted: 19 May 2013 08:08 PM PDT

My PostgreSQL server runs on a Windows server, and three tablespaces have been created. The server is installed on the C: drive and the tablespaces are on the D: and E: drives. How do I take a backup and restore it? Is it similar to a normal backup? And when restoring on another machine, do I need the same drive setup, or can I restore everything to a single drive? Also, from the tablespace folder, is it possible to retrieve the data for only one table, which is huge?
