Friday, August 23, 2013

[how to] SQL Server 2008 : Best practice to reorganize/ rebuild index by automation



SQL Server 2008 : Best practice to reorganize/ rebuild index by automation

Posted: 23 Aug 2013 08:48 PM PDT

In SQL Server 2008, what is the best practice for reorganizing/rebuilding indexes through automation?
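A minimal sketch of one common automation pattern (the 5%/30% thresholds, the cursor approach, and running it as a scheduled Agent job are assumptions, not an official best practice; many people simply schedule Ola Hallengren's free IndexOptimize script instead): query sys.dm_db_index_physical_stats and choose REORGANIZE or REBUILD per index based on fragmentation.

-- Hedged sketch: REORGANIZE between 5% and 30% fragmentation, REBUILD above 30%.
DECLARE @sql nvarchar(max);

DECLARE cur CURSOR FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
         + CASE WHEN s.avg_fragmentation_in_percent > 30
                THEN N' REBUILD;' ELSE N' REORGANIZE;' END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
    JOIN sys.objects AS o ON o.object_id = s.object_id
    WHERE s.avg_fragmentation_in_percent > 5
      AND i.name IS NOT NULL;   -- skip heaps

OPEN cur;
FETCH NEXT FROM cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;    -- run one maintenance statement per index
    FETCH NEXT FROM cur INTO @sql;
END
CLOSE cur;
DEALLOCATE cur;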

SQL Server 2008/R2 Backup and restore job script

Posted: 23 Aug 2013 08:03 PM PDT

Is there a stored procedure or script for a backup and restore job, for a single database or for all databases?
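One common approach is a loop over sys.databases; a minimal sketch follows (the D:\Backups path and naming convention are assumptions, and the restore is shown commented out because logical file names differ per database):

DECLARE @name sysname, @sql nvarchar(max);

DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4            -- skip master, tempdb, model, msdb
      AND state_desc = 'ONLINE';

OPEN dbs;
FETCH NEXT FROM dbs INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name)
             + N' TO DISK = N''D:\Backups\' + @name + N'.bak'' WITH INIT;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM dbs INTO @name;
END
CLOSE dbs;
DEALLOCATE dbs;

-- Restore sketch for a single database (logical file names are assumptions):
-- RESTORE DATABASE [MyDb] FROM DISK = N'D:\Backups\MyDb.bak'
-- WITH MOVE 'MyDb'     TO 'D:\Data\MyDb.mdf',
--      MOVE 'MyDb_log' TO 'D:\Data\MyDb_log.ldf', REPLACE;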

SQL server migration from 2000 to 2008/R2

Posted: 23 Aug 2013 07:53 PM PDT

I need to learn, step by step, how to migrate from SQL Server 2000/2005 to 2008/R2, including the dos and don'ts.

1) What is the best method, and how is it done?  2) Are any scripts available?

Thanks
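One commonly used path is backup/restore followed by post-upgrade housekeeping; a hedged sketch (the database name, file paths, and chosen compatibility level are assumptions):

-- On the SQL Server 2000/2005 source: take a full backup.
BACKUP DATABASE [LegacyDb] TO DISK = N'D:\Backups\LegacyDb.bak' WITH INIT;

-- On the SQL Server 2008/R2 target: restore, relocating the files.
RESTORE DATABASE [LegacyDb] FROM DISK = N'D:\Backups\LegacyDb.bak'
WITH MOVE 'LegacyDb'     TO 'D:\Data\LegacyDb.mdf',
     MOVE 'LegacyDb_log' TO 'D:\Data\LegacyDb_log.ldf';

-- Post-restore steps frequently recommended after an upgrade from 2000:
ALTER DATABASE [LegacyDb] SET COMPATIBILITY_LEVEL = 100;  -- 100 = SQL Server 2008
USE [LegacyDb];
DBCC UPDATEUSAGE (0);      -- fix row/page counts carried over from SQL 2000
EXEC sp_updatestats;
DBCC CHECKDB;

Logins and Agent jobs live in master and msdb rather than inside the user database, so they have to be moved separately (Microsoft KB 918992 describes the usual sp_help_revlogin approach for logins).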

Best method to copy a database from one instance to another

Posted: 23 Aug 2013 08:55 PM PDT

What is the best method to move/copy a database from production to test (and from test to production), including all objects, jobs, logins, and data?

Query to find GUIDs "not in" a null-containing subquery returns no results

Posted: 23 Aug 2013 06:58 PM PDT

When I have a query that checks whether a uniqueidentifier column's value does not exist in another table, and that table contains a NULL value, I get no results back. If the subquery does not return a NULL it works fine; it only happens when using NOT IN.

I know I can just do a not null check in my subquery, but I am curious why this does not work.

Query Example:

select a.guid from tableA a where a.guid not in (select b.guid from tableB)

Working Test:

select 1 where newid() not in (select newid())

Broken Test:

select 1 where newid() not in (select null)
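The usual explanation is that NOT IN compares against every value returned by the subquery, and a comparison with NULL yields UNKNOWN, so once a NULL is present no row can ever satisfy the predicate. A sketch of the common workaround, rewriting the first query above with NOT EXISTS (same assumed table and column names):

-- NOT EXISTS is not affected by NULLs in tableB.guid:
select a.guid
from tableA a
where not exists (select 1 from tableB b where b.guid = a.guid);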

Get result of joining multiple tables as one row

Posted: 23 Aug 2013 05:58 PM PDT

I have these 2 tables:

table1:

id | name
---------
1  | john
2  | jack

table2:

id | profile_id | institution
-----------------------------
1  | 1          | SFU
2  | 1          | UBC
3  | 2          | BU
4  | 2          | USC
5  | 2          | SFU

If I want to get all the information about a user using his userid, I can simply join them using this:

select a.id, a.name, b.institution from table1 a, table2 b
where a.id = USER_ID and a.id = b.profile_id

which for USER_ID = 1 returns:

id | name | institution
-----------------------
1  | john | SFU
1  | john | UBC

What I actually need is a single row instead of multiple rows. Is it possible in any way to get something like this? (I haven't seen a way to do it, but I'm not sure.)

id | name | institution
-----------------------
1  | john | [SFU, UBC]
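If this is MySQL (an assumption, since the engine is not stated), GROUP_CONCAT collapses the joined rows into one delimited value; Postgres and SQL Server 2017+ have STRING_AGG for the same idea. A sketch against the tables above:

-- Hedged sketch, assuming MySQL; USER_ID = 1 as in the example.
SELECT a.id,
       a.name,
       GROUP_CONCAT(b.institution ORDER BY b.id SEPARATOR ', ') AS institutions
FROM table1 a
JOIN table2 b ON b.profile_id = a.id
WHERE a.id = 1
GROUP BY a.id, a.name;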

Optimize Query of Count Less Than

Posted: 23 Aug 2013 07:02 PM PDT

I would like to optimize a query in SQL Server that determines whether a table has fewer records than a certain amount. I don't need the complete count, just whether it is less than, say, 2.

Would it be efficient to do something like this?...

if (
     select count(*)
     from (select top (2) *
           from sys.databases) t
   ) > 1
begin
  select 1
end
else
begin
  select 0
end

Problems migrate database from 4 to 5.5 mysql

Posted: 23 Aug 2013 04:01 PM PDT

I have a server running MySQL 4.0 with the InnoDB engine.

I need to migrate to another server running version 5.5 (Linux).

I dumped the full database using mysqldump. When I try to restore it on the 5.5 server there is a problem: the file generated by 4.0 writes TYPE=INNODB in the SQL script, but MySQL 5.5 only accepts the ENGINE=INNODB form at table-creation time, so the process aborts.

I searched on Google, called some friends, and found nothing. The SQL file is 16 gigabytes; it is not possible to edit it with a file editor.

Error example:

CREATE TABLE tabadi (
  nAdicao char(3) default '0',
  nSeqAdic char(3) NOT NULL default '',
  cFabricante varchar(60) NOT NULL default '',
  vDescDi varchar(15) NOT NULL default '',
  xPed int(6) NOT NULL default '0',
  nItemPed int(6) NOT NULL default '0',
  nDI varchar(12) NOT NULL default ''
) TYPE=InnoDB

ERROR 1064 (42000) at line 19: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'TYPE=InnoDB' at line 9

PS: I did try changing TYPE to ENGINE and that solves the problem.

Can anybody help me?

Thanks a lot.

Michel
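Since the 16 GB dump is too big for an editor, a stream edit can rewrite the old syntax without loading the file into memory. A sketch, assuming GNU sed on the Linux target and a dump file named dump.sql:

# Rewrite TYPE=InnoDB to ENGINE=InnoDB in place (sed streams the file, so size is not an issue).
sed -i 's/TYPE=InnoDB/ENGINE=InnoDB/g' dump.sql

# Or do it on the fly while restoring, leaving the original dump untouched
# (add the I flag, GNU sed only, if the dump mixes upper and lower case):
sed 's/TYPE=InnoDB/ENGINE=InnoDB/g' dump.sql | mysql -u root -p targetdb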

Unable to drop non-PK index because it is referenced in a foreign key constraint

Posted: 23 Aug 2013 01:51 PM PDT

I have a table named MyTable. The primary key is an identity int column named MyTableID. There is a unique clustered index on the PK column MyTableID named PK_MyTable.

I noticed there is an additional non-clustered unique index IX_MyTable_MytableID on that table with a single column MyTableID, and no other included columns. This index is obviously redundant, but when I try to delete it, I get an error message:

The constraint 'IX_MyTable_MyTableID' is being referenced by table 'OtherTable',   foreign key constraint 'FK__OtherTable__MyTableID__369C23FC'.  

Why is the FK constraint relying on the non-clustered unique index instead of the primary key constraint? How do I update the FKs to use the clustered PK index instead of the other index?
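One way to untangle it (a sketch using the names from the question; the FK column in OtherTable is assumed to be MyTableID, and the new constraint name is made up): drop the foreign key, drop the redundant index, then recreate the foreign key so it binds to the primary key.

BEGIN TRANSACTION;

-- 1. Drop the FK that is bound to the redundant unique index.
ALTER TABLE dbo.OtherTable DROP CONSTRAINT [FK__OtherTable__MyTableID__369C23FC];

-- 2. The index can now be dropped.
DROP INDEX IX_MyTable_MyTableID ON dbo.MyTable;

-- 3. Recreate the FK; with only PK_MyTable left, it will reference the primary key.
ALTER TABLE dbo.OtherTable
    ADD CONSTRAINT FK_OtherTable_MyTable
    FOREIGN KEY (MyTableID) REFERENCES dbo.MyTable (MyTableID);

COMMIT;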

Optimize simple query using ORDER BY date and text

Posted: 23 Aug 2013 12:46 PM PDT

I have a query against a table in Postgres, ordered by a date field and a number field; the table has 1,000,000 records.

The data types of the table are:

fcv_id = serial
fcv_fecha_comprobante = timestamp without time zone
fcv_numero_comprobante = varchar(60)

The query is:

SELECT fcv_id, fcv_fecha_comprobante FROM factura_venta
ORDER BY fcv_fecha_comprobante, fcv_numero_comprobante

This query takes about 5 seconds, but if I take out the ORDER BY it takes only 0.499 seconds.

The problem is that I need to run this query in the shortest time possible, so I searched Google for what I could do and created a composite index with the following statements:

CREATE INDEX factura_venta_orden ON factura_venta
USING btree (fcv_fecha_comprobante ASC NULLS LAST
           , fcv_numero_comprobante ASC NULLS LAST);
ALTER TABLE factura_venta CLUSTER ON factura_venta_orden;

But the query takes the same time, or even longer.

I'm using Postgres 9.0.13; here is the EXPLAIN output with 73436 rows:

Sort  (cost=11714.03..11897.62 rows=73436 width=27) (actual time=1260.759..1579.853 rows=73436 loops=1)
  Sort Key: fcv_fecha_comprobante, fcv_numero_comprobante
  Sort Method:  external merge  Disk: 2928kB
  ->  Seq Scan on factura_venta  (cost=0.00..4018.36 rows=73436 width=27) (actual time=0.363..162.558 rows=73436 loops=1)
Total runtime: 1694.882 ms

Postgres is running on a Phenom II 1055T (3 cores) with 8 GB RAM and a 500 GB disk.

How can I optimize this query?
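Two hedged things to try, not guaranteed fixes: the plan shows the sort spilling to disk ("external merge  Disk: 2928kB"), so giving the session enough work_mem may keep the sort in memory; and since the table was only marked with CLUSTER ON, actually running CLUSTER and refreshing statistics lets the planner see the new physical order. The 16MB value below is an assumption.

-- Per-session bump; the sort only needed ~3 MB on disk.
SET work_mem = '16MB';

-- CLUSTER physically reorders the table once; ANALYZE refreshes planner statistics.
CLUSTER factura_venta USING factura_venta_orden;
ANALYZE factura_venta;

EXPLAIN ANALYZE
SELECT fcv_id, fcv_fecha_comprobante
FROM factura_venta
ORDER BY fcv_fecha_comprobante, fcv_numero_comprobante;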

Mysql Fragmentation. How bad is too bad?

Posted: 23 Aug 2013 06:33 PM PDT

I ran SHOW TABLE STATUS for my company's production site and it shows a value close to 49085939712 for just about every table in my database, and we have around 400 tables.

I am not a DB administrator and my MySQL skills are not the best when it comes to optimizing. To me it looks pretty bad, but I need other opinions.

My question is: how bad is too bad? On average, what threshold tells you it's time to optimize a single table?
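A rough way to see how much reclaimable space each table carries (a sketch; 'mydb' and the table name are placeholders) is to compare DATA_FREE with the table size in information_schema, and then OPTIMIZE TABLE the worst offenders:

-- Tables with the most unused (fragmented) space first.
SELECT TABLE_NAME,
       ROUND(DATA_LENGTH  / 1024 / 1024) AS data_mb,
       ROUND(INDEX_LENGTH / 1024 / 1024) AS index_mb,
       ROUND(DATA_FREE    / 1024 / 1024) AS free_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
ORDER BY DATA_FREE DESC;

-- Rebuild a single table when free_mb is large relative to data_mb:
OPTIMIZE TABLE mydb.some_big_table;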

How big should I make a column in an audit table for storing the output of COLUMNS_UPDATED()?

Posted: 23 Aug 2013 10:10 AM PDT

I have a stored procedure which creates a complementary trigger and audit table when a table name is passed into it. The SP filters out all the calculated columns and other things that stop you doing a SELECT * for auditing, and then writes a trigger that inserts those specific columns from table into table_audit upon DELETE/INSERT/UPDATE, along with some audit data such as HOST_NAME() and COLUMNS_UPDATED(). This has been little used but has generally worked for the clients who've asked for it.

In a recent round of testing I was asked to set up the auditing on a test database. This caused inserts into our main table to fail with "string or binary data would be truncated". After investigation I found that the column where the result of COLUMNS_UPDATED() was being stored was the issue.

The definition of the column was:

[UpdateColumns] [varbinary](16) NULL  

Changing the definition to this has made everything work again:

[UpdateColumns] [varbinary](24) NULL  

However, what this highlights is that I don't understand the relationship between the number of columns in the table (95 in this case, 7 of which are calculated) and the size of the output from COLUMNS_UPDATED(). I thought [varbinary](16) = 128 bits, which should be more than enough flags for 95 columns.

So my question is: what is the relationship?

Secondary question: can I easily derive a value for x in [UpdateColumns] [varbinary](x) from the number of columns that will be audited when building the trigger, or would I be better off just setting x to some larger number?
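Assuming the documented layout of COLUMNS_UPDATED() (one bit per column in ordinal order, padded up to whole bytes), the minimum size is CEILING(column_count / 8) bytes, which a trigger-generating SP could compute like this (dbo.MyTable is a placeholder):

-- Bytes needed to hold one bit per column of the audited table.
DECLARE @bytes int;

SELECT @bytes = CEILING(COUNT(*) / 8.0)
FROM sys.columns
WHERE object_id = OBJECT_ID(N'dbo.MyTable');

SELECT @bytes AS varbinary_size_needed;   -- e.g. 95 columns -> 12 bytes

By that arithmetic varbinary(16) should already have covered 95 columns, so the truncation is surprising; computing the size from sys.columns at trigger-generation time (or rounding up generously) sidesteps the question either way.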

Deleting users need delete procedures [on hold]

Posted: 23 Aug 2013 01:39 PM PDT

I have an MDF from 2000. Over the years we have moved this database many times, by backup, to other computers and/or domains.

To delete a user I first need to delete that user's permissions on objects such as procedures. Is there a simple way to delete users, or to find out which procedures are linked to a given user?

I need to delete old or inactive users.

To clarify: I have a lot of old/inactive users from other domains. To delete a user I do this: under Security\Users I choose the user, right-click, Delete; SQL Server then shows a message that the user cannot be deleted because it has permissions on various procedures. So I have to open the procedures one by one, remove the user from each, and only then can I delete it. Is there a simpler way to do this?

Thanks Alejandro.
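To list everything blocking the drop without opening each procedure, the explicit permissions can be read from the catalog (a sketch, assuming SQL Server 2005 or later; 'OLDDOMAIN\SomeUser' is a placeholder):

-- Every explicit object-level permission held by the user.
SELECT pr.name  AS principal_name,
       dp.permission_name,
       dp.state_desc,
       OBJECT_SCHEMA_NAME(dp.major_id) + '.' + OBJECT_NAME(dp.major_id) AS object_name
FROM sys.database_permissions AS dp
JOIN sys.database_principals AS pr
  ON pr.principal_id = dp.grantee_principal_id
WHERE pr.name = N'OLDDOMAIN\SomeUser'
  AND dp.class = 1;   -- 1 = object or column permissions

-- Each row can then be revoked before dropping the user, for example:
-- REVOKE EXECUTE ON dbo.SomeProcedure FROM [OLDDOMAIN\SomeUser];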

mysql replication for certain databases

Posted: 23 Aug 2013 05:25 PM PDT

I am new to MySQL and I have a question regarding MySQL replication.

Currently I have a master database that replicates to four slave servers. I need to set up a few more replication servers (as slaves), but these will replicate only a few databases. How would I approach this?

Should I be using something like this in the slave's /etc/my.cnf?

replicate-ignore-db=db_name

Can I set something in the slave configuration stating exactly which databases and tables should be replicated?

Should the changes happen only in the slave configuration, or on the master as well? The reason I ask is that this master currently replicates a whole lot of databases to the other slaves, and that should remain as it is.
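Filtering is normally configured on the slaves that need it, so the master and the existing slaves stay untouched. A sketch for one such slave's /etc/my.cnf (the database names are placeholders):

[mysqld]
# Replicate only these schemas on this slave; everything else is skipped.
replicate-do-db = app_db
replicate-do-db = reporting_db

# Optional finer-grained filtering at table level (wildcards allowed):
# replicate-wild-do-table = app_db.orders%

One documented caveat: with statement-based replication, replicate-do-db filters on the default (USE) database of each statement, so cross-database updates can be skipped unexpectedly; replicate-wild-do-table is often safer in that case.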

Replication is missing a specific set of rows on subscriber

Posted: 23 Aug 2013 11:13 AM PDT

So I've got two servers, we'll call them Server A and Server B.

Server A has a table, Table 1, which gets replicated to Server B.

Server B has a table, Table 2, which gets replicated to Server A.

All replication is transactional and pushed from the publisher continuously.

On Server B, there is a trigger on Table 1 which will ultimately create a record in Table 2.

Other various processes also create additional records in Table 2. All of the records in Table 2 are supposed to get replicated back to Server A.

In short, the data from Server A, Table 1 is supposed to make a round trip back to Server A in Table 2 along with other records. However, only the 'other records' are coming back. None of the data that originated on Server A is coming through, even though it has been successfully inserted into Table 2 on Server B.

Any ideas?

Additional info:

When I generate a new snapshot, all the records that didn't replicate are included in the snapshot and do get replicated over. My suspicion is that something happening behind the scenes when the trigger, fired by replication, inserts the data into Table 2 is preventing those records from replicating over.

The trigger looks something like:

Create Trigger [dbo].[trg_INS_TableA]
For Insert
As

Declare @<Various Parameters>

Select @<Various Parameters> = <Record Data>
From inserted

Declare @RC Int
Exec @RC = [MyDB].[dbo].[up_INS_TableB] @<Various Parameters>

up_INS_TableB then transforms the Table A data and inserts it into Table B.

How can I find the not used numbers in a column?

Posted: 23 Aug 2013 12:13 PM PDT

How can I find, in a column filled with numbers, the ones that are not used?

Table_A:

`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`client_code` int(4) unsigned NOT NULL,
`client_name` varchar(50) NOT NULL,
PRIMARY KEY (`id`)

INSERT INTO Table_A (`client_code`,`client_name`) VALUES (1,'Bob');
INSERT INTO Table_A (`client_code`,`client_name`) VALUES (2,'Anna');
INSERT INTO Table_A (`client_code`,`client_name`) VALUES (5,'Jim');
INSERT INTO Table_A (`client_code`,`client_name`) VALUES (6,'Tom');
INSERT INTO Table_A (`client_code`,`client_name`) VALUES (10,'Mattew');

If I SELECT `client_code` FROM Table_A the output will be 1, 2, 5, 6, 10. How can I get the numbers that are not used? (3, 4, 7, 8, 9, 11, 12, ...)
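A self-join sketch against the table above that reports each gap's start and end (plain MySQL; it returns nothing when there are no gaps):

-- For every client_code whose successor is missing, report the gap range.
SELECT a.client_code + 1 AS gap_starts_at,
       (SELECT MIN(b.client_code) - 1
        FROM Table_A b
        WHERE b.client_code > a.client_code) AS gap_ends_at
FROM Table_A a
WHERE NOT EXISTS (SELECT 1 FROM Table_A x
                  WHERE x.client_code = a.client_code + 1)
  AND a.client_code < (SELECT MAX(c.client_code) FROM Table_A c);

With the sample data this returns the gaps 3-4 and 7-9; numbers above the current maximum (11, 12, ...) have no upper bound, so they are not listed.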

What are the ports and directionality of the TCP connections used in MySQL replication?

Posted: 23 Aug 2013 02:32 PM PDT

I want to know how replication works in MySQL.

Both the slave and the master have the MySQL server running, say on port 3306. The slave acts like a client and the master acts like a server.

Does the slave always bind to 3306 when it makes a TCP connection to send requests? When the response is sent back by the server, is the destination port 3306, and does the slave's mysqld process the response because it is listening on that port?

I am not sure if my understanding is correct.

I am also interested in knowing which ingress and egress rules should be added for replication to work.

For the TCP connection from the slave, will it pick any available TCP port, or is it always 3306?

Any help is appreciated!

Thanks!
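My understanding, stated as an assumption rather than an authoritative answer: the slave initiates the connection from an ephemeral source port to the master's listening port (3306 by default), so only that one direction needs to be opened. A hedged iptables sketch with placeholder addresses:

# On the master: accept replication connections from the slave (placeholder slave IP).
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 3306 -j ACCEPT

# On the slave: allow outbound connections to the master's 3306 (placeholder master IP).
iptables -A OUTPUT -p tcp -d 192.0.2.20 --dport 3306 -j ACCEPT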

Trace flag and which need to be turned off and Why

Posted: 23 Aug 2013 04:46 PM PDT

In SQL Server there are many trace flags. Why do some of them need to be turned off? Somewhere I saw that trace flag 8017 needs to be turned off. So I want to know which trace flags need to be turned off, and why.

Streaming Replication Resource Management (PGSQL) on FreeBSD 8.x

Posted: 23 Aug 2013 10:31 AM PDT

I am attempting to set up PGSQL-HA streaming replication on FreeBSD 8.2 (willing to upgrade to 8.4 if need be).

The setup is simple: 2 nodes, one active and one standby. I have successfully set up streaming replication between the two nodes, but I am now trying to set up a resource agent/manager to deal with automatic failover and STONITH.

I have attempted to use Heartbeat/Corosync/Pacemaker for this, but am running into difficulty building them from source (Pacemaker, in particular).

Is there an easier way than using Pacemaker for a simple 2-node setup? I have little experience with this, and especially with FreeBSD.

Thanks!

*Note: FreeBSD 8.x is required; the servers are running other services that would make it too difficult to upgrade or change the OS.

Mysql foreign key cascade delete

Posted: 23 Aug 2013 01:15 PM PDT

Does MySQL have a way to delete all child rows when a row is deleted from a "parent" table?

An example table setup is below. The issue I am trying to avoid is having to maintain a script that deletes the data, as there will be a lot of tables similar to ex_item_a (they have that same foreign key relationship). In this case, let's say that if I delete someone from the user table it should delete all the linked rows.

CREATE TABLE IF NOT EXISTS `account` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`id`)
);

CREATE TABLE IF NOT EXISTS `user` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `account_id` int(10) unsigned NOT NULL,
    `email` varchar(50) NOT NULL,
    PRIMARY KEY (`id`),
    KEY `account_id` (`account_id`),
    CONSTRAINT `FK_user_account` FOREIGN KEY (`account_id`) REFERENCES `account` (`id`) ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS `account_tbl_ex` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `account_id` int(10) unsigned NOT NULL,
    PRIMARY KEY (`id`),
    KEY `account_id` (`account_id`),
    CONSTRAINT `FK_account_tbl_ex_account` FOREIGN KEY (`account_id`) REFERENCES `account` (`id`)
);

CREATE TABLE IF NOT EXISTS `account_tbl_ex_link` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `link_id` int(10) unsigned NOT NULL,
    PRIMARY KEY (`id`),
    KEY `link_id` (`link_id`),
    CONSTRAINT `FK_account_tbl_ex_link_account_tbl_ex` FOREIGN KEY (`link_id`) REFERENCES `account_tbl_ex` (`id`)
);

CREATE TABLE IF NOT EXISTS `ex_item_a` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `tbl_link_id` int(10) unsigned NOT NULL,
    PRIMARY KEY (`id`),
    KEY `tbl_link_id` (`tbl_link_id`),
    CONSTRAINT `FK_ex_item_a_account_tbl_ex_link` FOREIGN KEY (`tbl_link_id`) REFERENCES `account_tbl_ex_link` (`link_id`)
);

UPDATE: I did a little more research and tested on my local database; it turns out the child tables just need ON DELETE CASCADE on their foreign keys, and then deletes work from the top down perfectly.
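For anyone hitting the same thing: adding the cascade to an existing foreign key in MySQL is a drop-and-recreate, sketched here for one of the tables from the question:

-- MySQL cannot alter a foreign key in place; drop it and add it back with ON DELETE CASCADE.
ALTER TABLE account_tbl_ex
    DROP FOREIGN KEY FK_account_tbl_ex_account;

ALTER TABLE account_tbl_ex
    ADD CONSTRAINT FK_account_tbl_ex_account
        FOREIGN KEY (account_id) REFERENCES account (id)
        ON DELETE CASCADE;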

Geohash implementation in PostgreSQL

Posted: 23 Aug 2013 08:52 PM PDT

Are there any geohash implementations for PostgreSQL? I couldn't find one.
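If PostGIS is an option (an assumption; it ships an ST_GeoHash function), a geohash can be produced directly from a lon/lat point:

-- Geohash of a WGS84 point at 9 characters of precision.
SELECT ST_GeoHash(ST_SetSRID(ST_MakePoint(-123.1207, 49.2827), 4326), 9);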

SQL Server 2012 Express fails at repair install, produces error 5178

Posted: 23 Aug 2013 01:38 PM PDT

My SQL Server Express Service will not start up.

To produce this problem, I basically cloned my old hard drive (Which had SQL Server Express 2012 installed) to a new hard drive (Seagate Momentus XT 750).

EDIT: I am adding info on how I cloned my hard drive, as requested by SQLRockStar. I used Seagate DiscWizard. The program produced errors when trying to clone the HD with the simple "clone HD" command, so I "cloned" it the following way with the help of Seagate tech support:

  • Place the new hard drive (Momentus XT) in the slave slot, and the old HD in the master slot.
  • Create backup image of old HD and save on Old HD.
  • Create Seagate DiscWizard Bootable CD
  • Physically Swap both hard drive positions, (old now is in slave, new is in Master)
  • Boot with Seagate DiscWizard Bootable CD and restore backup image of old HD onto new HD.

Afterwards, I tried starting SQL Server Express 2012 on my Momentus XT and it would not start. So, I tried performing a repair installation of SQL Server, and it failed: see summary below:

Overall summary:    Final result:                  Failed: see details below    Exit code (Decimal):           -2061893608    Start time:                    2013-08-12 15:53:13    End time:                      2013-08-12 16:13:13    Requested action:              Repair    Setup completed with required actions for features.  Troubleshooting information for those features:    Next step for SQLEngine:       Use the following information to resolve the error, and then try the setup process again.    Next step for Replication:     Use the following information to resolve the error, and then try the setup process again.      Machine Properties:    Machine name:                  MATT-LAPTOP    Machine processor count:       8    OS version:                    Windows 7    OS service pack:               Service Pack 1    OS region:                     United States    OS language:                   English (United States)    OS architecture:               x64    Process architecture:          64 Bit    OS clustered:                  No    Product features discovered:    Product              Instance             Instance ID                    Feature                                  Language             Edition              Version         Clustered     SQL Server 2008                                                          Management Tools - Basic                 1033                 Express Edition      10.0.1600.22    No            SQL Server 2012      SQLEXPRESS           MSSQL11.SQLEXPRESS             Database Engine Services                 1033                 Express Edition      11.0.2316.0     No            SQL Server 2012      SQLEXPRESS           MSSQL11.SQLEXPRESS             SQL Server Replication                   1033                 Express Edition      11.0.2316.0     No            SQL Server 2012                                                          Management Tools - Basic                 1033                 Express Edition      11.0.2316.0     No            SQL Server 2012                                                          LocalDB                                  1033                 Express Edition      11.0.2318.0     No            Package properties:    Description:                   Microsoft SQL Server 2012     ProductName:                   SQL Server 2012    Type:                          RTM    Version:                       11    SPLevel:                       0    Installation location:         c:\215ca8b216eb992f2f4a\x64\setup\    Installation edition:          Express    User Input Settings:    ACTION:                        Repair    AGTDOMAINGROUP:                <empty>    AGTSVCACCOUNT:                 NT AUTHORITY\NETWORK SERVICE    AGTSVCPASSWORD:                <empty>    AGTSVCSTARTUPTYPE:             Disabled    ASCONFIGDIR:                   Config    ASSVCACCOUNT:                  <empty>    ASSVCPASSWORD:                 <empty>    CLTSTARTUPTYPE:                0    CLTSVCACCOUNT:                 <empty>    CLTSVCPASSWORD:                <empty>    CONFIGURATIONFILE:                 CTLRSTARTUPTYPE:               0    CTLRSVCACCOUNT:                <empty>    CTLRSVCPASSWORD:               <empty>    ENU:                           true    FAILOVERCLUSTERGROUP:          <empty>    FAILOVERCLUSTERNETWORKNAME:    <empty>    FTSVCACCOUNT:                  <empty>    FTSVCPASSWORD:                 <empty>    HELP:                          false    INDICATEPROGRESS:              false    INSTANCENAME:                  SQLEXPRESS    ISSVCACCOUNT:  
                NT AUTHORITY\Network Service    ISSVCPASSWORD:                 <empty>    ISSVCSTARTUPTYPE:              Automatic    QUIET:                         false    QUIETSIMPLE:                   false    SQLSVCACCOUNT:                 NT Service\MSSQL$SQLEXPRESS    SQLSVCPASSWORD:                <empty>    UIMODE:                        AutoAdvance    X86:                           false      Configuration file:            C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130812_155231\ConfigurationFile.ini    Detailed results:    Feature:                       Management Tools - Basic    Status:                        Passed      Feature:                       Database Engine Services    Status:                        Failed: see logs for details    Reason for failure:            An error occurred during the setup process of the feature.    Next Step:                     Use the following information to resolve the error, and then try the setup process again.    Component name:                SQL Server Database Engine Services Instance Features    Component error code:          0x851A0018    Error description:             Could not find the Database Engine startup handle.    Error help link:               http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2316.0&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024      Feature:                       SQL Server Replication    Status:                        Failed: see logs for details    Reason for failure:            An error occurred for a dependency of the feature causing the setup process for the feature to fail.    Next Step:                     Use the following information to resolve the error, and then try the setup process again.    Component name:                SQL Server Database Engine Services Instance Features    Component error code:          0x851A0018    Error description:             Could not find the Database Engine startup handle.    Error help link:               http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2316.0&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024      Feature:                       SQL Browser    Status:                        Passed      Feature:                       SQL Writer    Status:                        Passed      Feature:                       LocalDB    Status:                        Passed      Feature:                       SQL Client Connectivity    Status:                        Passed      Feature:                       SQL Client Connectivity SDK    Status:                        Passed      Feature:                       Setup Support Files    Status:                        Passed    Rules with failures:    Global rules:    There are no scenario-specific rules.    Rules report file:               C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130812_155231\SystemConfigurationCheck_Report.htm    The following warnings were encountered while configuring settings on your SQL Server.  These resources / settings were missing or invalid so default values were used in recreating the missing resources.  Please review to make sure they don't require further customization for your applications:    Service SID support has been enabled on the service.  Service SID support has been enabled on the service.    
The following resources could not be configured during repair without additional user input.  Review the warnings to understand your next steps:    The service failed to start for an unknown reason. For more information, see the event logs and the SQL Server error logs.  

I looked at the error log and it said

Error: 5178, Severity: 16, State: 1 Cannot use file 'c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\master.mdf' because it was originally formatted with sector size 4096 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.

I read that last error message and am really confused. I'm led to believe that this is a problem with SQL Server; my HD has a 4096-byte sector size.

UPDATE:

More information: I have discovered that my old hard drive had a 512-byte physical sector size and my new HD has a 4096-byte sector size. I hear that there are conversion issues between the two sector sizes, but SQL Server is the only program producing errors on my system; I don't understand it.

UPDATE:

I have attempted to uninstall SQL Server and reinstall it. It did not succeed in installing the instance.

MySQL row does not persist, but primary key with auto-increment is incremented

Posted: 23 Aug 2013 04:28 PM PDT

I have a MySQL database that is acting oddly. I insert a new row and observe that for a brief period of time (less than 30 seconds), the row persists as expected in the table. After this time, though, the row vanishes (despite no other queries being executed).

This table has an integer ID as a primary key, with auto-increment set, and the primary key is auto-incremented as expected for new rows. This leads me to believe there is not some kind of transactional rollback (and my table is MyISAM, so that shouldn't be possible anyways) or anything else that is somehow reverting the database to a previous state.

What logs should I be checking to see what is going on here? The contents of my '/var/log/mysql/error.log' are below, but I don't see anything unusual.

120815 21:01:01 [Note] Plugin 'FEDERATED' is disabled.
120815 21:01:02  InnoDB: Initializing buffer pool, size = 8.0M
120815 21:01:02  InnoDB: Completed initialization of buffer pool
120815 21:01:03  InnoDB: Started; log sequence number 0 44233
120815 21:01:03 [Note] Event Scheduler: Loaded 0 events
120815 21:01:03 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.63-0ubuntu0.11.10.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
120815 21:01:04 [ERROR] /usr/sbin/mysqld: Table './appname/users' is marked as crashed and should be repaired
120815 21:01:04 [Warning] Checking table:   './appname/users'
120815 21:10:34 [Note] /usr/sbin/mysqld: Normal shutdown
120815 21:10:34 [Note] Event Scheduler: Purging the queue. 0 events
120815 21:10:34  InnoDB: Starting shutdown...
120815 21:10:39  InnoDB: Shutdown completed; log sequence number 0 44233
120815 21:10:39 [Note] /usr/sbin/mysqld: Shutdown complete

I noted the 'crashed' mark on the appname/users table, but mysqlcheck suggests the table is OK.

Any thoughts?
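Not a diagnosis of the vanishing rows, but given the 'marked as crashed' line in that log, explicitly checking and repairing the MyISAM table flagged there seems worth doing (a sketch; copy the table's .MYD/.MYI files first):

-- More thorough than the automatic check the server ran at startup.
CHECK TABLE appname.users EXTENDED;

-- Only if the check reports problems:
REPAIR TABLE appname.users;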

Fixing wildcard expansion resulting in too many terms error (DRG-51030)

Posted: 23 Aug 2013 04:14 PM PDT

How can I resolve the wildcard_maxterm problem in Oracle 10g?

My index creation syntax is:

begin
    ctx_ddl.drop_preference('SUBSTRING_PREF');
end;
/

begin
    ctx_ddl.create_preference('SUBSTRING_PREF', 'BASIC_WORDLIST');
    ctx_ddl.set_attribute('SUBSTRING_PREF', 'SUBSTRING_INDEX','TRUE');
end;
/

begin
    ctx_ddl.drop_preference('bus_obj1_lexer');
end;
/

begin
    ctx_ddl.create_preference('bus_obj1_lexer','BASIC_LEXER');
    ctx_ddl.set_attribute('bus_obj1_lexer', 'base_letter', 'YES');
    ctx_ddl.set_attribute('bus_obj1_lexer', 'mixed_case', 'YES' );
    ctx_ddl.set_attribute('bus_obj1_lexer','printjoins', ';,@_');
    ctx_ddl.set_attribute('bus_obj1_lexer','skipjoins', '-');
end;
/

create index BUS_OBJ_FTS_Name on BUSINESS_OBJECTS1(name)
  indextype is ctxsys.context
  parameters ('wordlist SUBSTRING_PREF
               MEMORY 128M
               DATASTORE CTXSYS.DEFAULT_DATASTORE
               SECTION GROUP CTXSYS.AUTO_SECTION_GROUP
               STOPLIST CTXSYS.EMPTY_STOPLIST
               LEXER bus_obj1_lexer
               SYNC (ON COMMIT)');

And my query is:

select * from BUSINESS_OBJECTS1 where contains(name,'%WIN%')>0 and rownum<=100;  

There are 15 million rows in that table, and more than 50000 rows match that query. I have set wildcard_maxterm=50000 and default=20000 but I'm still getting this error:

DRG-51030: wildcard query expansion resulted in too many terms.

Can anybody help me solve this error?
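A hedged thing to verify: WILDCARD_MAXTERMS is an attribute of the BASIC_WORDLIST preference, so it has to be set on SUBSTRING_PREF itself and the index rebuilt before it takes effect; setting it anywhere else will not change the index. A sketch using the preference and index names from the question (the REPLACE rebuild syntax is my assumption of the quickest way to pick the attribute up):

begin
    ctx_ddl.set_attribute('SUBSTRING_PREF', 'WILDCARD_MAXTERMS', '50000');
end;
/

-- The preference is read when the index is (re)built:
alter index BUS_OBJ_FTS_Name rebuild parameters ('replace wordlist SUBSTRING_PREF');

If the expansion still exceeds the limit, narrowing the leading wildcard (for example 'WIN%' instead of '%WIN%', where the data allows it) reduces the number of expanded terms.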

export (mysqldump) databases from MySQL version 4.0?

Posted: 23 Aug 2013 10:14 AM PDT

Goal: export the schema and data from a MySQL version 4.0.16 DB.

Background:

I was asked to migrate a database from MySQL version 4.0.16. The database was installed on a Windows XP machine, and I just could not find the mysqldump tool on that machine.

I then used PuTTY to connect to a Linux machine that has mysqldump. I ran the command, but got an error:

-bash-4.1$ mysqldump --databases db_2_dump --host='myhost' --port='6012' -u <user> -p >db-export.sql
Enter password:
mysqldump: Error: 'You have an error in your SQL syntax.  Check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILE' when trying to dump tablespaces

I looked it up in the MySQL reference: INFORMATION_SCHEMA is new in version 5 and above.

This means the mysqldump on the Linux box is incompatible with the production MySQL server, which is at v4.0.16.

I tried to download the MySQL 4.0 server and install it on a Windows machine so that I could get hold of a compatible version, but searching the MySQL website found nothing older than 5.0.

(I also tried MySQL Workbench to connect to this DB. It failed with "Unknown character set utf8".)

So how can I export the schema and data from this legacy MySQL DB at all?
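A hedged idea: newer mysqldump clients query INFORMATION_SCHEMA only to dump tablespace definitions, and that step can be switched off with --no-tablespaces (-y), which should let the rest of the dump proceed against the 4.0 server. The same command as above with the extra flag (still an assumption that no other 4.0 incompatibility bites later):

-bash-4.1$ mysqldump --no-tablespaces --databases db_2_dump --host='myhost' --port='6012' -u <user> -p >db-export.sql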

Dropping Hypothetical Indexes

Posted: 23 Aug 2013 11:31 AM PDT

In the past I thought I had deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to cleanup; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server so would expect to have rights to do anything.

I've tried this with both DROP STATS and DROP INDEX statements but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around in this, I just noticed that if I right-click on the object, both the 'Script As' and 'Delete' options are greyed out.
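The DTA leftovers are flagged in the catalog, which also explains the greyed-out SSMS menu items. A sketch that only generates the DROP commands so they can be reviewed before running (the _dta_stat name prefix is the convention DTA uses):

-- Hypothetical indexes created by the Database Engine Tuning Advisor.
SELECT 'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
       + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id)) + '.'
       + QUOTENAME(OBJECT_NAME(i.object_id)) AS drop_cmd
FROM sys.indexes AS i
WHERE i.is_hypothetical = 1;

-- DTA statistics, which are dropped with DROP STATISTICS.
SELECT 'DROP STATISTICS ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
       + QUOTENAME(OBJECT_NAME(s.object_id)) + '.' + QUOTENAME(s.name) AS drop_cmd
FROM sys.stats AS s
WHERE s.name LIKE '\_dta\_stat%' ESCAPE '\';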

Avoiding performance hit from GROUP BY during FULLTEXT search?

Posted: 23 Aug 2013 11:14 AM PDT

Is there any clever way to avoid the performance hit from using group by during fulltext search?

SELECT p.topic_id, min(p.post_id)
FROM forum_posts AS p
WHERE MATCH (p.post_text) AGAINST ('baby shoes' IN BOOLEAN MODE)
GROUP BY p.topic_id
LIMIT 20;

In this example it's fetching the lowest post_id for unique topic_ids that match the text.

With the group by to find the min, it's taking 600ms in a million row database, with about 50K rows examined.

If I remove the MIN but leave the GROUP BY, it's the same slowness, so it's the GROUP hit.

I suspect this is because it can only use one index, the fulltext one?

key: post_text | Using where; Using temporary; Using filesort

Query_time: 0.584685  Lock_time: 0.000137  Rows_sent: 20  Rows_examined: 57751
Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0

Without the GROUP BY it's 1ms so this has to be filesort speed?

(I've removed ORDER BY and everything else to isolate where the hit is)

Thanks for any insight and ideas.

(using MyISAM under mariadb if it matters)

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 23 Aug 2013 12:14 PM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of non-high IOPS RDS vs EC2 but nothing with the new high IOPS feature in RDS.

sp_startpublication_snapshot Parameter(s)

Posted: 23 Aug 2013 07:14 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf a new name (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help doing) Generating the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall... On this DB server I have a 'shell' DB that they will be running the SP from; it has a history table so I can keep track of who created/deleted databases using my SPs. The only parameter for sp_startpublication_snapshot is @publication. I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e.: the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog]

Any ideas about how to accomplish this?

Thank you, Wes
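A hedged idea, not verified: system stored procedures execute in the context of the database they are qualified with, so calling the proc with a three-part name from the shell database may be all that is needed. The names below are taken from the question; the @PublisherDb/@PublicationName variables in the dynamic variant are hypothetical parameters of the shell SP.

-- Run from [WC_QACatalog]; the qualification switches context to the publishing database.
EXEC [WC48_Database1]..sp_startpublication_snapshot
     @publication = N'upb_Inspection_PrimaryArticles';

-- Dynamic variant for an arbitrary database name passed into the shell SP:
DECLARE @PublisherDb sysname, @PublicationName sysname, @proc nvarchar(400);
SET @PublisherDb = N'WC48_Database1';
SET @PublicationName = N'upb_Inspection_PrimaryArticles';
SET @proc = QUOTENAME(@PublisherDb) + N'..sp_startpublication_snapshot';
EXEC @proc @publication = @PublicationName;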
