Sunday, September 22, 2013

Easy replication for SQL Server: Can it be set up like mongodb or Oracle?

Posted: 22 Sep 2013 08:41 PM PDT

This might look like a noob question... but it isn't. (I have 18 years of SQL Server experience, and three years of MongoDB experience.)

The basic scenario is:

  • Data Center has big, primary SQL Server instance
  • Need a warm spare off site

("Warm Spare": Can be up to 5 or 10 minutes behind. While the primary is live, the spare is not used operationally in any way. Is there, and automatically is within 5-10 mins of current. Manual cutover is needed to bring it on line and use it.)

With SQL Server, to set up a backup server, I need to go through a lot of steps to get log shipping set up:

  • networking (open firewalls for multiple ports for Windows file shares)
  • file system shares
  • OS users with rights
  • SQL Agent
  • jobs to create tlogs, and to retrieve and apply tlogs
  • initial setup tasks (restoring the backup and the first set of logs)

I often use Redgate SQL Backup, which helps with many of these tasks.

But it is still quite a bit of setup work. Further, in some environments (like the Amazon RDS SQL Server service), I have 100% access to SQL Server, but zero access to the OS. Without OS access, I can't even begin to set up the backup server.

I was talking to a colleague (who is more Oracle centric), who said, "Either you have sipped too much Microsoft Kool-Aid, and sadly think that the limitations you describe are normal, or there is an easier way."

I think (hopefully correctly!) that SQL Server is world class.

So the question: Is there an easier way to set up an off site, warm spare SQL Server?

On MongoDB, it is REALLY EASY, and zero access to the OS is needed:

  1. Make sure a single port is open between two servers
  2. Tweak a conf file (which can be done via Mongo commands)
  3. rs.initiate() (Mongo command)
  4. rs.add("backup-db-server") (mongo command)

Done!

My colleague indicates that Oracle has a similar mechanism.

So my SQL friends: Can SQL Server match Mongo, or get in sight of Mongo, for simplicity of replication setup?
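Not a full answer, but the closest built-in analogue may be database mirroring: one open TCP port per server, no file shares, no Agent jobs. A minimal sketch, assuming Windows-authenticated endpoints, a full backup already restored WITH NORECOVERY on the spare, and hypothetical server names:

-- On both servers: create a mirroring endpoint (a single port, as with MongoDB)
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the spare (mirror), pointing at the primary:
ALTER DATABASE MyDb SET PARTNER = 'TCP://primary.example.com:5022';

-- On the primary (principal), pointing at the spare:
ALTER DATABASE MyDb SET PARTNER = 'TCP://spare.example.com:5022';

Where the edition allows it, running with SAFETY OFF (asynchronous, high-performance mode) roughly matches the 5-10 minute warm-spare tolerance described above.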

What databases do Yahoo, Bing, and Ask use? [on hold]

Posted: 22 Sep 2013 03:52 PM PDT

I'm trying to find information on popular search engines, and I wanted to know what databases they use. I've been searching everywhere and have found only a little information. I know Google uses BigTable, but I'm not sure about Yahoo, Bing, or Ask.

Can I delete SQL Server 2008 R2 "audittrace" (.trc) files

Posted: 22 Sep 2013 05:04 PM PDT

This is a precaution question :)


When I was configuring the SQL Server 2008 R2 database engine, I checked "Enable C2 audits", which I don't really need. Since then it has created a lot of files like audittrace20130918235148.trc. I disabled C2 auditing in SQL Server Management Studio by unchecking the checkbox.


Are those files really from the C2 audit? If so, can I safely remove them without worrying about the data stored in the databases? Again: I do not need the audit logs.
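A quick sanity check before deleting, sketched here on the assumption that the files came from the C2 trace: sys.traces lists any server-side traces still writing to disk, so you can confirm nothing still holds those files open:

-- List active server-side traces and their target files;
-- if no row points at the audittrace*.trc path, the files are closed.
SELECT id, status, path, max_size
FROM sys.traces;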

Mapping and Inserting Values into a Specialized table

Posted: 22 Sep 2013 03:27 PM PDT

For a system I'm designing, I have one parent entity, Computer:

Computer(computer_id,computer_processor,computer_hddrive,computer_memory);  

A computer can be one of two types, Desktop or Laptop, each having its own specific attributes.

Laptop(laptop_id(Foreign key of computer_id),laptop_color,laptop_battery);  

and similarly a table for Desktop.

Now the entries to these tables are made via an end-user form. My main question is: what would the exact INSERT queries be for adding, say, a laptop to the inventory, if I have three tables, one superclass and two subclass tables? I understand I would first have to make an entry in the Computer table and then in Laptop, but how will this be carried out automatically?
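A minimal sketch of the two-step insert, assuming MySQL with an auto-increment computer_id; the attribute values are made up for illustration. Wrapping both statements in a transaction keeps the pair atomic:

START TRANSACTION;

INSERT INTO Computer (computer_processor, computer_hddrive, computer_memory)
VALUES ('i5-3210M', '500GB', '8GB');

-- LAST_INSERT_ID() returns the computer_id generated above,
-- so the Laptop row points at its parent.
INSERT INTO Laptop (laptop_id, laptop_color, laptop_battery)
VALUES (LAST_INSERT_ID(), 'silver', '6-cell');

COMMIT;

To make it automatic, that pair usually lives in application code or a stored procedure that the form calls, rather than in the database firing on its own.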

How can I return column names in dot notation in MySQL?

Posted: 22 Sep 2013 12:56 PM PDT

I want to return column names into a table. But I also want to avoid redundancies like 'id' and 'id'.

I did:

INSERT INTO columns(name)
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name='contacts' or table_name='companies';

But, as expected, the id from each table shows up with the same field name.

How do I get it in the form: contacts.id and companies.id?
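A sketch of one way to build the qualified names with CONCAT, reusing the same INFORMATION_SCHEMA query:

INSERT INTO columns(name)
SELECT CONCAT(table_name, '.', COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name IN ('contacts', 'companies');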

How to create root user after running mysql_install_db?

Posted: 22 Sep 2013 01:49 PM PDT

I'm installing MySQL on Ubuntu and I changed the data directory. I created new data files using the mysql_install_db command.

So now I need to access my database using root but it won't let me in. I tried starting the database by skipping the grant tables:

mysqld --skip-grant-tables --skip-networking &  

Now I'm in, but the mysql.user table doesn't have a single row, which is logical after creating the data files from scratch, isn't it?

So I tried:

CREATE USER 'root'@'localhost' IDENTIFIED BY 'mypass';  

But it won't run because I'm running with --skip-grant-tables!

  • So how can I create a root user from this point? (See the sketch below.)
  • And how can I stop the MySQL instance I just started with --skip-grant-tables? It seems I always have to killall mysqld to do it!
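A sketch of the commonly suggested workaround: while --skip-grant-tables is in effect, account-management statements are disabled until the grant tables are reloaded, and FLUSH PRIVILEGES does exactly that without a restart:

FLUSH PRIVILEGES;  -- reload the grant tables so CREATE USER / GRANT work again

CREATE USER 'root'@'localhost' IDENTIFIED BY 'mypass';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Once root exists, mysqladmin -u root -p shutdown should stop the instance more gracefully than killall.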

Upgrade SQL Server 2000 database to 2008 R2 and enable new features

Posted: 22 Sep 2013 01:22 PM PDT

I have recently upgraded a SQL Server 2000 database to 2008 R2.


What I did was:

  1. Shut down the SQL Server 2000 (Express) service on the old machine,
  2. Move the data files (mydatabase.mdf and mydatabase.ldf) to the new machine,
  3. Run SQL Server Management Studio 2008,
  4. Connect to the local database engine,
  5. Attach the data files to the database,
  6. Change the compatibility level of the database to SQL 2008 (100).

Question: What else should I do to make migration complete?

I want to:

  1. use new features like checksumming and the full recovery model,
  2. make this database behave exactly as if it had been created in SQL 2008 R2,
  3. make this database fully compatible, correct, and perfectly suited for the new SQL 2008 R2 database engine.

In other words: I just want to know how to correctly and completely convert an old SQL 2000 database to a new 2008 R2 database, be confident that everything is done right, and be happy with all the new features.


I'm asking this question because I've found a lot of sites on the Internet saying so many different things that I'm confused: some say it is required to rebuild indexes, others say to do other things... and now I know nothing, so I want to hear an experienced person's opinion and clear, step-by-step instructions. I work for a very small company, I'm on my own, and I don't want to screw things up.

Thank you very much for your time!

Saelic
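A sketch of the post-attach steps that come up most often for a 2000-to-2008 R2 migration; MyDatabase is a placeholder, and this is not an official checklist:

ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;  -- enable page checksums
ALTER DATABASE MyDatabase SET RECOVERY FULL;         -- switch to the full recovery model

USE MyDatabase;
EXEC sp_updatestats;                                 -- refresh statistics for the newer optimizer

DBCC UPDATEUSAGE (MyDatabase);                       -- fix page/row counts carried over from 2000
DBCC CHECKDB (MyDatabase) WITH DATA_PURITY;          -- validate column values after the upgrade

A full backup afterwards starts the log chain for the new recovery model.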

Column 'Comments.Text' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause

Posted: 22 Sep 2013 08:46 AM PDT

They have explained the root of the problem very well and even advised using the max aggregate as a solution. But let's take the example:

group_id    item_id     comment
--------------------------------
1           abc         "blah-blah"
1           def         "tral-la-lah"
1           ghi
2           jkl
2           mno
2           pqr

I understand the solution: I must take the item that represents the group by max(item_id). But how do I take the comment of the selected item? How does max help me fetch the corresponding comment?

edit

Everything is as in that question. The only complication I'm adding here is the extra comment column, which I want to select along with max(item_id) grouped by group_id.

I am using this beast, http://meta.stackoverflow.com/a/52063/201017
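A sketch of the usual join-back pattern, with the table name items invented for illustration: aggregate first to find each group's max item, then join back to the base table to pick up that row's comment:

SELECT t.group_id, t.item_id, t.comment
FROM items AS t
JOIN (
    SELECT group_id, MAX(item_id) AS max_item_id
    FROM items
    GROUP BY group_id
) AS m
  ON m.group_id = t.group_id
 AND m.max_item_id = t.item_id;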

Nested with - Incorrect syntax near the keyword 'WITH'

Posted: 22 Sep 2013 07:16 AM PDT

With inside with

This is ok

WITH CS AS (
    SELECT 'ab' as a
)
SELECT * from cs

http://data.stackexchange.com/mathematics/revision/138658/170321/comment-partition-by-question

But, as soon as I wrap it with a second with,

with PS as (
    WITH CS AS (
        SELECT 'ab' as a
    )
    SELECT * from cs
)
select * from PS

I get this error,

Incorrect syntax near the keyword 'WITH'. Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon. Incorrect syntax near ')'.

With inside select

The following is ok

select ab.a from (
    select a from (SELECT 'ab' as a) b
) ab

This also runs:

with cs as (SELECT 'ab' as a)
select a from cs

But wrapping the expression with another select, as this one proposes,

select a from (
    with cs as (SELECT 'ab' as a)
    select a from cs
) ab

fails again with the same error.

I'm doing this on data.stackexchange, and they report that they use MSSQL.
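For what it's worth, T-SQL does not allow a WITH inside another WITH or inside a derived table. A sketch of the usual workaround: chain the CTEs in a single WITH clause, separated by commas, where each CTE may reference the ones defined before it:

WITH CS AS (
    SELECT 'ab' AS a
),
PS AS (
    SELECT * FROM CS
)
SELECT * FROM PS;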

"The FILE SIZE property is incorrect" error while trying to attach "Adventure Works for SQL Server 2012" sample databases

Posted: 22 Sep 2013 08:26 AM PDT

I am trying to attach the sample databases, for example AdventureWorks2012_Data.mdf from the codeplex "Adventure Works for SQL Server 2012" page, to MS SQL Server 2012 (default) instances on virtual machines under MS Windows Server 2008 and MS Windows 7.

Each time I get the error:

The header for file 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2012_Data.mdf' is not a valid database file header. The FILE SIZE property is incorrect. (.Net SqlClient Data Provider)

------------------------------ For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=11.00.3128&EvtSrc=MSSQLServer&EvtID=5172&LinkId=20476

What can be done to attach the MS SQL Server 2012 sample databases?

I also tried to install these sample databases as the script(s), and always get an error there too, a different one, about which I shall ask in a separate question.
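One approach often suggested for the data-file-only AdventureWorks downloads, sketched here without any claim that it fixes this particular header error: attach the .mdf alone and let the server rebuild the log (the service account must have read/write access to the file):

CREATE DATABASE AdventureWorks2012
    ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2012_Data.mdf')
    FOR ATTACH_REBUILD_LOG;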

Multiple filters with joins

Posted: 22 Sep 2013 11:35 AM PDT

I just read this question: SQL Multiple Filter - Optimal Way

I tried this with Northwind. I filter like this:

declare @supplier int = 1
declare @category int = 1

select * from products p
WHERE (@supplier = '' or SupplierId = @supplier)
  AND (@category = '' or CategoryId = @category)

This seems to work as expected; I get two hits: Chai and Chang. Now I want to add a ShipPostalCode filter:

declare @supplier int = 1
declare @category int = 1
declare @shippostal nvarchar(50) = '01307'

select * from products p
inner join [Order Details] od on p.ProductId = od.ProductId
inner join Orders o on o.OrderId = od.OrderId
  AND (@category = '' or CategoryId = @category)
  AND (@shippostal = '' or ShipPostalCode = @shippostal)

Because of the join I now have multiple rows for Chai and Chang. These filters come from users making selections in dropdown lists, so even when a ship postal code is chosen, I still need a single hit per product.

distinct productName  

will give me one row per product, but that's not the way to solve it, is it?

EDIT: Thank you guys. Yes, supplier is an int.

So if there were another many-to-many relation from the Products table, I could just add another AND, I guess.

Sorry, this is a very sad example, but if I add OrderDetails2 and Orders2, where OrderDetails2 is a junction table between Products and Orders2, I can just add another AND (and keep on filtering), like this:

declare @supplier int = 1
declare @category int = 1
declare @shippostal nvarchar(50) = '01307'
declare @city nvarchar(50) = 'boston'

select * from products p
WHERE (@supplier = '' or SupplierId = @supplier)
  AND (@category = '' or CategoryId = @category)
  AND ProductId in
  (
    Select od.ProductId from [Order Details] od
    inner join Orders o on o.OrderId = od.OrderId
    Where (@shippostal = '' or ShipPostalCode = @shippostal)
  )
  AND ProductId in
  (
    Select od.ProductId from OrderDetails2 od
    inner join Orders2 o on o.OrderId = od.OrderId
    Where (@city = '' or City = @city)
  )

As you can see, the Orders2 table has a column 'City'. What do you think?
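An EXISTS form of the same per-relation filter, sketched against the same Northwind names, behaves like the IN version and also returns one row per product:

select * from products p
WHERE (@supplier = '' or p.SupplierId = @supplier)
  AND (@category = '' or p.CategoryId = @category)
  AND (@shippostal = '' or EXISTS
  (
    Select 1 from [Order Details] od
    inner join Orders o on o.OrderId = od.OrderId
    Where od.ProductId = p.ProductId
      AND o.ShipPostalCode = @shippostal
  ))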

How to give permissions to 90+ SQL Servers without tweaking in each of them?

Posted: 21 Sep 2013 10:28 PM PDT

There are 90+ MS SQL Servers (SQL Server 2005, 2008 and 2012), all in the same Windows domain (but physically all over the world).

How can I give a domain user (administrative or otherwise) access/permissions to their instances and databases without creating and configuring accounts and permissions for that user (or group) separately on each of the 90+ servers, their instances and their databases?
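One common pattern, sketched with a hypothetical group name: grant access to a domain group rather than individual users, and push the one-time per-server T-SQL to all instances at once with a Central Management Server (or registered servers) multi-server query:

-- Runs identically on 2005/2008/2012; executed once per instance
-- via a CMS multi-server query window.
CREATE LOGIN [CONTOSO\SqlAdmins] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = 'CONTOSO\SqlAdmins', @rolename = 'sysadmin';

After that, membership is managed purely in Active Directory, with no further per-server tweaking.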

Database Tuning Advisor about Indexing

Posted: 22 Sep 2013 08:40 PM PDT

How does DTA make an index recommendation? What is the technology behind that? If we could use the same technique on our side, we could shorten the whole tuning-recommendation process, which goes like:

  1. Create a trace file using SQL Profiler.
  2. Use that trace file in DTA.
  3. Then look for recommendations.
  4. Follow the recommendations and finally see the results through the query execution plan.

So I just wanted to know whether there is any way to predict what index will be recommended.
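For background: DTA works by asking the query optimizer to cost queries against hypothetical indexes (the same remnants discussed in the "Dropping Hypothetical Indexes" question below), which is hard to reproduce by hand. A lighter-weight peek at similar raw data, sketched from the missing-index DMVs that the optimizer populates as it compiles queries:

SELECT d.statement AS table_name,
       d.equality_columns, d.inequality_columns, d.included_columns,
       s.user_seeks, s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
  ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;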

What am I doing wrong in my MySQL database?

Posted: 22 Sep 2013 02:01 PM PDT

I'm creating a trigger that deletes rows from a table when new rows are inserted into it. However, when the trigger executes I get the following message:

MYSQL ERROR: Can't update table 'player_viplist' in stored function/trigger because it is already used by statement which invoked this stored function/trigger

What do I need to do for this trigger to work?

DROP TRIGGER dele;

DELIMITER |

CREATE DEFINER = 'root'@'localhost' TRIGGER dele
    AFTER INSERT
    ON player_viplist
    FOR EACH ROW
BEGIN
    DELETE FROM player_viplist;
END|

DELIMITER ;
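For context: MySQL forbids a trigger from modifying the table that fired it, so no rewrite of this trigger can work as-is. One commonly suggested alternative, sketched here with the same wholesale DELETE the trigger attempted, is a scheduled event (requires the event scheduler to be on):

SET GLOBAL event_scheduler = ON;

CREATE EVENT prune_viplist
    ON SCHEDULE EVERY 1 MINUTE
    DO
        DELETE FROM player_viplist;  -- allowed here, unlike inside the trigger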

Advice for SQL Server 2000 replication

Posted: 22 Sep 2013 07:38 PM PDT

I have software that runs on SQL Server 2000.

Can you tell me if it's possible to synchronize a branch database with its server without human intervention, i.e. programmatically? If so, where do I start?

MySQL row does not persist, but primary key with auto-increment is incremented

Posted: 22 Sep 2013 05:22 PM PDT

I have a MySQL database that is acting oddly. I insert a new row and observe that for a brief period of time (less than 30 seconds), the row persists as expected in the table. After this time, though, the row vanishes (despite no other queries being executed).

This table has an integer ID as a primary key, with auto-increment set, and the primary key is auto-incremented as expected for new rows. This leads me to believe there is not some kind of transactional rollback (and my table is MyISAM, so that shouldn't be possible anyways) or anything else that is somehow reverting the database to a previous state.

What logs should I be checking to see what is going on here? The contents of my '/var/log/mysql/error.log' are below, but I don't see anything unusual.

120815 21:01:01 [Note] Plugin 'FEDERATED' is disabled.
120815 21:01:02  InnoDB: Initializing buffer pool, size = 8.0M
120815 21:01:02  InnoDB: Completed initialization of buffer pool
120815 21:01:03  InnoDB: Started; log sequence number 0 44233
120815 21:01:03 [Note] Event Scheduler: Loaded 0 events
120815 21:01:03 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.63-0ubuntu0.11.10.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
120815 21:01:04 [ERROR] /usr/sbin/mysqld: Table './appname/users' is marked as crashed and should be repaired
120815 21:01:04 [Warning] Checking table:   './appname/users'
120815 21:10:34 [Note] /usr/sbin/mysqld: Normal shutdown
120815 21:10:34 [Note] Event Scheduler: Purging the queue. 0 events
120815 21:10:34  InnoDB: Starting shutdown...
120815 21:10:39  InnoDB: Shutdown completed; log sequence number 0 44233
120815 21:10:39 [Note] /usr/sbin/mysqld: Shutdown complete

I noted the 'crashed' mark on the appname/users table, but mysqlcheck suggests the table is OK.

Any thoughts?
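Given the crash warning in the log and the MyISAM engine, it may be worth re-running the checks explicitly; a sketch, assuming the current database is appname (MEDIUM forces a row scan rather than the quick header check mysqlcheck may have done):

CHECK TABLE users MEDIUM;   -- scans rows and verifies the index tree
REPAIR TABLE users;         -- rebuilds the table if the check reports problems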

Fixing wildcard expansion resulting in too many terms error (DRG-51030)

Posted: 22 Sep 2013 04:22 PM PDT

How can I resolve the wildcard_maxterms problem in Oracle 10g?

My index creation syntax is:

begin
    ctx_ddl.drop_preference('SUBSTRING_PREF');
end;
/

begin
    ctx_ddl.create_preference('SUBSTRING_PREF', 'BASIC_WORDLIST');
    ctx_ddl.set_attribute('SUBSTRING_PREF', 'SUBSTRING_INDEX', 'TRUE');
end;
/

begin
    ctx_ddl.drop_preference('bus_obj1_lexer');
end;
/

begin
    ctx_ddl.create_preference('bus_obj1_lexer', 'BASIC_LEXER');
    ctx_ddl.set_attribute('bus_obj1_lexer', 'base_letter', 'YES');
    ctx_ddl.set_attribute('bus_obj1_lexer', 'mixed_case', 'YES');
    ctx_ddl.set_attribute('bus_obj1_lexer', 'printjoins', ';,@_');
    ctx_ddl.set_attribute('bus_obj1_lexer', 'skipjoins', '-');
end;
/

create index BUS_OBJ_FTS_Name on BUSINESS_OBJECTS1(name)
    indextype is ctxsys.context
    parameters ('wordlist SUBSTRING_PREF
                 MEMORY 128M
                 DATASTORE CTXSYS.DEFAULT_DATASTORE
                 SECTION GROUP CTXSYS.AUTO_SECTION_GROUP
                 STOPLIST CTXSYS.EMPTY_STOPLIST
                 LEXER bus_obj1_lexer
                 SYNC (ON COMMIT)');

And my query is:

select * from BUSINESS_OBJECTS1 where contains(name,'%WIN%')>0 and rownum<=100;  

There are 15 million rows in that table, and more than 50000 rows match that query. I have set wildcard_maxterms=50000 (the default is 20000), but I'm still getting this error:

DRG-51030: wildcard query expansion resulted in too many terms.

Can anybody help me solve this error?
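A sketch of where the attribute has to live for it to take effect; these are my assumptions about the cause, not a verified fix: WILDCARD_MAXTERMS must be set on the wordlist preference itself, and the index rebuilt so the change applies:

begin
    ctx_ddl.set_attribute('SUBSTRING_PREF', 'WILDCARD_MAXTERMS', '50000');
end;
/

alter index BUS_OBJ_FTS_Name rebuild
    parameters ('replace wordlist SUBSTRING_PREF');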

How to split data files across multiple hard drives?

Posted: 22 Sep 2013 03:22 AM PDT

I'm asking for advice concerning the point when MySQL becomes a large (maybe medium for some) system. We have a MySQL database (InnoDB engine) that grows in gigabytes. It's already some hundreds of gigs, and we definitely know it will keep growing. What would you advise doing when the size of the data exceeds the drive space? I'm pretty sure the data will grow past one terabyte, two, and so on later. I've found two different solutions to go with:

1) either use the 'innodb_file_per_table' option and run table optimization often. Question for point 1: at some point all those different .ibd files could outgrow the disk anyway, so is there a way (other than symlinks) to move some files to another drive?

2) or add more drives to the list in the 'innodb_data_file_path' option. Question for point 2: is that a good enough solution?

I would appreciate it if you could share any other solutions that have worked for you in large systems: some sort of sharding, or something else?

UPDATED: Title of question updated - hopefully new one is more concrete.
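For reference, a sketch of option 2 in my.cnf form, with hypothetical mount points and sizes; the shared tablespace may be declared as a list of files on different drives, with only the last one allowed to autoextend:

[mysqld]
# An empty home dir lets each data file use an absolute path on its own drive;
# the sizes here are illustrative only.
innodb_data_home_dir =
innodb_data_file_path = /mnt/disk1/ibdata1:500G;/mnt/disk2/ibdata2:500G:autoextend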

export (mysqldump) databases from MySQL version 4.0?

Posted: 22 Sep 2013 10:22 AM PDT

Goal: export the schema and data from a MySQL version 4.0.16 DB.

Background:

I was asked to migrate a database from MySQL version 4.0.16. The database was installed on a Windows XP machine, and I just could not find the mysqldump tool on that machine.

I then used PuTTY to connect to a Linux machine that has mysqldump. I ran the command, but got an error:

-bash-4.1$ mysqldump --databases db_2_dump --host='myhost' --port='6012' -u <user> -p > db-export.sql
Enter password:
mysqldump: Error: 'You have an error in your SQL syntax.  Check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILE' when trying to dump tablespaces

I looked it up in the MySQL reference: INFORMATION_SCHEMA is new to version 5 and above.

This means the mysqldump on the linux box is incompatible with the production MySQL server which is on v4.0.16.

I tried to download the MySQL 4.0 server and install it on a Windows machine so that I could get hold of a compatible version, but searching the MySQL website found nothing older than 5.0.

(I also tried MySQL Workbench to connect to this DB. Failed: "Unknown character set utf8".)

So how can I export the schema and data from this legacy MySQL DB?
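One thing that may be worth trying before hunting down an old binary, assuming the Linux box's mysqldump is 5.1.14 or later: the --no-tablespaces option skips exactly the INFORMATION_SCHEMA tablespace query that the 4.0 server cannot answer:

mysqldump --no-tablespaces --databases db_2_dump --host='myhost' --port='6012' -u <user> -p > db-export.sql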

Optimizing mysql queries running on 50 million rows

Posted: 22 Sep 2013 07:39 PM PDT

Is there any technique/advice in order to optimize the performance of the queries below?

According to my needs, my average DB size will be approximately 30 million rows per day, so every second saved will make a huge difference.

My DB engine is InnoDB and I'm using a 1-core CPU with 2GB RAM.

Exec Time : 7,5 sec (5,5 mil. rows)

SELECT vpn_group, username, from_interface_addr_ip
FROM SystemEventsR
WHERE (timestamp > ( NOW( ) - INTERVAL 10 MINUTE ) AND
    SysLogTagflag=1 AND
    username !='')
GROUP BY username

Exec Time : 88,4 sec (5,7 mil. rows)

SELECT syslogtagid, DeviceReportedTime, class, definition, SystemEventsR.SysLogTag,
    COUNT(id) AS Records,
    explanation, action
FROM SystemEventsR, oa_syslogtags
WHERE (SystemEventsR.SysLogTag = oa_syslogtags.syslogtag AND
    flagid = 1 AND DATE(timestamp) = DATE(NOW()))
GROUP BY SystemEventsR.SysLogTag
ORDER BY Records DESC

Any help would be appreciated.
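Two things stand out, sketched below as assumptions to test rather than guaranteed wins: DATE(timestamp) = DATE(NOW()) wraps the column in a function, which prevents index use, and neither query has an obvious supporting composite index (the index names here are invented):

-- Rewrite the date filter as a range so an index on timestamp can be used:
--   WHERE timestamp >= CURDATE() AND timestamp < CURDATE() + INTERVAL 1 DAY

-- Hypothetical composite indexes matching the two WHERE clauses:
ALTER TABLE SystemEventsR
    ADD INDEX idx_flag_user_time (SysLogTagflag, username, timestamp),
    ADD INDEX idx_tag_time (SysLogTag, timestamp);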

Dropping Hypothetical Indexes

Posted: 22 Sep 2013 01:22 PM PDT

In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to clean up; however, when I try to drop an object I always receive an error telling me that I cannot drop it "because it does not exist or you do not have permission". I am a full sysadmin on the server, so I would expect to have rights to do anything.

I've tried this with both DROP STATS and DROP INDEX statements but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around in this, I just noticed that if I right-click on the object, both the 'Script As' and 'DELETE' options are greyed out.

Avoiding performance hit from GROUP BY during FULLTEXT search?

Posted: 22 Sep 2013 11:22 AM PDT

Is there any clever way to avoid the performance hit from using group by during fulltext search?

SELECT p.topic_id, min(p.post_id)
FROM forum_posts AS p
WHERE MATCH (p.post_text) AGAINST ('baby shoes' IN BOOLEAN MODE)
GROUP BY p.topic_id
LIMIT 20;

In this example it's fetching the lowest post_id for unique topic_ids that match the text.

With the group by to find the min, it's taking 600ms in a million row database, with about 50K rows examined.

If I remove the MIN but leave the GROUP BY, it's the same slowness, so it's the GROUP hit.

I suspect this is because it can only use one index, the fulltext one?

key: post_text | Using where; Using temporary; Using filesort

Query_time: 0.584685  Lock_time: 0.000137  Rows_sent: 20  Rows_examined: 57751
Full_scan: No  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0

Without the GROUP BY it's 1ms, so this has to be filesort speed?

(I've removed ORDER BY and everything else to isolate where the hit is)

Thanks for any insight and ideas.

(using MyISAM under MariaDB, if it matters)

AWS performance of RDS with provisioned IOPS vs EC2

Posted: 22 Sep 2013 12:22 PM PDT

Has anyone done a performance comparison of AWS RDS with the new provisioned IOPS vs EC2? I've found plenty of comparisons of non-high-IOPS RDS vs EC2, but nothing with the new high-IOPS feature in RDS.

sp_startpublication_snapshot Parameter(s)

Posted: 22 Sep 2013 07:22 PM PDT

I am creating a stored procedure that:

  1. Restores a DB from a .bak, giving the .mdf and .ldf a new name (so we can have several copies of the same DB up)
  2. (If specified in the SP's parameter) Creates three merge replication publications
  3. (What I need help doing) Generating the snapshots for the three publications using sp_startpublication_snapshot

Here is my new brick wall... On this DB server, I have a 'shell' DB that they will be running the SP from, which has a history table so I can keep track of who created/deleted databases using my SPs... The only parameter for sp_startpublication_snapshot is @publication... I can give it the publication name, but since I am not running it from the publishing database, how do I specify the publishing database?

i.e.: the publication shows up as:

[WC48_Database1]: upb_Inspection_PrimaryArticles  

but I am running the script from the database [WC_QACatalog]

Any ideas about how to accomplish this?

Thank you, Wes
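One idea to try, sketched with the names from the question and not tested here: the replication procedures live in the publishing database, so a three-part name may let the shell DB invoke the procedure in the right database context:

EXEC [WC48_Database1].dbo.sp_startpublication_snapshot
    @publication = N'upb_Inspection_PrimaryArticles';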

How do I determine if FORCE LOGGING is on for the entire database?

Posted: 21 Sep 2013 10:32 PM PDT

How can I determine if force logging is on at the database level? (I'm not asking about finding out if force logging is on for particular tablespaces.)

What rights are needed to perform this check? (I have SELECT_CATALOG_ROLE but, other than that, pretty limited rights.)

I've tried to find out how to query this from the Reference and the Administrator's Guide manuals.

Version:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
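A sketch of the usual check: V$DATABASE carries a FORCE_LOGGING column, and SELECT_CATALOG_ROLE should be sufficient to read it:

SELECT force_logging FROM v$database;
-- Returns YES when force logging is enabled database-wide, NO otherwise.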

How to show the column names of a table with a simple sql query?

Posted: 22 Sep 2013 04:56 AM PDT

The case is simple: you have a MySQL database where you have only an SQL query interface, and you want to discover the database structure with queries. You can list tables with the show tables; command, but how do you see the individual column names?

(A SELECT statement shows Empty set if no data is present, so it cannot be used for this.)
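A sketch of the standard options, with table_name as a placeholder:

SHOW COLUMNS FROM table_name;   -- names, types, keys, defaults
DESCRIBE table_name;            -- shorthand for the same output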

What are the main differences between InnoDB and MyISAM?

Posted: 21 Sep 2013 10:40 PM PDT

What are the main differences between InnoDB and MyISAM?
