Sunday, July 14, 2013

[how to] SQL Database Hosting Solutions?

SQL Database Hosting Solutions?

Posted: 14 Jul 2013 05:30 PM PDT

I have a client who is running a local SQL Server (serving 2 computers) for a client application. They are considering moving the local database to a hosted one.

Everywhere I look (Amazon RDS, Azure, etc.), the prices seem expensive. Their database has fewer than 1,000 records. Any suggestions? Am I looking in the wrong places?

SQL Server 2005 onwards - Capturing a Baseline

Posted: 14 Jul 2013 10:30 AM PDT

I want to design a process to capture baseline data from multiple SQL Servers (2005 onwards) into a single database, from which I can get all the information and alerts. As of now I want to capture the following counters:

  1. Free space available on disks
  2. Database file and log space available
  3. Wait statistics
  4. CPU and memory counters

Is there any free tool that I can use (and modify if needed), or any useful article that would help me set this up?
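To make the idea concrete, here is a minimal sketch of the collection side, assuming a central database and snapshot table that you would create yourself (BaselineDB and WaitStatsSnapshot below are made-up names, not from any article):

```sql
-- Hypothetical central baseline table; names are placeholders.
CREATE TABLE BaselineDB.dbo.WaitStatsSnapshot (
    server_name   sysname      NOT NULL,
    capture_time  datetime     NOT NULL DEFAULT GETDATE(),
    wait_type     nvarchar(60) NOT NULL,
    waiting_tasks bigint       NOT NULL,
    wait_time_ms  bigint       NOT NULL
);

-- Run on a schedule (e.g. a SQL Agent job) on each monitored instance.
-- sys.dm_os_wait_stats is cumulative since restart, so deltas between
-- snapshots give the per-interval picture.
INSERT INTO BaselineDB.dbo.WaitStatsSnapshot
    (server_name, wait_type, waiting_tasks, wait_time_ms)
SELECT @@SERVERNAME, wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;
```

The same pattern extends to file sizes (sys.database_files) and disk free space (xp_fixeddrives on older versions).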

At the moment I am following the article by Erin Stellato on SQLServerCentral.com.

Any help would be greatly appreciated.

Thanks, Subhro

Finding rows for a specified date range

Posted: 14 Jul 2013 02:08 PM PDT

I have a table which stores which teacher (teacherid) works for which group (groupid) of pupils starting from the date startdate:

CREATE TABLE `pupilgroupteacher` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `groupid` smallint(5) unsigned NOT NULL,
  `startdate` date NOT NULL,
  `teacherid` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `date` (`groupid`,`startdate`),
  KEY `teacher` (`teacherid`),
  KEY `group` (`groupid`),
  CONSTRAINT `fk_pupilgroupteacher_2` FOREIGN KEY (`groupid`) REFERENCES `pupilgroup` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT `fk_pupilgroupteacher_1` FOREIGN KEY (`teacherid`) REFERENCES `employee` (`personid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin

Having a teacher ID and a month, I need to find all groups for which this teacher worked (or is going to work) at least one day in the given month.

Note: For a given teacherid there may be more than one startdate (if, for example, a teacher worked for a group, was replaced with another teacher and then replaced back on a later startdate).
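One possible shape for that query (an untested sketch against the table above; @teacherid, @first and @last stand for the teacher ID and the first/last day of the month) is to treat each row as valid until the next startdate for the same group, then test for overlap with the month:

```sql
SELECT DISTINCT t.groupid
FROM pupilgroupteacher AS t
WHERE t.teacherid = @teacherid
  AND t.startdate <= @last        -- assignment began on or before the month's end
  AND COALESCE(
        (SELECT MIN(n.startdate)  -- next assignment for the same group, if any
         FROM pupilgroupteacher AS n
         WHERE n.groupid = t.groupid
           AND n.startdate > t.startdate),
        '9999-12-31'
      ) > @first;                 -- and had not ended before the month started
```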

How does log shipping know where to start?

Posted: 14 Jul 2013 12:39 PM PDT

I have set up log shipping between two SQL Servers (SQL Server 2008 and SQL Server 2012).

First I took a full backup of the primary and restored it on the secondary, and then I set up the log shipping. The initial backup and restore took a few hours, and these are the things I need to understand:

  • How does the log shipping process know from which point to start shipping logs?
  • Does it start from when the last backup/restore happened, or from the moment log shipping itself is set up?
  • Can I be sure that no information was lost during those few hours?

GeomFromText in trigger has error

Posted: 14 Jul 2013 05:50 PM PDT

I have this trigger:

DELIMITER $$
CREATE TRIGGER geometrytrigger BEFORE INSERT ON users
FOR EACH ROW BEGIN
  SET NEW.user_geometry=GeomFromText(NEW.user_geometry);
END$$
DELIMITER ;

when I insert a row with this query:

INSERT INTO `users` (`id`, `name`, `email`, `password`, `user_geometry`) VALUES (NULL, 'arash', 'arash@host.com', 'sssssss', 'POINT(-0.23 169.93)');  

this error has occurred:

 #1416 - Cannot get geometry object from data you send to the GEOMETRY field  
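For reference: MySQL validates the value against the GEOMETRY column type before the BEFORE INSERT trigger gets a chance to run, so the string never reaches GeomFromText(). The usual workaround (a sketch, using the same data as the question) is to do the conversion in the INSERT itself rather than in a trigger:

```sql
INSERT INTO `users` (`id`, `name`, `email`, `password`, `user_geometry`)
VALUES (NULL, 'arash', 'arash@host.com', 'sssssss',
        GeomFromText('POINT(-0.23 169.93)'));
```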


How can we check for data integrity when we upgrade from MySQL 4.1 to MySQL 5.5?

Posted: 14 Jul 2013 02:57 AM PDT

By data integrity, I mean:

  • Schema integrity
  • Data (record level) integrity

Checksum was an option, but since the storage format for some data types in MySQL 5.5 has been changed, it won't be reliable.

Another option is to parse the XML file generated by mysqldump, but since the file is huge (several GB), is it worth it?

Is there any other way you have used to verify this?

School database design

Posted: 14 Jul 2013 02:56 AM PDT

I am trying to design a database management system with my PHP and intermediate MySQL knowledge, keeping normalization in mind.

These are the assumptions:

  1. The school has many students from grade 1 to grade 6.
  2. Each grade is divided into classes (e.g. we have grade 1 A, grade 1 B, grade 1 C) due to the class size.
  3. Grades 1 to 3 offer the same subjects, but from grade 4 to grade 6 pupils can choose which subjects to take.
  4. Many teachers teach ONLY one subject BUT can teach MORE THAN one grade.

As an intermediate programmer I got a bit tangled up. So far I have been able to create the following tables and columns:

students{(id),(name),(name),(parent_id)} - student details
parents{(id),(name),(email),(phone),(address)} - parent details
teachers{(id),(name),(email),(subject_id)} - teacher details
subjects{(id),(name),(description)} - subject details
attendance{(date),(status),(student_id)} - attendance of the student with id student_id;
                                           status is boolean: TRUE = present, FALSE = absent

This is how far I have gone.

Now I want a design where a student can be linked to all the subjects he/she is taking, and a teacher linked to all the grades (no need to specify classes) he is teaching. How can I do that?
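One common direction (a sketch only; the junction-table and column names below are suggestions, not part of the original design) is to add many-to-many link tables:

```sql
-- Which subjects a student takes (needed for grades 4-6, where choices vary).
CREATE TABLE student_subjects (
    student_id INT NOT NULL,
    subject_id INT NOT NULL,
    PRIMARY KEY (student_id, subject_id),
    FOREIGN KEY (student_id) REFERENCES students (id),
    FOREIGN KEY (subject_id) REFERENCES subjects (id)
);

-- Which grades a teacher teaches; the single subject per teacher
-- already lives on teachers.subject_id.
CREATE TABLE teacher_grades (
    teacher_id INT NOT NULL,
    grade      TINYINT NOT NULL,  -- 1 to 6
    PRIMARY KEY (teacher_id, grade),
    FOREIGN KEY (teacher_id) REFERENCES teachers (id)
);
```

A simple join on either table then lists a student's subjects or a teacher's grades.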

I will appreciate it if I can be pointed in the right direction.

Postgres database locked up

Posted: 13 Jul 2013 10:10 PM PDT

I was running a process to update a bunch of rows in a Postgres database, and it locked up. Now I can't select anything out of it. None of my processes are "idle in transaction" when I execute ps auxwww | grep ^postgres

Also the output of my locks select is:

-[ RECORD 1 ]------+----------------
locktype           | relation
database           | 16384
relation           | 11000
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 3/4
pid                | 18655
mode               | AccessShareLock
granted            | t
-[ RECORD 2 ]------+----------------
locktype           | virtualxid
database           |
relation           |
page               |
tuple              |
virtualxid         | 3/4
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 3/4
pid                | 18655
mode               | ExclusiveLock
granted            | t

but I'm not sure what I can do with that information. I also restarted my db, with no luck. Any ideas?
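One standard diagnostic (not specific to this case) is to join pg_locks to pg_stat_activity, which shows which session holds or waits for each lock and what it is running:

```sql
-- On PostgreSQL 9.2+; older releases use procpid and current_query
-- instead of pid and query.
SELECT l.pid, l.locktype, l.mode, l.granted, a.state, a.query
FROM pg_locks AS l
JOIN pg_stat_activity AS a ON a.pid = l.pid
ORDER BY l.granted, l.pid;
```

Any row with granted = f points at a blocked session; the granted rows with the same locked object identify the blocker.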

Checking existence with EXISTS outperforms COUNT! ... Or does it?

Posted: 14 Jul 2013 07:26 PM PDT

I've often read that checking for the existence of a row should always be done with EXISTS rather than with COUNT.

Yet in several recent scenarios I've measured a performance improvement when using COUNT.
The pattern goes like this:

LEFT JOIN (
    SELECT
        someID
        , COUNT(*)
    FROM someTable
    GROUP BY someID
) AS Alias ON (
    Alias.someID = mainTable.ID
)

I'm not familiar with methods for seeing what's happening "inside" SQL Server, so I was wondering if there is an unheralded flaw in EXISTS that would make perfect sense of the measurements I've taken (could EXISTS be RBAR?!).

Do you have an explanation for this phenomenon?

EDIT:

Here's a full script you can run:

SET NOCOUNT ON
SET STATISTICS IO OFF

DECLARE @tmp1 TABLE (
    ID INT UNIQUE
)

DECLARE @tmp2 TABLE (
    ID INT
    , X INT IDENTITY
    , UNIQUE (ID, X)
)

; WITH T(n) AS (
    SELECT
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM master.dbo.spt_values AS S
)
, tally(n) AS (
    SELECT
        T2.n * 100 + T1.n
    FROM T AS T1
    CROSS JOIN T AS T2
    WHERE T1.n <= 100
    AND T2.n <= 100
)
INSERT @tmp1
SELECT n
FROM tally AS T1
WHERE n < 10000

; WITH T(n) AS (
    SELECT
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM master.dbo.spt_values AS S
)
, tally(n) AS (
    SELECT
        T2.n * 100 + T1.n
    FROM T AS T1
    CROSS JOIN T AS T2
    WHERE T1.n <= 100
    AND T2.n <= 100
)
INSERT @tmp2
SELECT T1.n
FROM tally AS T1
CROSS JOIN T AS T2
WHERE T1.n < 10000
AND T1.n % 3 <> 0
AND T2.n < 1 + T1.n % 15

PRINT '
COUNT Version:
'

WAITFOR DELAY '00:00:01'

SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT
    T1.ID
    , CASE WHEN n > 0 THEN 1 ELSE 0 END AS DoesExist
FROM @tmp1 AS T1
LEFT JOIN (
    SELECT
        T2.ID
        , COUNT(*) AS n
    FROM @tmp2 AS T2
    GROUP BY T2.ID
) AS T2 ON (
    T2.ID = T1.ID
)
WHERE T1.ID BETWEEN 5000 AND 7000
OPTION (RECOMPILE) -- Required since table are filled within the same scope

SET STATISTICS TIME OFF

PRINT '
EXISTS Version:'

WAITFOR DELAY '00:00:01'

SET STATISTICS TIME ON

SELECT
    T1.ID
    , CASE WHEN EXISTS (
        SELECT 1
        FROM @tmp2 AS T2
        WHERE T2.ID = T1.ID
    ) THEN 1 ELSE 0 END AS DoesExist
FROM @tmp1 AS T1
WHERE T1.ID BETWEEN 5000 AND 7000
OPTION (RECOMPILE) -- Required since table are filled within the same scope

SET STATISTICS TIME OFF

On SQL Server 2008 R2 (Windows 7 64-bit) I get this result:

COUNT Version:

Table '#455F344D'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#492FC531'. Scan count 1, logical reads 30, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 81 ms.

EXISTS Version:

Table '#492FC531'. Scan count 1, logical reads 96, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#455F344D'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 76 ms.

Constraint based rules engine

Posted: 14 Jul 2013 10:03 AM PDT

I am evaluating product-configuration software systems that tout constraint-based versus rules-based engines. Can someone explain to a non-expert (maybe with an example or two) what the difference is? My guess is that a constraint-based engine uses rules defining relationships between objects, while a rules-based engine is more like the WHERE clause of a SQL query.

Oracle schema migration to new database with zero downtime

Posted: 14 Jul 2013 07:03 AM PDT

I have a problem: I have two production schemas in one database, each serving a different application.

I need to migrate (move) one schema to an entirely new database that will be dedicated to that application, with zero downtime, so EXPORT/IMPORT cannot be used.

I even have a license for Active Data Guard. But if I set up Data Guard from the current database to the new one, should I switch over, fail over, or something else, so that the current primary continues in its primary role and only the new database changes its role from standby to fully operational primary? Then we simply point the application at the new database and start it. Thanks for your help.

MySQL HDD activity with empty process list

Posted: 14 Jul 2013 01:04 PM PDT

What does this mean, and how can I find out what MySQL is doing? I use InnoDB, and my database is huge. This happens when I disconnect my app.

Thanks.

How do I use subquery on the same table in MySQL?

Posted: 14 Jul 2013 06:03 AM PDT

I have a query like this, which takes a really long time to run. The table has around 4 million rows.

DELETE FROM TABLE WHERE value_was IS NULL OR value_was <= value_now;  

I'm hoping I could create an index on (value_was, value_now) so I could do something like:

DELETE FROM TABLE WHERE
  ID1 IN (SELECT ID1 FROM TABLE WHERE value_was IS NULL)
  OR ID2 IN (SELECT ID2 FROM TABLE WHERE value_was <= value_now);

This table doesn't have a primary key; it has two composite keys. And I guess I cannot use the same table in a subquery, but how do I improve the performance of the first query?
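For what it's worth, MySQL's restriction on referencing the DELETE target in a subquery can be worked around by wrapping the subquery in a derived table, which forces it to be materialized first (a sketch; ID1 and ID2 stand for the key columns as in the question):

```sql
DELETE FROM `TABLE`
WHERE ID1 IN (
        SELECT ID1 FROM (
            SELECT ID1 FROM `TABLE` WHERE value_was IS NULL
        ) AS t1
      )
   OR ID2 IN (
        SELECT ID2 FROM (
            SELECT ID2 FROM `TABLE` WHERE value_was <= value_now
        ) AS t2
      );
```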

Thanks very much; any suggestion would be much appreciated.

InnoDB Tablespace critical error in great need of a fix

Posted: 14 Jul 2013 05:03 AM PDT

Link to screenshot : http://www.nouvellesduquartier.com/i/1/p/Munin_%20Critical_MySql_InnoDB_.JPG (The value reported is outside the allowed range - Byte free, free, gauge, warn, critic)

Question: Could the error shown on the screenshot be the reason why my site is very slow?

If so, I really need help to fix it, since I am far from being an engineer! Thank you in advance.

Getting "BadImageFormatException" when attempting to connect through Visual Studio to Oracle

Posted: 14 Jul 2013 05:04 PM PDT

I get the following message: "Attempt to load Oracle client libraries threw BadImageFormatException. This problem will occur when running in 64 bit mode with the 32 bit Oracle client components installed." when I attempt to create a connection to Oracle, from both Visual Studio 2008 and 2012, on my Windows 8 64-bit workstation.

I'm not running a web app when this message happens; I'm only trying to establish an Oracle connection from Visual Studio. None of the solutions posted on the internet that resolved this issue for others seem to apply to my circumstances.

  • I did not have this issue when I was using Windows 7 64-bit.
  • I have set the Active Solution Platform to both Any CPU and x86 when I have a solution open, but I still get the same message with either setting.
  • I have the latest Oracle 32-bit client, since the 64-bit Oracle client won't work with 32-bit Visual Studio.
  • At this point I am only trying to connect to Oracle from Visual Studio, not run a web application, so IIS settings have nothing to do with this issue.

My environment is

  • Windows 8 64-bit
  • Visual Studio 2012 and 2008
  • Oracle 32-bit client (but I also tried the 64-bit client)
  • Oracle 11.2.0.3 database
  • Active Solution Platform tried with both Any CPU and x86

Any suggestions would be appreciated.

phpMyAdmin Error After Upgrade to 3.5.8.1

Posted: 14 Jul 2013 06:23 AM PDT

I'm a newbie with MySQL; I only use it with PHP.

To the point: I upgraded phpMyAdmin to 3.5.8.1, and since then I can't access my database and get an error like this:

#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE db_name = 'happy2'  AND table_name = 'foto'  ORDER BY version DESC' at line 1  

All my other databases show the same error.

Can anyone help me?

I'd very much appreciate your answers.

SHOW TABLE STATUS very slow on InnoDB

Posted: 14 Jul 2013 03:04 PM PDT

Recently we've been migrating from MyISAM to InnoDB. I understand that MyISAM uses meta information against each table to track information and such; the database is about 1.8 GB, with 1+ million records across 400 or so tables.

The problem comes when software like phpMyAdmin runs "SHOW TABLE STATUS FROM dbase;", which can take up to 50 seconds to execute. Is there a way to optimise this? (MySQL or MariaDB)
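One frequent culprit worth checking first: by default InnoDB recomputes index statistics whenever table metadata is queried, so a SHOW TABLE STATUS over 400+ tables triggers 400+ re-samples. Disabling that is a single setting (available in MySQL 5.1.17+ and in MariaDB):

```sql
-- Stop InnoDB from refreshing statistics on SHOW TABLE STATUS and
-- information_schema queries; statistics are still refreshed by
-- ANALYZE TABLE and normal background activity.
SET GLOBAL innodb_stats_on_metadata = 0;
```

Adding innodb_stats_on_metadata = 0 to my.cnf makes the change survive a restart.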

Thanks!

Synchronize MySQL databases between local and hosted servers automatically

Posted: 14 Jul 2013 04:03 AM PDT

We have many websites with development, staging and production servers, and many developers working on many projects. We need a solution to synchronize a developer's database with the staging database; once that works, we can move the changes to the live database.

This needs to be fully automatic, so that developers don't have to run the tool every time.

SELECTing multiple columns through a subquery

Posted: 14 Jul 2013 08:03 AM PDT

I am trying to SELECT 2 columns from the subquery in the following query, but I am unable to do so. I tried creating an aliased table, but still couldn't get them.

SELECT DISTINCT petid, userid,
    (SELECT MAX(comDate) FROM comments WHERE petid=pet.id) AS lastComDate,
    (SELECT userid FROM comments WHERE petid=pet.id ORDER BY id DESC LIMIT 1) AS lastPosterID
FROM pet
LEFT JOIN comments ON pet.id = comments.petid
WHERE userid='ABC'
  AND deviceID!='ABC'
  AND comDate>=DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 2 MONTH);

Basically, I am trying to get lastComDate and lastPosterID from the same row - the row which is the latest one in comments for the specific pet. Please suggest how I can get them efficiently.

The above query works, but it seems overkill, as the same row is fetched twice. Moreover, the ORDER BY clause is significantly slower than the aggregate function, as I found while profiling the query. So a solution that avoids sorting would be appreciated.
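One standard way to read both values from the same latest row (sketched against the column names in the question, and untested) is to join comments back to a per-pet MAX(comDate) derived table instead of using two correlated subqueries:

```sql
SELECT p.id AS petid, c.userid AS lastPosterID, c.comDate AS lastComDate
FROM pet AS p
JOIN comments AS c ON c.petid = p.id
JOIN (
    SELECT petid, MAX(comDate) AS maxDate
    FROM comments
    GROUP BY petid
) AS m ON m.petid = c.petid AND m.maxDate = c.comDate
WHERE p.userid = 'ABC'
  AND c.comDate >= DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 2 MONTH);
```

This avoids the ORDER BY ... LIMIT 1 sort entirely; note that ties on comDate would return more than one row per pet.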

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 14 Jul 2013 02:04 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecast is usually done at a higher level in the hierarchy (the Raw Material or Work in Progress level) and has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |  name              |  parent_id
-------------------------------------
1   |  raw material      |  (null)
2   |  work in progress  |  1
3   |  end product 1     |  2
4   |  end product 2     |  2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where prod_id is a foreign key that references id in the PRODUCT_HIERARCHY table.

How should I store the forecast, and what would be a good basic schema for such a requirement?


My idea for selecting orders into 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(sysdate, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(sysdate, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(sysdate, 6);

But this gives weekly buckets starting from today, irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle?
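One possible answer to that last point, sketched rather than tested: TRUNC(d, 'IW') returns the Monday of d's ISO week, so shifting the date by one day before truncating yields Sunday-to-Saturday weeks:

```sql
SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start,  -- Sunday
       TRUNC(delivery_date + 1, 'IW') + 5 AS week_end,    -- Saturday
       COUNT(*)                           AS total_orders
FROM orders_table
WHERE delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP BY TRUNC(delivery_date + 1, 'IW')
ORDER BY week_start;
```

For example, a Saturday delivery lands in the week that began the previous Sunday, because Saturday + 1 day truncates back to that week's Monday.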

Please help designing this database structure.

(will be using Oracle 11g)

SQL Server BPA 2008R2

Posted: 14 Jul 2013 12:04 PM PDT

I have been using SQL Server BPA to get good information from Microsoft. I used the 2005 version, from which I could export the results in CSV format, but I recently got two new servers with SQL Server 2008 R2 installed, and since I can't run BPA 2005 against these, I chose the R2 version. It doesn't have an option to save the report in CSV format, only XML. I tried using Excel to convert it to CSV, but with no luck: even though it displays the details, I can't narrow down the results. Any idea how to convert the results to CSV format?

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 14 Jul 2013 04:04 PM PDT

I have a transactions table with 600,000 records, and I need to list counts by financial year for a dashboard. The table is MyISAM. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, which together with the filesort makes it slow. Is there any way to optimize the query to improve its time?

SELECT COUNT( * ) AS cnt,
       CASE WHEN MONTH( tran_date ) >= 3
            THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
            ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
       END AS financial_year
FROM `transactions1`
WHERE tran_date >= '2010-06-01'
GROUP BY financial_year

Showing rows 0 - 4 (5 total, Query took 1.2095 sec)

id | select_type | table         | type  | possible_keys     | key       | key_len | ref  | rows   | Extra
1  | SIMPLE      | transactions1 | range | PRIMARY,tran_date | tran_date | 8       | NULL | 346485 | Using where; Using index; Using temporary; Using filesort

Keyname      | Type  | Unique | Packed | Field         | Cardinality | Collation
PRIMARY      | BTREE | Yes    | No     | tran_date     | 205720      | A
             |       |        |        | tran_ID       | 617162      | A
coupon_No    | BTREE | No     | No     | coupon_No     | 617162      | A
account_type | BTREE | No     | No     | account_type  | 3           | A
prodCode     | BTREE | No     | No     | prodCode      | 430         | A
             |       |        |        | tran_date     | 308581      | A
tran_date    | BTREE | No     | No     | tran_date     | 205720      | A
cust_ID      | BTREE | No     | No     | cust_ID       | 3265        | A
             |       |        |        | tran_date     | 308581      | A
             |       |        |        | account_type  | 308581      | A
             |       |        |        | points_earned | 617162      | A

Update:

I tried adding partitioning, which was not much help compared with the non-partitioned table. Would replication help for reading this table? There will be more grouping based on the dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the execution time. The query I used is:

SELECT SUM( count )
FROM (
    SELECT COUNT( * ) AS count,
           CASE WHEN MONTH( tran_date ) >= 3
                THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
                ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
           END AS format_date
    FROM transactions1
    GROUP BY tran_date
) AS s
GROUP BY format_date

Showing rows 0 - 4 (5 total, Query took 0.5636 sec)

id | select_type | table         | type  | possible_keys | key       | key_len | ref  | rows   | Extra
1  | PRIMARY     | <derived2>    | ALL   | NULL          | NULL      | NULL    | NULL | 229676 | Using temporary; Using filesort
2  | DERIVED     | transactions1 | index | NULL          | tran_date | 8       | NULL | 617162 | Using index

But when using

SELECT COUNT( * ) AS count,
       CASE WHEN MONTH( tran_date ) >= 3
            THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
            ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
       END AS format_date
FROM transactions1
GROUP BY tran_date

Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

takes less time, since it avoids the SUM(count) over the derived table. Is there any other way to get the sum without using the subquery in MySQL, or can the subquery be optimized to use the index?
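If the goal is just the per-year counts plus their total in one statement, MySQL's WITH ROLLUP modifier may avoid the derived table entirely (a sketch on the original query; the rollup total appears as an extra row with a NULL financial_year):

```sql
SELECT CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS financial_year,
       COUNT(*) AS cnt
FROM transactions1
WHERE tran_date >= '2010-06-01'
GROUP BY financial_year WITH ROLLUP;
```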

oracle local partition index

Posted: 14 Jul 2013 05:31 AM PDT

I have a table like this:

CREATE TABLE employees
  (employee_id NUMBER(4) NOT NULL,
   last_name VARCHAR2(10),
   department_id NUMBER(2))
PARTITION BY RANGE (department_id)
  (PARTITION employees_part1 VALUES LESS THAN (11),
   PARTITION employees_part2 VALUES LESS THAN (21),
   PARTITION employees_part3 VALUES LESS THAN (31));

Are these statements the same?

CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL;

CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL
(
  PARTITION employees_part1,
  PARTITION employees_part2,
  PARTITION employees_part3
);

Thanks,
