Friday, July 5, 2013

[how to] MySQL tried to restart itself 3 or more times, but finally failed

MySQL tried to restart itself 3 or more times, but finally failed

Posted: 05 Jul 2013 09:26 PM PDT

This has happened twice now, possibly for the same reason.

MySQL tried to restart itself 3 or more times, but finally failed.

Here is the MySQL log:

130706 09:43:27 mysqld_safe Number of processes running now: 0
130706 09:43:27 mysqld_safe mysqld restarted
130706  9:43:28  InnoDB: Initializing buffer pool, size = 8.0M
130706  9:43:28  InnoDB: Completed initialization of buffer pool
130706  9:43:29  InnoDB: Started; log sequence number 0 2552936
130706  9:43:29 [Note] Event Scheduler: Loaded 0 events
130706  9:43:29 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.69-log'  socket: '/var/www/data/mysql.sock'  port: 3306  Source distribution
130706 09:43:34 mysqld_safe Number of processes running now: 0
130706 09:43:34 mysqld_safe mysqld restarted
130706  9:43:36  InnoDB: Initializing buffer pool, size = 8.0M
130706  9:43:36  InnoDB: Completed initialization of buffer pool
130706  9:43:36  InnoDB: Started; log sequence number 0 2552936
130706  9:43:36 [Note] Event Scheduler: Loaded 0 events
130706  9:43:36 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.69-log'  socket: '/var/www/data/mysql.sock'  port: 3306  Source distribution
130706 09:43:38 mysqld_safe Number of processes running now: 0
130706 09:43:39 mysqld_safe mysqld restarted
130706  9:43:41  InnoDB: Initializing buffer pool, size = 8.0M
130706  9:43:41  InnoDB: Completed initialization of buffer pool
130706  9:43:41  InnoDB: Started; log sequence number 0 2552936
130706 09:43:42 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

And the following is the content of my.cnf.

The server has 1.5 GB of RAM:

[mysqld]
datadir=/var/www/data
socket=/var/www/data/mysql.sock
user=mysql
symbolic-links=0
query_cache_size=128M
max_connections=512
key_buffer=512M
back_log=200
max_allowed_packet=256M
table_cache=512
sort_buffer_size=4M
myisam_sort_buffer_size=64M
thread_cache=128
wait_timeout=300
thread_concurrency=4
log_output=FILE
slow-query-log=1
slow-query-log-file=/var/www/logs/log-slow-query.log
long_query_time=1

[mysqld_safe]
log-error=/var/www/logs/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

But I can easily start it using "service mysqld start".

Does anyone know what happened? Please help!
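One thing that jumps out (my arithmetic, not from the post): the my.cnf above can ask for far more memory than the server's 1.5 GB, in which case the kernel's OOM killer terminating mysqld would look exactly like this log. A rough lower-bound estimate:

```sql
-- Back-of-envelope worst case for the settings above, in MB.
-- Only the two big global buffers and one per-connection buffer are
-- counted; the real formula includes several more per-thread buffers.
SELECT 512              -- key_buffer
     + 128              -- query_cache_size
     + 512 * 4          -- max_connections * sort_buffer_size
     AS worst_case_mb;  -- 2688 MB on a machine with ~1536 MB of RAM
```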

Cannot change root access in MySQL to %

Posted: 05 Jul 2013 08:43 PM PDT

This is a new MySQL Server install. I created my root password as part of the setup (on CentOS 6.4).

When I connect to the server through a terminal, I can connect to MySQL and issue commands, using my root password.

select current_user;

gives me:

+----------------+
| current_user   |
+----------------+
| root@localhost |
+----------------+

If I do:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION  

I get:

Query OK, 0 rows affected (0.00 sec)  

But when I do:

SHOW GRANTS;  

Here's what I get:

+----------------------------------------------------------------------------------------------------------------+
| Grants for root@localhost                                                                                      |
+----------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*fdfgdgdggfggfgfg' WITH GRANT OPTION |
+----------------------------------------------------------------------------------------------------------------+

Am I not supposed to see another line for root@% in addition to root@localhost?

The real issue I'm having is that I can't connect to MySQL from outside of localhost (as in, outside a terminal session on the server itself). If the MySQL server is not giving root universal access (root@%), that would explain the problem. When I try to connect using PHP (from my local Mac), the following is the returned MySQLi object:

mysqli Object
(
    [affected_rows] =>
    [client_info] =>
    [client_version] => 50008
    [connect_errno] => 2002
    [connect_error] => Connection refused
    [errno] =>
    [error] =>
    [field_count] =>
    [host_info] =>
    [info] =>
    [insert_id] =>
    [server_info] =>
    [server_version] =>
    [stat] =>
    [sqlstate] =>
    [protocol_version] =>
    [thread_id] =>
    [warning_count] =>
)

I realize that granting root access from % is not a great idea, but at this point, I'm trying to figure out why I can't connect to the MySQL server, and when I solve that, I will restrict access.
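Worth noting (my sketch, not from the post): SHOW GRANTS with no FOR clause only lists the grants of the account you are currently logged in as, i.e. root@localhost. To check whether the wildcard account was really created:

```sql
-- Which root accounts exist?
SELECT user, host FROM mysql.user WHERE user = 'root';

-- Grants of the wildcard account specifically
SHOW GRANTS FOR 'root'@'%';
```

Separately, connect_errno 2002 "Connection refused" occurs before authentication, so a missing grant cannot cause it; it usually points at bind-address/skip-networking in the server's my.cnf, or a firewall.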

"Error at line 1: ORA-00942: table or view does not exist." Sqlplus basic use, querying. Unable to retrieve schema from table "Course" or "emp"

Posted: 05 Jul 2013 06:38 PM PDT

I installed Oracle APEX 11g on my computer, which runs 64-bit Windows 7. When I open the command prompt and try to execute the query "select * from course;", I receive the following error: "Error at line 1: ORA-00942: table or view does not exist." I also receive that error when trying to execute "select * from emp;". I have tried connecting as a DBA and I receive the same error, and likewise when logging in with my regular user id and password. I would like to learn SQL*Plus, but right now I just want to learn how to query tables. I understand the queries above, yet I receive the error. Can someone please help? Thank you.

MySQL database datafiles in windows SystemData directory

Posted: 05 Jul 2013 02:45 PM PDT

When I create a database and some tables in MySQL Server, it creates data files in the Windows ProgramData directory, in C:\ProgramData\MySQL\MySQL Server 5.5\data\mydb\.

Each table has three files with these suffixes:

TableName.frm
TableName.MYD
TableName.MYI

I don't know exactly what these files are.

Please explain what they are and what they are used for.

Thanks in advance.

List of variable defaults for InnoDB engine

Posted: 05 Jul 2013 02:29 PM PDT

I would like to list the defaults for the global variables that are innodb specific.

The problem is that the following commands don't list any InnoDB variables:

/usr/sbin/mysqld --no-defaults --verbose --help
/usr/sbin/mysqld --verbose --help

However I can see the runtime ones with:

mysql -e "show global variables"  
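A narrower variant of that command (my addition) at least restricts the runtime list to InnoDB:

```sql
-- Runtime values of the InnoDB-specific globals only
SHOW GLOBAL VARIABLES LIKE 'innodb%';
```

The --help output may omit these because builds that load InnoDB as a plugin only register its options once the plugin is loaded, which would explain the empty list.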

Find records in same table with different company id but some other same values

Posted: 05 Jul 2013 12:59 PM PDT

I have a subscriptions table that looks roughly like this:

table subscriptions
id (auto generated id)
company_id
city
state
subscription_date

I would like to find records that:

A) are of different company (company id) but have same city/state

B) are of different company (company id) but have same subscription date

For address, I was doing:

select distinct on (t1.company_id)
       t1.company_id, t2.company_id, t1.city, t1.state
from subscription t1, subscription t2
where t1.company_id <> t2.company_id
  and t1.city = t2.city
  and t1.state = t2.state
order by t1.company_id

Is this an efficient and accurate way to do it?
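A variant worth considering (mine, using the same table and column names): using < instead of <> reports each matching pair exactly once instead of twice, and removes the need for DISTINCT ON, which otherwise keeps an arbitrary partner per company:

```sql
-- Case A: same city/state, different companies; each pair listed once
SELECT t1.company_id, t2.company_id, t1.city, t1.state
FROM subscription t1
JOIN subscription t2
  ON  t1.city  = t2.city
  AND t1.state = t2.state
  AND t1.company_id < t2.company_id
ORDER BY t1.company_id;
```

Case B is the same query with subscription_date in place of city/state.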

Return table based on 1, 2, 3, 4, 5 or 0 selected search options

Posted: 05 Jul 2013 03:45 PM PDT

Scenario: Let's say I have several tables for a software ticketing system: Developer, Supervisor, TicketType, TicketStatus, and a Ticket table to join them on. Assume I have done my inner joins and now have a complete table listing all assigned ticket info.

Id | TicketName | TicketStatus | Developer | Supervisor | DateAssigned | DateDue
---|------------|--------------|-----------|------------|--------------|---------
 1 | Report     | Open         | Josh      | Bob        | 20130702     | 20130705
 2 | App        | Pending      | Brian     | Bob        | 20130701     | 20130802
 3 | App        | Closed       | Brian     | Hannibal   | 20130701     | 2013----

Obviously the SQL for this is pretty basic:

SELECT TicketId, TicketName, TicketStatus ...
FROM Ticket
INNER JOIN TicketType ON [Ticket].TicketID = [TicketType].TicketName
INNER JOIN [Status] ON ....
....

So that's the DEFAULT static portion of the report. This is what loads first; however, the client, customer, manager, etc. want the option of selecting zero, one, or more fields to filter the data on, for any field in the report. For example:

User may choose 'app' from the TicketName category, 'Josh' as Developer, and BETWEEN date1 AND date2, while TicketStatus Remains NULL or ' '. The table should display all 'app' tickets assigned to 'Josh' between selected dates, while not breaking due to TicketStatus not being selected. Any ticket status should appear.

User could choose not to select any values, in which case the report should default to the example above without breaking. This means I cannot use NULL as a placeholder, because it would cause the initial report not to load (unless I did something wrong?).

User can basically mix and match any variation of the fields and the table should sort on those selected fields.

Question: How can someone return a table based on all or no options being selected, without breaking the query?

Additional Info:

Using SQL Server 2008 r2, Business Intelligence Development Studio, T-SQL.

I have tried declaring variables and using LIKE statements, but this requires the user to select an option before the report will return anything, and the user may not know the field values from memory.

CREATE PROCEDURE WorkAssignmentReport
     @Category varchar(25)
    ,@---Name varchar(25)
    ,@Stats varchar(25)
    ,@---Name varchar(25)
    .........
    ......
FROM [Ticket]
INNER JOIN dbo.[TicketType] ON [Ticket].TicketTypeID = [TicketType].TicketTypeID
INNER JOIN dbo.[Status] ON [Ticket].Stat ....
.....
WHERE [TicketType].Name LIKE @Category
  AND [Ticket].Name LIKE @ReportName
  AND [Status].StatusName LIKE @Status
  AND [Phase].PhaseName LIKE @PhaseName
  AND (BA.FullName = @Name1 OR AD.FullName = @Name1)
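A common pattern for optional filters (a sketch; the parameter and column names are illustrative, not taken from the real report) is to let NULL mean "no filter" inside the WHERE clause itself:

```sql
-- A NULL parameter disables that condition entirely
WHERE (@Category  IS NULL OR [TicketType].Name      = @Category)
  AND (@Status    IS NULL OR [Status].StatusName    = @Status)
  AND (@Developer IS NULL OR BA.FullName            = @Developer)
  AND (@DateFrom  IS NULL OR [Ticket].DateAssigned >= @DateFrom)
  AND (@DateTo    IS NULL OR [Ticket].DateAssigned <= @DateTo)
OPTION (RECOMPILE);  -- stops one cached plan serving very different filters
```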

PostGIS Query that selects from over 124GB of Indexed data, how to Optimize?

Posted: 05 Jul 2013 03:39 PM PDT

I have a query:

with contains as
( select reports.mmsi,
         ST_Contains(q346_pareas.geom, reports.position_geom) as ST
  from q346_pareas, reports
  where report_timestamp between '2013-05-02 07:00:00' and '2013-05-02 08:00:00'
    and (reports.position_geom && q346_pareas.geom)
)
select position_geom
from reports
where report_timestamp between '2013-05-02 07:00:00' and '2013-05-02 08:00:00'
  and reports.mmsi = ANY
      ( select mmsi
        from contains
        where contains.st = true
          and contains.st is not null );

Everything that I am selecting from is indexed. The reports table is 164GB with an index size of 124GB

The q346_pareas table is a polygon made from the points:

 -72.2417667  66.268983,
-112.2589167  66.410833,
-125.5855167  77.681283,
 -63.14165    81.332367,
 -50.53       74.5,
 -42.1917     55.844,
 -48.583333   45.80575,
 -67.2276333  40.5562,
 -73.18415    46.8878167,
 -72.2417667  66.268983

The Explain analyze Information is:

                QUERY PLAN
------------------------------------------------------------------------------------------
 Nested Loop  (cost=192146.15..2907777.88 rows=577794 width=32) (actual time=2362936.415..3156249.235 rows=80034 loops=1)
   CTE contains
     ->  Nested Loop  (cost=418.62..153615.11 rows=527503 width=236) (actual time=1822818.674..2358354.485 rows=89045 loops=1)
         ->  Seq Scan on q346_pareas  (cost=0.00..1.01 rows=1 width=200) (actual time=0.012..0.015 rows=1 loops=1)
         ->  Bitmap Heap Scan on reports  (cost=418.62..20419.53 rows=6 width=36) (actual time=1822791.372..2358148.699 rows=89045 loops=1)
               Recheck Cond: (position_geom && q346_pareas.geom)
               Rows Removed by Index Recheck: 359425857
               Filter: ((report_timestamp >= '2013-05-02 07:00:00'::timestamp without time zone) AND (report_timestamp <= '2013-05-02 08:00:00'::timestamp without time zone))
               Rows Removed by Filter: 156874877
               ->  Bitmap Index Scan on position_idx  (cost=0.00..418.62 rows=5036 width=0) (actual time=1352314.373..1352314.373 rows=156963922 loops=1)
                     Index Cond: (position_geom && q346_pareas.geom)
   ->  HashAggregate  (cost=11206.14..11207.14 rows=100 width=4) (actual time=2358424.949..2358426.072 rows=466 loops=1)
         ->  CTE Scan on contains  (cost=0.00..10550.06 rows=262433 width=4) (actual time=1822818.688..2358404.878 rows=72833 loops=1)
               Filter: (st AND (st IS NOT NULL))
               Rows Removed by Filter: 16212
   ->  Bitmap Heap Scan on reports  (cost=27324.90..27429.29 rows=27 width=36) (actual time=1662.920..1711.195 rows=172 loops=466)
         Recheck Cond: ((mmsi = contains.mmsi) AND (report_timestamp >= '2013-05-02 07:00:00'::timestamp without time zone) AND (report_timestamp <= '2013-05-02 08:00:00'::timestamp without time zone))
         ->  BitmapAnd  (cost=27324.90..27324.90 rows=27 width=0) (actual time=1660.391..1660.391 rows=0 loops=466)
               ->  Bitmap Index Scan on report_mmsi_idx  (cost=0.00..578.71 rows=23532 width=0) (actual time=1578.394..1578.394 rows=180098 loops=466)
                     Index Cond: (mmsi = contains.mmsi)
               ->  Bitmap Index Scan on report_timestamp_idx  (cost=0.00..26456.54 rows=1157594 width=0) (actual time=49.747..49.747 rows=553929 loops=466)
                     Index Cond: ((report_timestamp >= '2013-05-02 07:00:00'::timestamp without time zone) AND (report_timestamp <= '2013-05-02 08:00:00'::timestamp without time zone))
 Total runtime: 3156257.974 ms
(23 rows)

The last time I ran the query it took ~50 minutes. I understand that I will eventually reach the point where going through a table with hundreds of millions of records will take a long time, and that the network will become a factor, but I ideally need to increase the time frame of the query from 1 hour to two months.
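One rewrite that might help (an untested sketch, not a verified fix): skip the CTE of booleans and apply ST_Contains directly, so the planner can combine the spatial and timestamp conditions in a single pass over reports:

```sql
-- ST_Contains already includes the && bounding-box test, so the explicit
-- overlap check and the separate true/not-null filtering both disappear
SELECT r.position_geom
FROM reports r
WHERE r.report_timestamp BETWEEN '2013-05-02 07:00:00' AND '2013-05-02 08:00:00'
  AND r.mmsi IN (
        SELECT r2.mmsi
        FROM reports r2
        JOIN q346_pareas p ON ST_Contains(p.geom, r2.position_geom)
        WHERE r2.report_timestamp BETWEEN '2013-05-02 07:00:00'
                                      AND '2013-05-02 08:00:00'
      );
```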

Any help is greatly appreciated.

pt-table-checksum - Diffs cannot be detected because no slaves were found (1)

Posted: 05 Jul 2013 10:13 AM PDT

I'm new to Percona tools. I am trying to use pt-table-checksum as follows:

pt-table-checksum h=localhost -u root -p xyz --replicate=percona.checksums --create-replicate-table --databases=mysql

But I get this error:

Diffs cannot be detected because no slaves were found
Please read the --recursion-method documentation for information.

Here is my my.cnf:

#password       = your_password
port            = 3306
socket          = /tmp/mysql.sock
default-character-set=utf8

[mysqld]
datadir=/spacedb
socket=/tmp/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
init-connect='SET NAMES utf8'
character-set-server = utf8
max_connections = 300

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[mysqladmin]
socket=/tmp/mysql.sock

[mysql]
socket=/tmp/mysql.sock
default-character-set=utf8

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user       = multi_admin
password   = user

[mysqld1]
socket     = /tmp/mysql.sock
port       = 3306
pid-file   = /var/run/mysqld/mysqld.pid
datadir    = /spacedb
language   = /usr/share/mysql/english
server-id  = 1
log-bin    = mysql-bin
binlog_format = mixed

# Expire_logs + MaxBinlogSize
expire_logs_days = 7
max_binlog_size = 100M

sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout=1

init-connect='SET NAMES utf8'
character-set-server = utf8
max_connections = 300

[mysqld2]
socket     = /tmp/mysql.sock2
port       = 3307
pid-file   = /var/run/mysqld/mysqld2.pid
datadir    = /repdb
language   = /usr/share/mysql/english
server-id  = 2
relay-log  = mysqld2-relay-bin

innodb_lock_wait_timeout=1
init-connect='SET NAMES utf8'
character-set-server = utf8

Do I have to use DSN? And how does it work?
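For reference, --recursion-method=dsn reads the list of slaves from a table you create yourself; roughly (the names follow the convention in the Percona documentation, and the port is a guess based on the [mysqld2] section above):

```sql
-- Table consulted via --recursion-method=dsn=D=percona,t=dsns
CREATE TABLE percona.dsns (
  id        INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  parent_id INT DEFAULT NULL,
  dsn       VARCHAR(255) NOT NULL
);

-- One row per slave; e.g. the second local instance on port 3307
INSERT INTO percona.dsns (dsn) VALUES ('h=localhost,P=3307');
```

Auto-discovery typically fails for slaves running on non-standard ports on the same host, which matches the mysqld1/mysqld2 setup above.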

Cron job for Mysql Incremental backup

Posted: 05 Jul 2013 12:02 PM PDT

I want to run a cron job for MySQL incremental backup. Can anyone please help me? I tried:

$command = "mysqldump -h $dbhost -u $dbuser -p$dbpass ".
           $dbname." > $backup_file 2>&1";
system($command, $returned);

It worked, but "mysqlbackup" is not working. The MySQL documentation says we need to use the "mysqlbackup" command rather than "mysqldump" for incremental backups.

Removing superfluous tables with a synchronization?

Posted: 05 Jul 2013 10:11 AM PDT

I'm using SQL Server 2008 r2's replication functionality to update my subscriber database through a transactional pull subscription. When I mark it for reinitialization, it does fix the schema and data of any modified local tables that exist in the publication snapshot, but it doesn't remove any new tables (and presumably SPs, triggers, etc.) that have been added. Is there any way to get the synchronization to remove superfluous objects like tables that don't exist in the publication shapshot, in addition to updating and adding existing objects?

InnoDB query duration inconsistent

Posted: 05 Jul 2013 12:23 PM PDT

I am running a series of UPDATE commands on a nearly empty InnoDB table, and around 1 out of every 20-30 queries will inexplicably take 10 times as long as the others. For example, the first 20 updates will take 20ms, and the 21st update will suddenly take 200ms. I've set up an incredibly basic test:

CREATE TABLE `test` (
  `col1` int(11) DEFAULT NULL,
  `col2` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8

I insert a single row into the table, and then I have a C# console program that does a series of updates:

for (int i = 0; i < 15; i++) {
    long start = CurrentTimeMillis();

    MySqlCommand cmd = new MySqlCommand();
    cmd.Connection = conn;
    cmd.CommandText = "UPDATE test SET col1=" + i + " WHERE col2=1";
    cmd.Prepare();
    cmd.ExecuteNonQuery();

    Console.WriteLine("Duration: " + (CurrentTimeMillis() - start));
}

This is the output I see from the program:

MySQL version : 5.5.17
Duration: 36
Duration: 30
Duration: 16
Duration: 26
Duration: 152
Duration: 47
Duration: 71
Duration: 77
Duration: 46
Duration: 28
Duration: 21
Duration: 25
Duration: 17

If I run "SHOW PROFILE FOR QUERY" on the 152 ms query, both the "Updating" and "query end" values are abnormally high.

If I switch the table to MyISAM, the query duration is perfect, but I don't want table-locking. Does anyone have a guess as to what is making InnoDB act this way?
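One setting worth ruling out (a guess, not a diagnosis): if innodb_flush_log_at_trx_commit is 1, every commit waits for an fsync of the log, and intermittent disk flush stalls surface as exactly this kind of occasional slow "query end". As a temporary experiment only:

```sql
-- What is it set to now?
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- Experiment: flush the log roughly once per second instead of per
-- commit. If the spikes vanish, fsync latency was the cause.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```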

TOAST Table Growth Out of Control - FULLVAC Does Nothing

Posted: 05 Jul 2013 10:54 AM PDT

Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features and be in line with 30ish other PGSQL servers. This was done by a separate IT group who administrates the hardware, so we don't have much choice on any other upgrades (won't see 9+ for a while). The server exists in a very closed environment (isolated network, limited root privileges) and runs on RHEL5.5 (i686). After the upgrade, the database has constantly been growing to the tune of 5-6 GB a day. Normally, the database, as a whole, is ~20GB; currently, it is ~89GB. We have a couple other servers which run equivalent databases and actually synchronize the records to each other via a 3rd party application (one I do not have access to the inner workings). The other databases are ~20GB as they should be.

Running the following SQL, it's fairly obvious there's an issue with a particular table, and, more specifically, its TOAST table.

SELECT nspname || '.' || relname AS "relation",
       pg_size_pretty(pg_relation_size(C.oid)) AS "size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;

Which produces:

            relation          |  size
------------------------------+---------
 pg_toast.pg_toast_16874      | 89 GB
 fews00.warmstates            | 1095 MB
 ...
(20 rows)

This TOAST table is for a table called "timeseries" which saves large records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) of all the records in timeseries yields ~16GB for that column. There should be no reason this table's TOAST table should be as large as it is.

I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum runs to completion with no errors.

INFO: vacuuming "pg_toast.pg_toast_16874"
INFO: "pg_toast_16874": found 22483 removable, 10475318 nonremovable row versions in 10448587 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 37 to 2036 bytes long.
There were 20121422 unused item pointers.
Total free space (including removable row versions) is 0 bytes. 4944885 pages are or will become empty, including 0 at the end of the table. 4944885 pages containing 0 free bytes are potential move destinations.
CPU 75.31s/29.59u sec elapsed 877.79 sec.
INFO: index "pg_toast_16874_index" now contains 10475318 row versions in 179931 pages
DETAIL: 23884 index row versions were removed.
101623 index pages have been deleted, 101623 are currently reusable.
CPU 1.35s/2.46u sec elapsed 21.07 sec.

REINDEXed the table which freed some space (~1GB). I can't CLUSTER the table as there isn't enough space on disk for the process, and I'm waiting to rebuild the table entirely as I'd like to find out why it is so much bigger than equivalent databases we have.

Ran a query from the PostgreSQL wiki here - "Show Database Bloat", and this is what I get:

 current_database | schemaname |         tablename          | tbloat | wastedbytes |           iname            | ibloat | wastedibytes
------------------+------------+----------------------------+--------+-------------+----------------------------+--------+--------------
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_synchlevel  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_localavail  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expirytime  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expiry_null |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | uniq_localintid            |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | pk_timeseries              |    0.1 |            0
 ptrdb04          | fews00     | idx_timeseries_expiry_null |    0.6 |           0 | ?                          |    0.0 |            0

It looks like the database doesn't consider this space as "empty," at all, but I just don't see where all the disk space is coming from!

I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers. My question is this: Is there a way I can verify the physical disk size of a row? I'd like to compare the size of one row on this database to another "healthy" database.
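On measuring per-row disk usage: pg_column_size() (available in 8.4) reports the stored, i.e. post-TOAST/compression, size of a value, so it can be compared directly between this server and a healthy one (blob_column is a placeholder for the real column name):

```sql
-- Stored size of whole rows and of the blob column alone
SELECT pg_column_size(t.*)           AS row_bytes,
       pg_column_size(t.blob_column) AS blob_bytes  -- placeholder name
FROM fews00.timeseries t
LIMIT 10;
```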

Thanks for any help you can provide!

UPDATE 1

I ended up rebuilding the table from a dumped schema due to its size (couldn't leave it alone for another day). After synchronizing the data, via the software synch process, the TOAST table was ~35GB; however, I could only account for ~9GB of it from that blob column which should be the longest in terms of values. Not sure where the other 26GB is coming from. CLUSTERed, VACUUM FULLed, and REINDEXed to no avail. The postgresql.conf files between the local and remote data servers are exactly the same. Is there any reason this database might be trying to store each record with a larger space on disk?

UPDATE 2 - Fixed

I finally decided to just completely rebuild the database from the ground up- even going as far as to reinstall the PostgreSQL84 packages on the system. The database path was reinitialized and tablespaces wiped clean. The 3rd party software synchronization process repopulated the tables, and the final size came out to be ~12GB! Unfortunately, this, in no way, helps to solve what the exact source of the issue was here. I'm going to watch it for a day or two and see if there are any major differences with how the revitalized database is handling the TOAST table and post those results here.

"SQL1042C An unexpected system error" when creating a database

Posted: 05 Jul 2013 11:37 AM PDT

I installed DB2 Express-C 10.1 on OS X 10.8.4. I installed it in user mode, and it seems to have created an instance with my username (avernet). I checked the instance is indeed there by running db2ilist. The database manager is running, as when I run db2start, I get the message SQL1026N The database manager is already active.

Now, I am trying to create a database by running db2 from the command line, and typing the command create database orbeon, but it returns:

SQL1042C  An unexpected system error occurred.  SQLSTATE=58004

What am I missing here? How can I create a simple database from the command line (I prefer not to bother with Data Studio as it isn't available for OS X)?

In db2dump/db2diag.log, I see the following error (full file, in case you're interested):

2013-06-29-15.31.38.948217-420 E159047E344          LEVEL: Error (OS)
PID     : 1766                 TID : 140735175762304  PROC : db2star2
INSTANCE: avernet              NODE : 000
HOSTNAME: huashan.local
FUNCTION: DB2 UDB, SQO Memory Management, sqloMemCreateSingleSegment, probe:100
CALLED  : OS, -, shmget                           OSERR: 17

Note that for the install to go through, I increased the OS X shared memory, per this recommendation.

mysql replication goes out of sync for some tables

Posted: 05 Jul 2013 11:48 AM PDT

We are running mysql 5.1.61 on redhat systems and have the following setup

One master and four slaves replicate from the master. We recently added a new slave for replication, and over a few days we noticed that on the newly added slave some tables (not all) lose some records. This happens only on this slave and it is not regular; over a period of 3 weeks the issue seems to have occurred on 5-7 days.

We use statement-based replication. I am not sure why this happens on only one slave. There seem to be no errors in the MySQL error logs. The only difference between the old slaves and the new slave is that the new slave has slightly less RAM than the others, but the new slave is not being used for anything right now.

Is there a way to troubleshoot this issue to see why it happens on only one slave? Could it be network related, or anything else? Any pointers on where to start looking?

Here is the memory info.

Old slave:

            total       used       free
Mem:      24731204   24641572      89632

New slave:

Mem:      16431192   10112880    6318312

Force View's query plan to update?

Posted: 05 Jul 2013 02:48 PM PDT

I have a view whose query plan appears to be cached. Is there a way to force the view's plan to be recalculated on each access?
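In SQL Server a view has no plan of its own; plans belong to the statements that reference it. Two hedged options (dbo.MyView is a placeholder name):

```sql
-- Mark every cached statement referencing the view for recompilation
EXEC sp_recompile N'dbo.MyView';

-- Or request a fresh plan for a single query
SELECT * FROM dbo.MyView OPTION (RECOMPILE);
```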

MySQL+Web-App Performance Issues using different servers

Posted: 05 Jul 2013 12:48 PM PDT

We are having a performance issue with our MySQL servers that does not make any sense. I have read countless articles from many people (mostly Percona) and have made my.cnf tweaks. We have even managed to squeeze out another 30% more TPS thanks to those Percona articles. However, our problem is with our in-house web app (a Tomcat/Java/Apache model). It performs poorly when connected to certain servers - the better-hardware servers.

Here is the symptom:

If we point our test application server (Ubuntu, Apache, Tomcat, Java) to server MYSQL02, the applications performance is acceptable. However, if we point the application to MYSQL01 or MYSQL03 (and these two boxes are idle!) the application performance is poor. There are high latencies. Example:

Retrieving an account (via our web-app) from MYSQL02 takes 2 to 3 seconds.
On MYSQL01 or MYSQL03 it takes 12 seconds or longer.

We cannot figure out why! The MySQL servers and MONyog do NOT report any problems! If we execute the statements (hundreds of them) manually, they return instant results, and their EXPLAIN output shows they are all using indexes. We do NOT get any slow query, deadlock, or contention notifications.

Here is some basic information about our MySQL systems. They are all DEDICATED MySQL servers:

PROD (current production, not in replication farm, standalone)

Best/newest hardware configuration (8 core / 20GB / 15K RPM HW RAID 10)
MySQL: 5.0 (RHEL)
We are trying to migrate from this MySQL 5.0 to our MySQL 5.5 farm (see below).
This server will then be reloaded with MySQL 5.5 and return as the primary
production server.

MYSQL01

Second best hardware configuration (8 core / 12GB RAM / 15K RPM HW RAID 10)
Supermicro X8FAH; 2x XEON X5550 @ 2.66GHz
2x LSI MegaRAID SAS 9260-8i
MySQL: 5.5.31 (Ubuntu)
OS: Ubuntu 12.04 LTS x64

MYSQL02

Worst/oldest hardware configuration (4 core / 12GB RAM / 7.2K RPM SW {mdadm} RAID 5)
Supermicro X2DBE; 2x XEON 5150 @ 2.66GHz
MySQL: 5.5.31 (Ubuntu)
OS: Ubuntu 12.04 LTS x64

This is our EDI workhorse. It spends most of its day importing data from
our old mainframe system. It replicates the data to 01 and 03.

MYSQL03

Third best hardware configuration (8 core / 12GB RAM / 7.2K RPM HW RAID 10)
Supermicro X7DGB8-X; XEON E5410 @ 2.33GHz
MySQL: 5.5.31 (Ubuntu)
OS: Ubuntu 12.04 LTS x64

We used sysbench to test and tweak all the above systems and here are the test results with notes.

NOTE: TPS = Transactions Per Second

Results before applying any new tweaks:

PROD:    1,179.72 TPS <- Expected
MYSQL01:   442.92 TPS <- Why?
MYSQL02:   543.22 TPS <- Expected
MYSQL03:   904.22 TPS <- Surprising

Results after my.cnf tweaks:

MYSQL01: 655.00 TPS <- 32% improvement. Not bad, but still very poor compared to its siblings.
MYSQL02: 754.81 TPS <- 28% improvement. This was unexpected.
MYSQL03: 969.59 TPS <- 07% improvement.

We are unsure why MYSQL01's performance is so poor. We can only surmise that there is an OS, RAID card, or BIOS setting that may be improperly set. I am leaning towards the RAID card/configuration. The only way to know for sure is to shut down this server and scrutinize the configuration. A reload may be necessary. However, since our ultimate plan is to make the current PROD hardware the primary production MySQL server, we may leave MYSQL01 alone for now and re-purpose the hardware after migrating to the 5.5 farm. Still, we can't migrate until we figure out why our application behaves so poorly on certain hardware.

Anyone have any suggestions?

SQL Server database schema (and likely, some data changes) - how to auto-distribute over many database instances

Posted: 05 Jul 2013 10:48 AM PDT

Our development involves a SQL Server database (which might also be Oracle or Postgres later), and we sometimes make database schema changes or other interventions in the database.

What solutions exist to create a "patch" or "script" that distributes these changes to other installations of the same database (we do not have direct access to them)? It needs to alter the database schema and execute SQL and/or other complex, pre-programmed data alterations as defined by the person who designs the change. On the other instances, a system admin should be able to just run a program or press a button, and the changes would be applied automatically.

In addition, it is a plus if such a solution can take a database snapshot and derive a "difference" of the contents of a particular table that could then be distributed.

The solution can be commercial.

Thanks in advance!

MySQL slap with custom query

Posted: 05 Jul 2013 06:48 PM PDT

I want to run a stress test on our MySQL DB. I have the list of queries I need to execute. I have tried using Apache JMeter for this, but it is very time-consuming. Is it possible to run mysqlslap with a custom .sql file containing INSERT, UPDATE, and SELECT queries against a specified MySQL database?
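For what it's worth, mysqlslap does accept a file of custom statements via --query. A command sketch, where the schema name, credentials, and file path are placeholders; test against a throwaway database first, since mysqlslap manages the schema it runs in:

```shell
# Sketch only: mydb, appuser, and the .sql path are placeholders.
# --query points at a file of custom statements; --delimiter tells
# mysqlslap how the statements in that file are separated.
mysqlslap --user=appuser --password --host=localhost \
  --create-schema=mydb \
  --query=/path/to/custom_queries.sql \
  --delimiter=";" \
  --concurrency=50 --iterations=10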

Overview of how MongoDB uses its various threads

Posted: 05 Jul 2013 01:48 PM PDT

On one instance I have MongoDB using ~85 threads. Since I don't have time to investigate directly, I am curious:

  1. What are the threads used for? Do they fall into a few high-level usage categories?
  2. How can I control/influence the number of threads used? E.g. max connection params, etc.

MySQL information_schema doesn't update

Posted: 05 Jul 2013 08:48 PM PDT

I have a database, say abc, on a MySQL server. It has only one table, named test. test uses the InnoDB engine and I've set innodb_file_per_table to true.

After I run the query delete from abc.test, I want to calculate the database size of abc. Here is the query I use:

SELECT
    table_schema "name",
    sum(IF(engine = "MyISAM",
           data_length + index_length - data_free,
           data_length + index_length)) "size"
FROM information_schema.TABLES
WHERE table_schema LIKE "abc";

The strange thing is that the database size doesn't decrease at all, even though the data in test is gone.

I've done this kind of test many times, this strange behavior happens sometimes.

I'm using percona mysql server 5.5.29-rel29.4.

Can anybody tell me what is wrong?

Update:

Actually, I use another thread to check the database size periodically.
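One thing worth checking (a sketch, not a confirmed diagnosis): the figures in information_schema.TABLES come from InnoDB's cached statistics, and a DELETE does not shrink the .ibd file in any case; the freed pages are reused later and reported in data_free. Something along these lines may confirm that:

```sql
-- Refresh the statistics behind information_schema.TABLES
ANALYZE TABLE abc.test;

-- DELETE leaves the .ibd file the same size; rebuilding the table
-- actually reclaims the space on disk.
OPTIMIZE TABLE abc.test;
```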

How to successfully run a batch file in a SQL Agent job?

Posted: 05 Jul 2013 07:19 PM PDT

I have a SQL Agent job which generates a specific report as a PDF file, copies the PDF to a network directory, and then deletes the PDF file from the source directory.

The SQL job consists of 2 steps:

1. Generate the report
2. Copy the report to the network location

For step 2 I made a bat-file which handles the copying and removal of the pdf file.

The bat-file is as follows:

set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z

However, when I run the job, it hangs on the second step. The status just stays on "Executing".

This is the command I specified for the step (the location of the bat file to execute):

cmd.exe /c "C:\Reports\rapport_uur_copy.bat"  

My job-settings are as follows:

Step 1

Type: Operating system (CmdExec)

On Success: Go to the next step

On Failure: Quit the job reporting failure

Step 2

Type: Operating system (CmdExec)

On Success: Quit the job reporting success

On Failure: Quit the job reporting failure

Some facts:

  • I have read/write permissions on the network directory
  • I run the steps as the Administrator-account (the logged in user, default)
  • Step 1 succeeds
  • I run Windows Server 2008 R2 Standard
  • I have SQL Server 2008 R2
  • When I run the script from the cmd manually, it works (under Administrator account).
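Two robocopy behaviors are worth ruling out here (a sketch of the bat file with both addressed; the paths are the ones from the question, the /R, /W values are illustrative): robocopy's default is roughly a million retries with a 30-second wait, so a permission problem under the Agent service account can look exactly like a hang, and robocopy also returns non-zero exit codes (1-7) on success, which SQL Agent's CmdExec treats as failure.

```bat
set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

rem Cap retries so a failing copy errors out instead of appearing to hang.
C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z /R:2 /W:5

rem Robocopy exit codes below 8 indicate success; translate for SQL Agent,
rem which treats any non-zero exit code as a failed step.
if %ERRORLEVEL% GEQ 8 exit /b 1
exit /b 0
```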

MySQL partitioned tables?

Posted: 05 Jul 2013 03:48 PM PDT

I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent of every other client_id. In other words, web queries will always contain a where clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables.

After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically but in the short term the number of client_ids is relatively fixed.

I was thinking something along these lines:

CREATE TABLE foo (
    id INT NOT NULL PRIMARY KEY,
    ... more column defs here ...
    client_id INT
)
PARTITION BY KEY(client_id)
PARTITIONS 35;

My question: is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (e.g. hash or range)?

Limit memory used for mongoDb

Posted: 05 Jul 2013 09:48 AM PDT

Is there any way to limit the RAM used by mongodb on Debian? I've been looking for a solution for about 8 hours, with no results.

Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries)

Posted: 05 Jul 2013 08:48 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer over 4 servers, each with 24GB RAM and a 350GB SSD. Our DB is about 150GB and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to work. When I try to submit more than 150 concurrent queries, although the pgpool log file doesn't show any errors, I get this error from the python script:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?

innodb_file_format Barracuda

Posted: 05 Jul 2013 11:26 AM PDT

I have a couple of questions for those more familiar. Most of my instances have been running Antelope despite having support for Barracuda.

I was looking to play around with some compressed InnoDB tables. My understanding is this is only available under the Barracuda format.

  1. I see innodb_file_format is dynamic, so I can switch over without a bounce. Are there any implications of doing this I should be aware of? All I can tell is that new tables, or tables subsequently altered, will be created with that format. Is this all correct?
  2. I was hoping not to have to go through and convert all my tables. Is it kosher to have Antelope and Barracuda tables coexisting in the same tablespace? Even if it works, are there any gotchas to look out for?

From what I've read and gathered from my tests the answers are: Yes. Yes. I'm not sure.

Update

I've been running with some Dynamic and some Compressed tables in various instances since this post, without issue. Furthermore, I neglected to read http://dev.mysql.com/doc/refman/5.5/en/innodb-file-format-identifying.html at the time.

After you enable a given innodb_file_format, this change applies only to newly created tables rather than existing ones. If you do create a new table, the tablespace containing the table is tagged with the "earliest" or "simplest" file format that is required for the table's features. For example, if you enable file format Barracuda, and create a new table that is not compressed and does not use ROW_FORMAT=DYNAMIC, the new tablespace that contains the table is tagged as using file format Antelope.

So tables will be created as Antelope even if you allow Barracuda. The mixing is unavoidable unless you specify ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED for every table.

There is no indication that you should do a complete dump and reload when introducing your first Barracuda table (as is recommended when upgrading major versions of MySQL).
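To make the behavior quoted above concrete, a minimal sketch of a table that actually ends up tagged Barracuda (table and column names are made up; assumes innodb_file_per_table is on):

```sql
SET GLOBAL innodb_file_format = Barracuda;

-- Without an explicit ROW_FORMAT of DYNAMIC or COMPRESSED, the new
-- tablespace would still be tagged Antelope, per the manual excerpt above.
CREATE TABLE t_compressed (
    id INT NOT NULL PRIMARY KEY,
    payload TEXT
) ENGINE=InnoDB
  ROW_FORMAT=COMPRESSED
  KEY_BLOCK_SIZE=8;
```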

[SQL Server] How Stored Procedure determine to return a value?

[SQL Server] How Stored Procedure determine to return a value?


How Stored Procedure determine to return a value?

Posted: 05 Jul 2013 02:08 AM PDT

This is my stored procedure:

DECLARE @MyString NVARCHAR(3);
SELECT @MyString = 'foo'
SELECT CASE WHEN 1 = 1 THEN 'One is equal to one' END

If I execute this stored procedure I get this output:

'One is equal to one'

Now I change my stored procedure to this:

DECLARE @MyString NVARCHAR(3);
SELECT CASE WHEN 1 = 1 THEN 'One is equal to one' END
SELECT @MyString = 'foo'

If I execute my stored procedure once again, I still get this output:

'One is equal to one'

Why? I expected 'foo' as the output this time, because the last SELECT statement is 'foo'.

Add Variable to integer

Posted: 05 Jul 2013 05:02 AM PDT

Hi, I declared the primary key as an identity int. Now, every time a value (code) is selected from the grid, I want a corresponding character to be concatenated with the primary key (integer) and stored in another column. Example: if code = Student, then prepend "S" to "1" and store "S1" in another column; else if code = Teacher, prepend "T" to "2" and store "T2". How should I implement this logic?
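One possible shape for this is a computed column; a sketch with made-up table/column names (Person with an identity ID and a Code column), and the CASE would need a branch per code value:

```sql
-- Sketch: DisplayCode is derived on read; add PERSISTED to store it.
ALTER TABLE dbo.Person
ADD DisplayCode AS (
    CASE Code
        WHEN 'Student' THEN 'S'
        WHEN 'Teacher' THEN 'T'
    END + CAST(ID AS varchar(12))
);
```

A computed column avoids keeping a separately maintained copy in sync with the identity value.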

What is # in first letter of table names?

Posted: 04 Jul 2013 11:34 PM PDT

I've noticed folks in this forum use # as the first letter of their table names. Does it have any special meaning? Does SQL Server treat them differently? Or is it just a letter like a, b, c, etc.?

default trace configurations

Posted: 04 Jul 2013 11:10 PM PDT

Hi, just curious: there are 5 trace files of 20 MB each for the default trace. Can the number of files preserved be increased, and is it possible to increase the size beyond 20 MB? Please let me know. Thanks

[Articles] A Better Conference

[Articles] A Better Conference


A Better Conference

Posted: 04 Jul 2013 11:00 PM PDT

Is there a better way to put together a conference or event? Today Steve Jones speculates on how events are run and how we might change things.


[MS SQL Server] Size of auto-created statistics objects in a database

[MS SQL Server] Size of auto-created statistics objects in a database


Size of auto-created statistics objects in a database

Posted: 04 Jul 2013 09:33 AM PDT

Is there a way to calculate the size (in MB) of auto-created statistics objects in a database? I would like to get an idea of the space savings I would get from dropping duplicate statistics, i.e. single-column auto-created stats named "_WA_Sys_..." defined on the same column as the leading key of a non-clustered index. What is the physical footprint of these objects in a DB?

Here is my first attempt, but this does not give me the size of these stats objects:

SELECT s.name, ps.used_page_count * 8192 / (1024 * 1024) AS Size_MB
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.stats s ON ps.[object_id] = s.[object_id]
WHERE s.auto_created = 1
ORDER BY ps.used_page_count DESC;

How to Check Partition is Working or not

Posted: 04 Jul 2013 07:40 PM PDT

Hi All,

In a database (partitionDB) I created a partition, then I created a table in that DB. In another database (Partitiondb_New) I have the same table, but without partitioning. In both databases there is a table 'Orders'. I used the following to partition the database (partitionDB):

CREATE PARTITION FUNCTION PFORDERDATERANGE (DATETIME)
AS RANGE RIGHT FOR VALUES ('2011-05-04','2011-08-04','2011/11/04','2013/01/04','2013/04/04','2013/07/04')

CREATE PARTITION SCHEME PSOrderDateChange AS PARTITION PFORDERDATERANGE
TO (FG1, FG2, FG3, FG4, FG5, FG6, [Primary])

CREATE TABLE dbo.orders (
    Orderid int identity(1,1) not null,
    [Orderdate] [datetime] not null,
    [Name] nvarchar(255),
    [Productid] int null,
    constraint [pk_orders] primary key clustered (Orderid ASC, Orderdate ASC)
        ON PSOrderDateChange (OrderDate)
) on PSOrderDateChange (OrderDate)

Lastly, I checked which filegroups contain how many records:

select $partition.PFORDERDATERANGE(o.orderdate) as [Partition Number],
    min(o.orderdate) as [Min Value],
    max(o.orderdate) as [Max Value],
    count(*) as [Records in Partition]
from Orders o
group by $partition.PFORDERDATERANGE(o.orderdate)
order by [Partition Number]

and I am getting results. To compare performance, I ran a query in both databases:

select * from dbo.Orders
where Orderdate between '2013-01-04 00:00:00.000' and '2013-04-03 00:00:00.000'

but in both scenarios I get the output in 8 seconds. So how should I use partitioning on my database or table? I am sure someone will point me in the right direction. Thanks in advance

uninstall sql instance in cluster

Posted: 04 Jul 2013 11:25 PM PDT

Received the error below while uninstalling a SQL instance in a 2-node failover cluster.

Error: The selected instance is clustered and cannot be removed as specified. To remove the selected instance, select "Remove Node" in the Installation Center or specify /Action=RemoveNode from the command line.

If I select 'Remove Node', will the node be removed from the failover cluster?

[SQL 2012] Script component - Integration Services

[SQL 2012] Script component - Integration Services


Script component - Integration Services

Posted: 04 Jul 2013 10:59 AM PDT

Hi, I need to generate dynamic files through a query in a Foreach Loop container. How can I override a global variable in a Script Component (not a Script Task) according to an input column? Thanks

[T-SQL] Require help to retrieve data from a table

[T-SQL] Require help to retrieve data from a table


Require help to retrieve data from a table

Posted: 05 Jul 2013 12:06 AM PDT

Hi All, I have two tables as described below:

CREATE TABLE [dbo].[Batch](
    [BatchID] [int] IDENTITY(1,1) NOT NULL,
    [BatchName] [nvarchar](50) NULL,
    [CourseID] [int] NULL,
    [DateFrom] [datetime] NULL,
    [DateTo] [datetime] NULL,
    CONSTRAINT [PK_Batch] PRIMARY KEY CLUSTERED ([BatchID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET IDENTITY_INSERT [dbo].[Batch] ON
INSERT [dbo].[Batch] ([BatchID], [BatchName], [CourseID], [DateFrom], [DateTo]) VALUES (1, N'B1', 1, CAST(0x0000A1F200000000 AS DateTime), CAST(0x0000A1FC00000000 AS DateTime))
INSERT [dbo].[Batch] ([BatchID], [BatchName], [CourseID], [DateFrom], [DateTo]) VALUES (2, N'B2', 2, CAST(0x0000A1F300000000 AS DateTime), CAST(0x0000A1FD00000000 AS DateTime))
INSERT [dbo].[Batch] ([BatchID], [BatchName], [CourseID], [DateFrom], [DateTo]) VALUES (3, N'B3', 3, CAST(0x0000A1F400000000 AS DateTime), CAST(0x0000A1FE00000000 AS DateTime))
INSERT [dbo].[Batch] ([BatchID], [BatchName], [CourseID], [DateFrom], [DateTo]) VALUES (4, N'B4', 4, CAST(0x0000A1F500000000 AS DateTime), CAST(0x0000A1FF00000000 AS DateTime))
INSERT [dbo].[Batch] ([BatchID], [BatchName], [CourseID], [DateFrom], [DateTo]) VALUES (5, N'B5', 5, CAST(0x0000A1F600000000 AS DateTime), CAST(0x0000A20000000000 AS DateTime))
SET IDENTITY_INSERT [dbo].[Batch] OFF

CREATE TABLE [dbo].[Course_Module](
    [CourseModuleID] [int] IDENTITY(1,1) NOT NULL,
    [CourseID] [int] NOT NULL,
    [ModuleID] [int] NOT NULL,
    CONSTRAINT [PK_Course_Module] PRIMARY KEY CLUSTERED ([CourseModuleID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET IDENTITY_INSERT [dbo].[Course_Module] ON
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (1, 1, 1)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (2, 1, 2)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (3, 1, 3)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (4, 1, 4)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (5, 2, 2)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (6, 2, 5)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (7, 2, 6)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (8, 2, 7)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (9, 3, 1)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (10, 3, 4)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (11, 3, 5)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (12, 4, 2)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (13, 4, 3)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (14, 5, 6)
INSERT [dbo].[Course_Module] ([CourseModuleID], [CourseID], [ModuleID]) VALUES (15, 5, 7)
SET IDENTITY_INSERT [dbo].[Course_Module] OFF

Here I want a T-SQL query which returns batches as follows: when I provide a CourseID, I should get all batches whose course contains all the ModuleIDs attached to the provided CourseID.

For example: if I provide CourseID = 4, it should return the batches for CourseIDs 1 and 4, as both contain ModuleIDs 2 and 3; it should not return 1, 2, 3, 4, since CourseID = 2 does not contain ModuleID = 3 and CourseID = 3 does not contain ModuleID = 2.

I have written this code:

Select distinct B.* from Batch B
inner join Course_Module CM on B.CourseID = CM.CourseID
inner join Course_Module CMM on CM.ModuleID = CMM.ModuleID
where CMM.CourseID = 4

The answer I am getting is:

BatchName CourseID
B1 1
B2 2
B4 4

The answer should be:

BatchName CourseID
B1 1
B4 4
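This "contains all modules of course X" requirement is the classic relational-division shape, which is usually solved by counting rather than a plain join. A sketch against the tables above (untested against edge cases such as a course with no modules):

```sql
DECLARE @CourseID int = 4;

SELECT B.*
FROM Batch B
WHERE B.CourseID IN (
    -- courses that cover every module of @CourseID
    SELECT CM.CourseID
    FROM Course_Module CM
    WHERE CM.ModuleID IN (SELECT ModuleID FROM Course_Module WHERE CourseID = @CourseID)
    GROUP BY CM.CourseID
    HAVING COUNT(DISTINCT CM.ModuleID) =
           (SELECT COUNT(DISTINCT ModuleID) FROM Course_Module WHERE CourseID = @CourseID)
);
```

With the sample data this yields courses 1 and 4 (batches B1 and B4), matching the expected output, because only those courses match both of course 4's modules.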

Can a strawberry query be done better?

Posted: 04 Jul 2013 07:49 AM PDT

Coming from a MySQL background, one query design I learned early on was the 'strawberry query', which got its odd name from the MySQL newsgroup. It is a very useful pattern for answering questions like "who's the best performing salesperson of the month?" The solution basically involves doing a "triangular join" and filtering for NULLs. Using the salesperson example:

SELECT s.SalesPerson, s.SalesMonth, s.SalesAmount, s.Customer
FROM Sales AS s
LEFT JOIN Sales AS m ON s.SalesMonth = m.SalesMonth AND s.SalesAmount < m.SalesAmount
WHERE m.SalesID IS NULL;

Note that the query is free to include other fields from the same row because there is no GROUP BY; the grouping is implicitly done via the self-join. We are guaranteed to get only one possible row each month for a given sale, which is also the greatest amount: since no other row is greater than the greatest amount of a given month, m.SalesID must be NULL.

This also works for getting the minimum; just reverse the inequality operator in the join criteria. Also, there is no TOP 1 ... ORDER BY, which can be problematic when you need to get multiple results (e.g. you want to see all 12 months at once).

Now, that worked well with MySQL. However, I'm aware that the T-SQL dialect has features that don't exist in MySQL, so there might be a better way of doing this in T-SQL. I don't quite trust myself to interpret execution plans, so I'd be very interested in hearing from others whether this can be outperformed by alternatives such as ROW_NUMBER() or other approaches. Thanks!
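For comparison, the usual T-SQL alternative is a window function; a sketch using the column names from the query above. One behavioral difference worth noting: the self-join returns all tied top rows, while ROW_NUMBER() picks one arbitrarily (RANK() would keep ties):

```sql
SELECT SalesPerson, SalesMonth, SalesAmount, Customer
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY SalesMonth
                              ORDER BY SalesAmount DESC) AS rn
    FROM Sales
) AS ranked
WHERE rn = 1;
```

This avoids the triangular join, whose cost grows with the number of rows per month, since each row is compared against every other row in its partition.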

Server level trigger

Posted: 04 Jul 2013 07:55 AM PDT

Hi, I wanted to create a server-level trigger which prevents changes to the recovery model. I found some resources on the internet, and here is the code. There are a couple of problems with it:

1. When I issue an ALTER DATABASE command, it says mail queued, but because of the ROLLBACK statement below it gets rolled back and I don't receive any email.
2. I can see the RAISERROR() message in Management Studio, but the ALTER DATABASE statement somehow auto-commits, which allows the recovery model to be changed.

Can anybody help me achieve this task? What changes/logic should I incorporate in the code below to make the mail functionality work and to prevent the ALTER DATABASE statement from executing? The other alternative is Policy-Based Management, but I wanted to implement this using a trigger. Thanks in advance.

USE master
GO
CREATE DATABASE testdb
GO
ALTER DATABASE testdb SET RECOVERY FULL;
GO
CREATE TRIGGER [usp_Restrict_RecoveryModel_Changes]
ON ALL SERVER
FOR ALTER_DATABASE, DROP_DATABASE
AS
BEGIN
    DECLARE @data xml
    DECLARE @trigger_name sysname, @LoginName sysname, @UserName sysname, @dbname sysname
    SET @data = EVENTDATA()
    DECLARE @Str nvarchar(max)
    SELECT @Str = cast(@data as nvarchar(max));
    /*
    <EVENT_INSTANCE>
      <EventType>ALTER_DATABASE</EventType>
      <PostTime>2007-01-12T20:05:27.527</PostTime>
      <SPID>65</SPID>
      <ServerName>NAME</ServerName>
      <LoginName>sa</LoginName>
      <DatabaseName>TestSecurity</DatabaseName>
      <TSQLCommand>
        <SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
        <CommandText>ALTER DATABASE [TestSecurity] SET RECOVERY SIMPLE WITH NO_WAIT</CommandText>
      </TSQLCommand>
    </EVENT_INSTANCE>
    */
    -- send email to self before raising the error
    EXEC msdb.dbo.sp_send_dbmail
        @recipients = 'test@gmail.com',
        @subject = '!!!****Attempt to alter database***!!!',
        @body = @Str,
        @body_format = 'HTML',
        @profile_name = 'Test profile'
    RAISERROR ('ALTER DATABASE DISABLED!', 10, 1)
    ROLLBACK
END
GO
-- trying to change the recovery model; we need to prevent this from happening!!!!
ALTER DATABASE testdb SET RECOVERY FULL;
GO

Thanks in advance.

[SQL Server 2008 issues] Find out Free Virtual Memory

[SQL Server 2008 issues] Find out Free Virtual Memory


Find out Free Virtual Memory

Posted: 04 Jul 2013 03:49 PM PDT

Hi, how do I find the free virtual memory size with a SQL query?
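One possible starting point on SQL Server 2008+ is the process memory DMV; a sketch (column availability may vary by version):

```sql
SELECT virtual_address_space_committed_kb,
       virtual_address_space_available_kb,
       physical_memory_in_use_kb
FROM sys.dm_os_process_memory;
```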

Autonumber field sometimes skips a 1000

Posted: 04 Jul 2013 01:06 AM PDT

I have an order table with an ID field as its primary key. This is a straightforward identity field which increments with each new record; well, that's how it's supposed to work anyway. Every now and then it skips a thousand numbers for no apparent reason. See the example below:

108373
108369
107360
107357

And:

107336
107335
106340
106338

These are actual committed orders, so it's normal that it sometimes skips a couple of numbers, but a thousand is a bit strange. Has anybody ever encountered this issue, and what can I do about it?

I want deadlock notification using mail address and record stored in table.

Posted: 04 Jul 2013 05:18 PM PDT

I want deadlock notifications sent to an email address, with the deadlock record stored in a table, for MS SQL.

how do i select previous date using ssis expression?

Posted: 04 Jul 2013 04:32 PM PDT

Hi, I have an SSIS package and the following expression, which gives me today's date and time for the file name:

@[User::FilePath] + "Bloomberg_" + REPLACE((DT_STR, 20, 1252)(DT_DBTIMESTAMP)@[System::StartTime], ":", "") + ".xls"

\\public\\Bloomberg_Upload\\Bloomberg_2013-07-05 005738.xls

I need the previous date instead, like the following, counting weekdays only:

\\public\\Bloomberg_Upload\\Bloomberg_2013-07-04 005738.xls

How can I do this? For Monday: if I execute my package on Monday, the date should be Friday's. Please guide me.
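One possible expression sketch for the date portion, assuming SSIS's DATEPART("dw") numbers days with Sunday = 1 so Monday = 2 (verify against your server's first-day-of-week setting before relying on it):

```
DATEADD("dd",
        DATEPART("dw", @[System::StartTime]) == 2 ? -3 : -1,
        @[System::StartTime])
```

On Monday this steps back three days to Friday; on other weekdays it steps back one day.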

tempdb file delete

Posted: 04 Jul 2013 05:31 PM PDT

ALTER DATABASE tempdb MODIFY FILE (name=tempdev, size=512MB);
GO
ALTER DATABASE tempdb ADD FILE (name=tempdev2, size=512MB, filename='N:\Tempdb\tempdev2.ndf');
GO
ALTER DATABASE tempdb ADD FILE (name=tempdev3, size=512MB, filename='N:\Tempdb\tempdev3.ndf');
GO
ALTER DATABASE tempdb ADD FILE (name=tempdev4, size=512MB, filename='N:\Tempdb\tempdev4.ndf');

I want to delete the file tempdev4; how do I do it?
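A sketch of the usual approach, using the file names above; note that tempdb is recreated at startup, so if the file still holds active pages the REMOVE FILE may fail until the instance is restarted:

```sql
USE tempdb;
-- Move any pages out of the file first
DBCC SHRINKFILE (tempdev4, EMPTYFILE);
-- Then drop it from the database
ALTER DATABASE tempdb REMOVE FILE tempdev4;
```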

I want to create a baseline for SQL Server

Posted: 04 Jul 2013 05:32 PM PDT

Hi, please tell me the different techniques and feasible ways to do this?

Viewing Management Data Warehouse Reports from a web browser

Posted: 04 Jul 2013 05:32 PM PDT

Hi all! I am constructing a monitoring solution for our enterprise which comprises email notification of an error or warning, plus a link in the email to a report that describes the problem in more detail. In addition, I would like certain users to be able to view the Management Data Warehouse (MDW) reports through their web browser. Although (once the MDW has been set up and data collection started) these reports are easy enough to view in SSMS, I would prefer (as would my developers and users!) that an HTTP(S) link could take them straight to where they need to go.

Therefore, my question is this: does anyone know how to use hyperlinks to access the MDW reports from a local web browser?

Many thanks in advance!

Regards,
Kev

sqlquery

Posted: 04 Jul 2013 05:16 PM PDT

Which programmer developed the highest number of packages? I have a table which contains pname, title, developin, scost, dcost, sold.

SSRS report page break should be fixed

Posted: 04 Jul 2013 05:16 PM PDT

Hi, I want my SSRS report to print two sections per page, divided by a page break, and that page break should stay fixed. Regards, DBA.

Stored procedure is not responding: I execute it but it does nothing

Posted: 04 Jul 2013 10:20 AM PDT

I am getting the error message below when I run the SP:

Error Msg:
Msg 2714, Level 16, State 3, Procedure NIC_OA_GetPhysicianXRayReportsOut, Line 119
There is already an object named 'NIC_OA_GetPhysicianXRayReportsOut' in the database

/****** Object: StoredProcedure [dbo].[NIC_OA_GetPatientXRayReportsOut] Script Date: 03/27/2012 18:18:31 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[NIC_OA_GetPatientXRayReportsOut]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[NIC_OA_GetPatientXRayReportsOut]
GO
/****** Object: StoredProcedure [dbo].[NIC_OA_GetPatientXRayReportsOut] Script Date: 03/27/2012 18:18:31 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
'
' Execution Samples: NIC_OA_GetPhysicianXRayReportsOut @lPhysician
' NIC_OA_GetPhysicianXRayReportsOut 70
'
' REVISION HISTORY
'
' Date       Developer          Comments
' _________  __________________ _____________________________________________
'
'******************************************************************************/
create PROCEDURE [dbo].[NIC_OA_GetPhysicianXRayReportsOut]
    @lPhysician INT,
    @Page INT = 1,
    @RecsPerPage INT = 50
AS
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET NOCOUNT ON

CREATE TABLE #TempItems (
    ID INT IDENTITY,
    lID INT NOT NULL
)

INSERT INTO #TempItems (lID)
--SELECT lID FROM tblItem
SELECT a.lid
FROM [PhysicianXRayRequisitions] a
INNER JOIN [Map_XRayRequisitionToProgressNote] b ON a.lID = b.lXRayRequisition
INNER JOIN [ProgressNote] c ON c.lID = b.lProgressNote
INNER JOIN patient d ON c.lPatient = d.lID
WHERE a.lPhysician = @lPhysician
  AND a.nRecordStatus = 1
  AND a.bReportReceived = 0
  AND a.bReportRemoved = 0
  AND a.szLastPrintedBy IS NOT NULL
ORDER BY a.dDateOrdered ASC

-- Find out the first and last record we want
DECLARE @FirstRec INT, @LastRec INT
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)

SELECT Isnull(b.szLaboratory, 'non selected') AS szLaboratory,
    a.lID,
    a.bDiagnosticMammogram,
    a.bScreeningMammogram,
    a.dDateOrdered,
    a.dCheckForReport,
    p.szLast,
    p.szFirst,
    a.lPhysician,
    p.szFirst AS szPhysicianFirst,
    p.szLast AS szPhysicianLast,
    szLastPrintedBy,
    pat.szFirst AS szPatFirst,
    pat.szLast AS szPatLast,
    pat.lid AS lPatient,
    Rtrim(Isnull(DIform.szFileName, '')) AS szDIFormFileName,
    a.lDIform_data,
    DIform_data.lDIform,
    a.szListRequisition_FreeForm,
    Isnull(f.FormLiteFormID, 0) AS FormLiteFormID,
    f.FormLiteSnapshotID,
    TotalRecords = (SELECT Count(*) FROM #TempItems TI)
FROM #TempItems
INNER JOIN [PhysicianXrayRequisitions] a ON a.lid = #TempItems.lID
LEFT JOIN [Laboratory] b ON b.lID = a.lLaboratory
LEFT JOIN FormLiteSnapshotMap f ON a.lid = f.lPhysicianXRayRequisitions
INNER JOIN [Map_XRayRequisitionToProgressNote] c ON c.lXRayRequisition = a.lID
INNER JOIN [ProgressNote] d ON c.lProgressNote = d.lID
INNER JOIN [Patient] pat ON d.lPatient = pat.lID
INNER JOIN [Physician] p ON p.lid = a.lPhysician
LEFT JOIN DIform_data ON DIform_data.lid = a.lDIform_data
LEFT JOIN DIform ON DIform.lid = DIform_data.lDIform
WHERE #TempItems.ID > @FirstRec AND #TempItems.ID < @LastRec
ORDER BY #TempItems.ID

SET NOCOUNT OFF

Replication failing with Merge process could not enumerate

Posted: 05 Aug 2012 08:01 PM PDT

Hi All,

Out of the blue my SQL server started going slow sporadically, and every so often it throws up some errors.

SQL Server: 2008
Windows Server: 2008
Merge replication with 1 subscriber

The errors that are occurring:

The merge process could not enumerate changes at the 'Subscriber'. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200999)
Get help: http://help/MSSQL_REPL-2147200999
TCP Provider: An existing connection was forcibly closed by the remote host. (Source: MSSQLServer, Error number: 10054)
Get help: http://help/10054
Communication link failure (Source: MSSQLServer, Error number: 10054)
Get help: http://help/10054
Protocol error in TDS stream (Source: MSSQLServer, Error number: 0)
Get help: http://help/0

Has anyone come across this before? How does one switch on verbose logging for merge replication?

Thanks,
Reggie

Return empty if a specific record is encountered

Posted: 04 Jul 2013 07:18 AM PDT

Hi all, this seems to be very easy (and probably is!!) but I'm just stuck here. I have a table that contains an ID, a date and a status. I want to return the top row (most recent datetime) for an ID only if none of its rows has status 'C' (as in Canceled). Here's an example:

create table #Temp1 (RecID int, DateEntered Datetime, Status Varchar(1))
INSERT INTO #Temp1 VALUES (1,'01-01-2013 10:20:10', 'N')
INSERT INTO #Temp1 VALUES (1,'01-02-2013 08:14:00', 'R')
INSERT INTO #Temp1 VALUES (1,'01-03-2013 03:30:00', 'E')
INSERT INTO #Temp1 VALUES (2,'01-01-2013 14:58:00', 'N')
INSERT INTO #Temp1 VALUES (2,'01-02-2013 08:23:00', 'R')
INSERT INTO #Temp1 VALUES (2,'01-04-2013 22:14:00', 'C')
select * from #Temp1
drop table #temp1

In this example my result set should be:

1 ; 01-03-2013 03:30:00 ; E

No record should be returned for RecID 2 because its last row is a 'C'. The 'C' status will always be in the last row. Thank you all for your help!!

Can I change a column from smallint to float?

Posted: 04 Jul 2013 09:53 AM PDT

I've got a column in one table that's a smallint. Then I've got another table with the same column name (it's meant to be the same thing), but there it is a float. I'd really like to change the data type in the first table to float too, but am concerned that I might lose data. May I just change the data type from smallint to float, and will SQL Server 2008 R2 keep my data?
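For reference, smallint to float is a widening conversion (every smallint value is exactly representable as a float), so no precision is lost. A sketch of the change, with made-up table/column names; an ALTER like this can fail if constraints or indexes depend on the column:

```sql
-- Hypothetical names: replace dbo.MyTable and MyCol with the real ones.
ALTER TABLE dbo.MyTable ALTER COLUMN MyCol float;
```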

Retrieve the TOP 10 Elapsed time (query)

Posted: 03 Jul 2013 10:26 PM PDT

Hi, can someone give me a script that returns the top 10 SQL queries by elapsed time over the past 7 days, please?
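A starting-point sketch using sys.dm_exec_query_stats; note it only covers plans still in the plan cache, and the times accumulate since each plan's creation_time rather than over a strict 7-day window:

```sql
SELECT TOP 10
    qs.total_elapsed_time / 1000000.0 AS total_elapsed_sec,
    qs.execution_count,
    qs.creation_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.creation_time >= DATEADD(day, -7, GETDATE())
ORDER BY qs.total_elapsed_time DESC;
```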

Impact of DBCC UPDATEUSAGE (0)

Posted: 03 Jul 2013 08:17 PM PDT

We got the advice to run DBCC UPDATEUSAGE on one of our databases. What impact can be expected from this command? Can this be done online? Does this impact the server's performance?

This is for a 2005 system; the database size is 40 GB. On a 2008 system, a comparable database of 10 GB was done in 10 secs. On a 2008 R2 system (small), a 200 GB database was done in just under 6 minutes. A test on a backup of the database for the 2005 system is in preparation.

Thanks for your time and attention,
Ben Brugman

Getting error while creating database

Posted: 03 Jul 2013 11:27 PM PDT

I am getting an error while creating a database:

"SELECT failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods. (Microsoft SQL Server, Error: 1934)"

Thanks,
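A hedged note on what this error usually indicates: the session has QUOTED_IDENTIFIER set OFF while an object that requires it ON (an indexed view, an index on a computed column, etc.) is being touched. Setting the option for the session is often enough:

```sql
-- Required ON for indexed views, indexes on computed columns,
-- query notifications, and xml data type methods.
SET QUOTED_IDENTIFIER ON;
```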

sql 2000 to sql 2008

Posted: 04 Jul 2013 03:15 AM PDT

Can someone tell me why this works in SQL 2000 but not SQL 2008?

DECLARE @rows varchar(5), @cnt int, @pak decimal, @results varchar(1000)
SELECT @cnt = isnull(max(seq),0) from @patemp   <<< [i]this temp table is empty so @cnt becomes zero[/i]
IF @cnt>0
BEGIN
UPDATE unanet..sequence_number SET @pak=last_number=last_number+@cnt WHERE table_name='project_assignment'
[b]UPDATE @patemp SET pa_key=@pak-@cnt+seq[/b]   << this worked in SQL 2000 but gives an error in SQL 2008:

Data type decimal of receiving variable is not equal to the data type decimal of column 'last_number'. [SQLSTATE 42000] (Error 425). The step failed.

Sorry, here is the table:

CREATE TABLE [dbo].[sequence_number](
[table_name] [varchar](50) NOT NULL,
[last_number] [decimal](15, 0) NOT NULL,
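A commonly suggested fix for error 425, sketched here without being verified against this exact job: a bare "decimal" defaults to decimal(18,0), while the column is decimal(15,0), and SQL Server 2008 enforces an exact match for the compound SET @var = col = expr form.

```sql
-- Declare the variable to match the column's precision and scale exactly.
DECLARE @pak decimal(15, 0);
```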

configure location of analysis services database and deployment

Posted: 03 Jul 2013 07:57 PM PDT

Hi, I am creating a new Analysis Services project and would like the Analysis Services database to be on a public server (not on my local PC). How do I configure that? Or can the cube be developed locally and deployed to a public server? Thanks in advance

Worker Thread

Posted: 03 Jul 2013 11:19 PM PDT

Hi, I want to fire an alarm upon detecting the percentage of worker threads used. Please give me a query to find out the worker thread usage.
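One way to sketch this is to compare the current worker count against the configured maximum:

```sql
-- Workers in use vs. the configured maximum, as a percentage.
SELECT
    (SELECT COUNT(*) FROM sys.dm_os_workers)           AS workers_in_use,
    (SELECT max_workers_count FROM sys.dm_os_sys_info) AS max_workers,
    100.0 * (SELECT COUNT(*) FROM sys.dm_os_workers)
          / (SELECT max_workers_count FROM sys.dm_os_sys_info) AS pct_used;
```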

SQL I/O Errors

Posted: 03 Jul 2013 11:15 PM PDT

Hi, how can I get SQL I/O errors with a SQL query?
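One approach, sketched with the (undocumented but widely used) xp_readerrorlog procedure, is to search the SQL Server error log for I/O messages:

```sql
-- Args: log file number (0 = current), log type (1 = SQL error log),
-- first search string, second search string.
EXEC master.dbo.xp_readerrorlog 0, 1, N'I/O', NULL;
```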

Reporting Services DRP Strategy

Posted: 04 Jul 2013 01:04 AM PDT

Hey everyone, I'm looking for some insight on Reporting Services 2008. I currently back up the ReportServer database and load it weekly to our DRP site (we don't make many changes). While I fully understand that I could mirror this database, I have now decided to have the DRP server run off the DRP data sources. Is there a way, when changes are made to reports, to have them copied to the DRP site without manually uploading the RDL files every time a change is made? If not, then this just doesn't make sense to me. Thanks in advance for your expertise! Steve

Find All Compressed tables in database - script

Posted: 12 Apr 2010 07:09 AM PDT

So here is what I have so far. It seems to work, but I swear I've seen a simpler way to do this. What I want to do is display all tables in a database that are using some sort of compression (regardless of whether it's PAGE or ROW). Below is what I already have, but if anyone knows of anything better I would be grateful.

--script to identify compressed tables
SELECT st.name, st.object_id, sp.partition_id, sp.partition_number, sp.data_compression, sp.data_compression_desc
FROM sys.partitions SP
INNER JOIN sys.tables ST ON st.object_id = sp.object_id
WHERE data_compression <> 0

;-)

Table Locks

Posted: 03 Jul 2013 11:18 PM PDT

Hi, I want to fire an alarm when the number of times per second that page locks escalate to table locks exceeds a threshold.
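A sketch of where to read that number from: the Access Methods performance counters exposed through sys.dm_os_performance_counters.

```sql
-- "Table Lock Escalations/sec" is a per-second counter, so sample the
-- raw value twice and take the difference over the sampling interval.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Table Lock Escalations/sec';
```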

Insert with ' Symbol

Posted: 03 Jul 2013 08:31 PM PDT

Hi team.

Insert into table_Name values (1,'Text') -- Done.
Insert into table_Name values (2,'Text's') -- Unclosed quotation mark after the character string ')

How do I insert a value containing a ' symbol? Please help.
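In T-SQL a single quote inside a string literal is escaped by doubling it, so the failing statement above can be written as:

```sql
-- '' inside the literal produces one ' in the stored value.
Insert into table_Name values (2, 'Text''s');
```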

memory

Posted: 03 Jul 2013 09:46 PM PDT

How do I restrict memory at the query level and also at the server level?
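A sketch of the server-level cap (the 4096 MB value is only an example). At the query level, memory grants can be limited through Resource Governor workload groups (REQUEST_MAX_MEMORY_GRANT_PERCENT), available in SQL Server 2008 Enterprise:

```sql
-- Server level: cap the instance's memory with 'max server memory' (MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```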

Reg Query Count

Posted: 03 Jul 2013 09:41 PM PDT

Hi all, from the trace it's been found that some stored procedures are running 15k times. Is there any way we can reduce the number of times they run?

Transposing two rows of data to one row

Posted: 03 Jul 2013 09:17 PM PDT

I have a large table of 2M records and I need to combine every two rows with the same Document No_ into one row, placing the other fields on that row, as below:

Document No_   Dimension Code   Dimension Value Code
SHP99994       DEPARTMENT       MHT
SHP99994       TRADETYPE        M

I would like the code to change the above display to the following:

Document No_   Dimension Value Code_1   Dimension Value Code_2
SHP99994       MHT                      M

Can you help please? Thank you in advance
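One sketch, using conditional aggregation and the column names from the example (the table name DocumentDimensions is hypothetical):

```sql
-- Pivot the two dimension rows per document into two columns.
SELECT [Document No_],
       MAX(CASE WHEN [Dimension Code] = 'DEPARTMENT'
                THEN [Dimension Value Code] END) AS [Dimension Value Code_1],
       MAX(CASE WHEN [Dimension Code] = 'TRADETYPE'
                THEN [Dimension Value Code] END) AS [Dimension Value Code_2]
FROM DocumentDimensions   -- hypothetical table name
GROUP BY [Document No_];
```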

SSIS - Oracle Source character set changed from non unicode to Unicode - Do I need to change the code?

Posted: 10 Sep 2012 06:06 AM PDT

Hi, our source Oracle database has changed from a non-Unicode to a Unicode character set. The mappings were done with DT_STR for the source and target external columns. Do I need to change the code to DT_WSTR for the source and target external columns, with a Unicode conversion transformation? Or will changing the target SQL Server database character set resolve the issue? Thank you.

How to find who changed the database to Single_user mode.

Posted: 03 Jul 2013 08:55 PM PDT

Hi all, one of our databases changed to single_user mode, but we could not find the exact user, hostname, etc. Is there any way to find these details? I have gone through the SQL Server error logs and the Windows event logs but could not find the user who changed the database to single_user mode. Do databases automatically change to single_user mode? Regards, Varun
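One place worth checking, sketched here: the default trace captures ALTER DATABASE as an Object:Altered event (EventClass 164) and records the login and host name, as long as the trace files haven't rolled over yet.

```sql
-- Read the default trace and filter for Object:Altered events.
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT StartTime, LoginName, HostName, DatabaseName, TextData
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 164;   -- Object:Altered
```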

Database Restoring

Posted: 03 Jul 2013 08:09 PM PDT

Hi all, how do I set a database that is stuck in restoring mode back to normal mode? Thanks, RR
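A sketch with a hypothetical database name: a database left in the RESTORING state (e.g. after a restore WITH NORECOVERY) is brought online by completing recovery:

```sql
-- Complete recovery and bring the database online.
RESTORE DATABASE MyDb WITH RECOVERY;
```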

Calculation For Time Attendance System

Posted: 03 Jul 2013 07:28 PM PDT

I have loaded data from an entry-pass scan in/out system into the database and written a calculation in a SQL script to generate a report in an ASPX web page. I have different types of shift patterns, shown here: [img]http://img841.imageshack.us/img841/7942/cj81.jpg[/img]

Currently I am facing a problem:
a) Total actual working minutes (as Work Min) is 720 minutes per day [12 hours per day] (anything over 720 minutes of working time is considered under the category Extra OT hours).
b) Cut off the scan in/out times at 0700 ~ 1900 (DAY) and 1900 ~ 0700 (NIGHT) for the calculation. However, if the person clocks in late or out early, the time calculation is based on the last and the first scans.

Can anyone help me amend the calculation in the SQL script? For the work-hour calculation, my idea is to take (last scan time - EP_SHIFT_TIMEFROM) for every shift type. Kindly advise, thank you.

SELECT CONVERT(VARCHAR(8),STUFF(STUFF(FIRSTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':'),112) AS SCANDATE ,FIRSTSCAN.EP_EMP_COMPANY ,FIRSTSCAN.EP_EMP_DEPT ,FIRSTSCAN.EP_EMP_ID ,FIRSTSCAN.EP_EMP_NAME ,FIRSTSCAN.EP_EMP_SECTION ,FIRSTSCAN.EP_EMP_SHIFT ,FIRSTSCAN.EP_SHIFT ,right(FIRSTSCAN.EP_SCAN_DATE,6) AS FIRSTSCAN ,right(LASTSCAN.EP_SCAN_DATE,6) AS LASTSCAN ,SCANTIMECAL.INFAB AS INFAB_MIN ,SCANTIMECAL.OUTFAB AS OUTFAB_MIN ,DATEDIFF(MI,CAST(STUFF(STUFF(FIRSTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME) ,CAST(STUFF(STUFF(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) AS WORK_MIN ,DATEDIFF(HOUR,CAST(STUFF(STUFF(FIRSTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME) ,CAST(STUFF(STUFF(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) AS WORK_HOUR ,CASE WHEN FIRSTSCAN.EP_SHIFT <> 'R1' AND FIRSTSCAN.EP_SHIFT <> 'R2' THEN ROUND(CAST((DATEDIFF(MINUTE,CAST(STUFF(STUFF((CASE WHEN SHIFTCAL.EP_SHIFT = 'N1' THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(FIRSTSCAN.EP_SCAN_DATE ,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE LEFT(FIRSTSCAN.EP_SCAN_DATE
,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END),12,0,':'),15,0,':') AS DATETIME) ,CAST(STUFF(STUFF(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) / 60.0) AS FLOAT),2) ELSE ROUND(CAST((DATEDIFF(MINUTE,CAST(STUFF(STUFF(FIRSTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME) ,CAST(STUFF(STUFF(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) / 60.0) AS FLOAT),2) END AS OTWORK_HOUR ,CASE WHEN (FIRSTSCAN.EP_SHIFT = 'N1' OR FIRSTSCAN.EP_SHIFT = 'D1') AND ROUND(CAST((DATEDIFF(MINUTE,CAST(STUFF(STUFF((CASE WHEN SHIFTCAL.EP_SHIFT = 'N1' THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(FIRSTSCAN.EP_SCAN_DATE ,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE LEFT(FIRSTSCAN.EP_SCAN_DATE ,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END),12,0,':'),15,0,':') AS DATETIME) ,CAST(STUFF(STUFF(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) / 60.0) AS FLOAT),2) >= 3 THEN 3 - 0.25 END OTHOUR_FIX ,CASE WHEN (FIRSTSCAN.EP_SHIFT = 'N1' OR FIRSTSCAN.EP_SHIFT = 'D1') AND ROUND(CAST((DATEDIFF(MINUTE,CAST(STUFF(STUFF((CASE WHEN SHIFTCAL.EP_SHIFT = 'N1' THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(FIRSTSCAN.EP_SCAN_DATE ,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE LEFT(FIRSTSCAN.EP_SCAN_DATE ,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END),12,0,':'),15,0,':') AS DATETIME) ,CAST(STUFF(STUFF(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) / 60.0) AS FLOAT),2) >= 3 THEN ROUND(CAST((DATEDIFF(MINUTE,CAST(STUFF(STUFF((CASE WHEN SHIFTCAL.EP_SHIFT = 'N1' THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(FIRSTSCAN.EP_SCAN_DATE ,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE LEFT(FIRSTSCAN.EP_SCAN_DATE ,8) + ' ' + 
REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END),12,0,':'),15,0,':') AS DATETIME), CAST(STUFF(STUFF(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) / 60.0) AS FLOAT) - 3,2) ELSE CASE WHEN (FIRSTSCAN.EP_SHIFT = 'R1' OR FIRSTSCAN.EP_SHIFT = 'R2') THEN ROUND(CAST((DATEDIFF(MINUTE,CAST(STUFF(STUFF(FIRSTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME) ,Cast(STUFF(Stuff(LASTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) / 60.0) AS FLOAT),2)END END OTHOUR_EXTRA ,CASE WHEN FIRSTSCAN.EP_SHIFT <> 'R1' AND FIRSTSCAN.EP_SHIFT <> 'R2' THEN FIRSTSCAN.LATEIN END LATEIN ,CASE WHEN FIRSTSCAN.EP_SHIFT <> 'R1' AND FIRSTSCAN.EP_SHIFT <> 'R2' THEN CASE WHEN SCANTIMECAL.OUTFAB >= SHIFTDESC.EP_SHIFT_OGRACE THEN 1 END END AS BREAK_ABNORMAL ,CASE WHEN FIRSTSCAN.EP_SHIFT <> 'R1' AND FIRSTSCAN.EP_SHIFT <> 'R2' THEN CASE WHEN RIGHT(LASTSCAN.EP_SCAN_DATE,6) < REPLACE(CONVERT(VARCHAR(8), CONVERT(VARCHAR(8),SHIFTDESC.EP_SHIFT_TIMETO,108),108),':','') THEN 1 END END AS EARLYOUT_NORMAL ,(CASE WHEN FIRSTSCAN.EP_SHIFT<> 'NS' AND FIRSTSCAN.EP_SHIFT <> 'R1' AND FIRSTSCAN.EP_SHIFT <> 'R2' THEN (CASE WHEN RIGHT(LASTSCAN.EP_SCAN_DATE,6) < REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+3,CONVERT(VARCHAR(8),SHIFTDESC.EP_SHIFT_TIMETO,108)),108),':','') THEN 1 END) END)AS EARLYOUT_SHIFT FROM ( SELECT ROW_NUMBER() OVER(PARTITION BY SCANHIST.EP_EMP_ID,CASE WHEN RIGHT(SCANHIST.EP_SCAN_DATE,6) < '130000' AND SHIFTCAL.EP_SHIFT = 'N1' THEN CONVERT(VARCHAR(8),DATEADD(DAY,-1,CONVERT(VARCHAR(8),LEFT(SCANHIST.EP_SCAN_DATE,8),112)),112) ELSE LEFT(SCANHIST.EP_SCAN_DATE,8) END ORDER BY SCANHIST.EP_EMP_ID) AS RowNum ,SCANHIST.EP_SCAN_DATE ,CASE WHEN RIGHT(SCANHIST.EP_SCAN_DATE,6) < '130000' AND SHIFTCAL.EP_SHIFT = 'N1' THEN CONVERT(VARCHAR(8),DATEADD(DAY,-1,CONVERT(VARCHAR(8),LEFT(SCANHIST.EP_SCAN_DATE,8),112)),112) ELSE LEFT(SCANHIST.EP_SCAN_DATE,8) END AS EMP_WORKDATE ,EMPINFO.EP_EMP_COMPANY ,SCANHIST.EP_EMP_ID ,SCANHIST.EP_EMP_NAME ,SCANHIST.EP_EMP_DEPT 
,SCANHIST.EP_EMP_SECTION ,SCANHIST.EP_EMP_SHIFT ,SHIFTCAL.EP_SHIFT ,SCANHIST.EP_SCAN_ID ,SCANHIST.EP_TRANS_LOC ,CASE WHEN RIGHT(SCANHIST.EP_SCAN_DATE,6) > REPLACE(SHIFTDESC.EP_SHIFT_TIMEFR,':','') THEN 1 END AS LATEIN FROM [AcmkIMS].[dbo].[EP_SCAN_HIST] SCANHIST JOIN [AcmkIMS].[dbo].[EP_EMP_INFO] EMPINFO ON EMPINFO.EP_EMP_ID = SCANHIST.EP_EMP_ID JOIN [AcmkIMS].[dbo].[EP_SHIFT_CALENDAR] SHIFTCAL ON SHIFTCAL.EP_SHIFT_NAME = SCANHIST.EP_EMP_SHIFT AND SHIFTCAL.EP_SHIFT_DATE = LEFT(SCANHIST.EP_SCAN_DATE,8) JOIN [AcmkIMS].[dbo].[EP_SHIFT_DESC] SHIFTDESC ON SHIFTDESC.EP_SHIFT_NAME = SHIFTCAL.EP_SHIFT WHERE 1=1 AND SCANHIST.EP_SCAN_DATE >= '20130524' + ' ' + CASE WHEN (SHIFTCAL.EP_SHIFT <> 'R1' AND SHIFTCAL.EP_SHIFT <> 'R2') THEN REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,-4,SHIFTDESC.EP_SHIFT_TIMEFR + ':00'),108),':','') ELSE REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,-0,SHIFTDESC.EP_SHIFT_TIMEFR + ':00'),108),':','') END AND SCANHIST.EP_SCAN_DATE < CASE WHEN (SHIFTCAL.EP_SHIFT = 'N1') THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,'20130526'),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE CASE WHEN (SHIFTCAL.EP_SHIFT = 'R1' OR SHIFTCAL.EP_SHIFT = 'R2') THEN '20130526' + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE '20130526' + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END END AND SCANHIST.EP_TRANS_LOC = 'IN' AND EMPINFO.EP_EMP_LEVEL > '10' --AND EMPINFO.EP_EMP_LEVEL <> '' --AND SCANHIST.EP_EMP_DEPT = '' --AND SCANHIST.EP_EMP_SECTION = '' --AND SCANHIST.EP_EMP_SHIFT = '' --AND SCANHIST.EP_EMP_ID = '' AND SCANHIST.EP_EMP_SHIFT ='A' )FIRSTSCAN OUTER APPLY ( SELECT TOP 1 SCANHIST.EP_SCAN_DATE ,EMPINFO.EP_EMP_COMPANY ,SCANHIST.EP_EMP_ID ,SCANHIST.EP_EMP_NAME ,SCANHIST.EP_EMP_DEPT ,SCANHIST.EP_EMP_SECTION ,SCANHIST.EP_EMP_SHIFT ,SHIFTCAL.EP_SHIFT ,SCANHIST.EP_SCAN_ID ,SCANHIST.EP_TRANS_LOC FROM 
[AcmkIMS].[dbo].[EP_SCAN_HIST] SCANHIST JOIN [AcmkIMS].[dbo].[EP_EMP_INFO] EMPINFO ON EMPINFO.EP_EMP_ID = SCANHIST.EP_EMP_ID JOIN [AcmkIMS].[dbo].[EP_SHIFT_CALENDAR] SHIFTCAL ON SHIFTCAL.EP_SHIFT_NAME = SCANHIST.EP_EMP_SHIFT AND SHIFTCAL.EP_SHIFT_DATE = LEFT(FIRSTSCAN.EP_SCAN_DATE,8) JOIN [AcmkIMS].[dbo].[EP_SHIFT_DESC] SHIFTDESC ON SHIFTDESC.EP_SHIFT_NAME = SHIFTCAL.EP_SHIFT WHERE 1=1 AND SCANHIST.EP_SCAN_DATE > FIRSTSCAN.EP_SCAN_DATE AND SCANHIST.EP_SCAN_DATE < CASE WHEN (FIRSTSCAN.EP_SHIFT = 'N1') THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(FIRSTSCAN.EP_SCAN_DATE,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') else CASE WHEN (FIRSTSCAN.EP_SHIFT = 'R1' OR FIRSTSCAN.EP_SHIFT = 'R2') THEN left(FIRSTSCAN.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE left(FIRSTSCAN.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END END AND SCANHIST.EP_TRANS_LOC = 'OUT' AND EMPINFO.EP_EMP_LEVEL > '10' AND EMPINFO.EP_EMP_LEVEL <> '' AND SCANHIST.EP_EMP_ID = FIRSTSCAN.EP_EMP_ID ORDER BY SCANHIST.EP_SCAN_DATE )LASTSCAN OUTER APPLY ( SELECT GROUP_SCANTIMECAL.EP_EMP_ID ,SUM(CAST(GROUP_SCANTIMECAL.INFAB_MIN AS FLOAT)) AS INFAB ,SUM(CAST(GROUP_SCANTIMECAL.OUTFAB_MIN AS FLOAT)) AS OUTFAB FROM ( SELECT SCANHIST.EP_SCAN_DATE ,SCANHIST.EP_EMP_ID ,SCANHIST.EP_EMP_NAME ,SCANHIST.EP_EMP_DEPT ,SCANHIST.EP_EMP_SECTION ,SCANHIST.EP_EMP_SHIFT ,SCANHIST.EP_TRANS_LOC ,DATEDIFF(MI,CAST(STUFF(STUFF(SCANHIST.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME), CAST(STUFF(STUFF(NEXTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) AS INFAB_MIN ,DATEDIFF(MI,CAST(STUFF(STUFF(NEXTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME), CAST(STUFF(STUFF(PREVSCAN.EP_SCAN_DATE,12,0,':'),15,0,':') AS DATETIME)) AS OUTFAB_MIN FROM [AcmkIMS].[dbo].[EP_SCAN_HIST] SCANHIST OUTER APPLY ( SELECT TOP 1 NEXTSCAN.EP_SCAN_DATE 
,NEXTSCAN.EP_EMP_ID ,NEXTSCAN.EP_EMP_NAME ,NEXTSCAN.EP_EMP_DEPT ,NEXTSCAN.EP_EMP_SECTION ,NEXTSCAN.EP_EMP_SHIFT ,NEXTSCAN.EP_SCAN_ID ,NEXTSCAN.EP_TRANS_DESC ,NEXTSCAN.EP_TRANS_LOC FROM [AcmkIMS].[dbo].[EP_SCAN_HIST] NEXTSCAN JOIN [AcmkIMS].[dbo].[EP_EMP_INFO] EMPINFO ON EMPINFO.EP_EMP_ID = NEXTSCAN.EP_EMP_ID JOIN [AcmkIMS].[dbo].[EP_SHIFT_CALENDAR] SHIFTCAL ON SHIFTCAL.EP_SHIFT_NAME = NEXTSCAN.EP_EMP_SHIFT AND SHIFTCAL.EP_SHIFT_DATE = LEFT(FIRSTSCAN.EP_SCAN_DATE,8) JOIN [AcmkIMS].[dbo].[EP_SHIFT_DESC] SHIFTDESC ON SHIFTDESC.EP_SHIFT_NAME = SHIFTCAL.EP_SHIFT WHERE 1=1 AND SCANHIST.EP_SCAN_ID = NEXTSCAN.EP_SCAN_ID AND NEXTSCAN.EP_SCAN_DATE > SCANHIST.EP_SCAN_DATE AND NEXTSCAN.EP_SCAN_DATE < CASE WHEN (FIRSTSCAN.EP_SHIFT = 'N1') THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(NEXTSCAN.EP_SCAN_DATE,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE CASE WHEN (FIRSTSCAN.EP_SHIFT = 'R1' OR FIRSTSCAN.EP_SHIFT = 'R2') THEN LEFT(NEXTSCAN.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE LEFT(NEXTSCAN.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END END AND NEXTSCAN.EP_TRANS_LOC = 'OUT' ORDER BY NEXTSCAN.EP_SCAN_DATE )NEXTSCAN OUTER APPLY ( SELECT TOP 1 PREVSCAN.EP_SCAN_DATE ,PREVSCAN.EP_EMP_ID ,PREVSCAN.EP_EMP_NAME ,PREVSCAN.EP_EMP_DEPT ,PREVSCAN.EP_EMP_SECTION ,PREVSCAN.EP_EMP_SHIFT ,PREVSCAN.EP_SCAN_ID ,PREVSCAN.EP_TRANS_DESC ,PREVSCAN.EP_TRANS_LOC FROM [AcmkIMS].[dbo].[EP_SCAN_HIST] PREVSCAN JOIN [AcmkIMS].[dbo].[EP_EMP_INFO] EMPINFO ON EMPINFO.EP_EMP_ID = PREVSCAN.EP_EMP_ID JOIN [AcmkIMS].[dbo].[EP_SHIFT_CALENDAR] SHIFTCAL ON SHIFTCAL.EP_SHIFT_NAME = PREVSCAN.EP_EMP_SHIFT AND SHIFTCAL.EP_SHIFT_DATE = LEFT(FIRSTSCAN.EP_SCAN_DATE,8) JOIN [AcmkIMS].[dbo].[EP_SHIFT_DESC] SHIFTDESC ON SHIFTDESC.EP_SHIFT_NAME = SHIFTCAL.EP_SHIFT WHERE 1=1 AND SCANHIST.EP_SCAN_ID = 
PREVSCAN.EP_SCAN_ID AND PREVSCAN.EP_SCAN_DATE > SCANHIST.EP_SCAN_DATE AND PREVSCAN.EP_SCAN_DATE < CASE WHEN (FIRSTSCAN.EP_SHIFT = 'N1') THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(SCANHIST.EP_SCAN_DATE,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE CASE WHEN (FIRSTSCAN.EP_SHIFT = 'R1' OR FIRSTSCAN.EP_SHIFT = 'R2') THEN LEFT(SCANHIST.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE LEFT(SCANHIST.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END END AND PREVSCAN.EP_TRANS_LOC = 'IN' ORDER BY PREVSCAN.EP_SCAN_DATE )PREVSCAN JOIN [AcmkIMS].[dbo].[EP_EMP_INFO] EMPINFO ON EMPINFO.EP_EMP_ID = SCANHIST.EP_EMP_ID JOIN [AcmkIMS].[dbo].[EP_SHIFT_CALENDAR] SHIFTCAL ON SHIFTCAL.EP_SHIFT_NAME = SCANHIST.EP_EMP_SHIFT AND SHIFTCAL.EP_SHIFT_DATE = LEFT(FIRSTSCAN.EP_SCAN_DATE,8) JOIN [AcmkIMS].[dbo].[EP_SHIFT_DESC] SHIFTDESC ON SHIFTDESC.EP_SHIFT_NAME = SHIFTCAL.EP_SHIFT WHERE 1=1 AND SCANHIST.EP_SCAN_DATE >= FIRSTSCAN.EP_SCAN_DATE AND SCANHIST.EP_SCAN_DATE < CASE WHEN (FIRSTSCAN.EP_SHIFT = 'N1') THEN CONVERT(VARCHAR(8),DATEADD(DAY,+1,LEFT(FIRSTSCAN.EP_SCAN_DATE,8)),112) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE CASE WHEN (FIRSTSCAN.EP_SHIFT = 'R1' OR FIRSTSCAN.EP_SHIFT = 'R2') THEN LEFT(FIRSTSCAN.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+0,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') ELSE LEFT(FIRSTSCAN.EP_SCAN_DATE,8) + ' ' + REPLACE(CONVERT(VARCHAR(8),DATEADD(HOUR,+6,SHIFTDESC.EP_SHIFT_TIMETO + ':00'),108),':','') END END AND SCANHIST.EP_EMP_ID = FIRSTSCAN.EP_EMP_ID AND SCANHIST.EP_TRANS_LOC = 'IN' )GROUP_SCANTIMECAL GROUP BY GROUP_SCANTIMECAL.EP_EMP_ID )SCANTIMECAL JOIN [AcmkIMS].[dbo].[EP_SHIFT_CALENDAR] SHIFTCAL ON SHIFTCAL.EP_SHIFT_NAME = FIRSTSCAN.EP_EMP_SHIFT AND SHIFTCAL.EP_SHIFT_DATE 
= LEFT(FIRSTSCAN.EP_SCAN_DATE,8) JOIN [AcmkIMS].[dbo].[EP_SHIFT_DESC] SHIFTDESC ON SHIFTDESC.EP_SHIFT_NAME = SHIFTCAL.EP_SHIFT WHERE 1=1 AND FIRSTSCAN.RowNum = 1 AND CONVERT(VARCHAR(8),STUFF(STUFF(FIRSTSCAN.EP_SCAN_DATE,12,0,':'),15,0,':'),112) BETWEEN '20130524' AND '20130526' ORDER BY FIRSTSCAN.EP_EMP_ID
