Monday, September 30, 2013

[how to] Monitor Usage on a SSAS 2012 Tabular Model

Monitor Usage on a SSAS 2012 Tabular Model

Posted: 30 Sep 2013 07:15 PM PDT

I have an SSAS 2012 tabular model and I am trying to figure out the most efficient way to track which users are accessing it. My tabular model is on a server that is used by other databases, so system-wide tracing and logging are not options I'd like to implement, as I don't want to affect performance for those other databases. I haven't been able to find much while searching online, and I'm still pretty new to tabular models.

Has anyone implemented something similar, or does anyone have any ideas?
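One lightweight option that avoids server-wide tracing is to poll the SSAS dynamic management views, which only touch the Analysis Services instance. A sketch, run from an MDX query window connected to the Tabular instance (column names are per the DISCOVER_SESSIONS schema rowset; verify against your build):

```sql
-- Who is connected right now, and what they last ran.
-- DMV queries hit only this SSAS instance, not other databases on the server.
SELECT SESSION_ID, SESSION_USER_NAME, SESSION_LAST_COMMAND, SESSION_START_TIME
FROM $SYSTEM.DISCOVER_SESSIONS
```

Scheduling this into a logging table would give a rough usage history, at the cost of missing sessions shorter than the polling interval.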

Unattended SQL install on a remote machine

Posted: 30 Sep 2013 06:54 PM PDT

My product installation does an unattended install of SQL (SQLExpress by default). I use command line arguments rather than a configuration file, so I'm quite familiar with this process.

This unattended install occurs on the same machine that my install media is running on. However, I now need to look at installing the SQL instance on a separate machine from the one the install media is running on. The target machine will be on the same network.

Is it possible to do this unattended? If so, is it simply extra command line arguments or is it a different process (i.e. a PowerShell script)?
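SQL Server setup itself has no switch to target a remote machine, so the usual workaround (hedged; not confirmed by the source) is to make the media reachable from the target and launch the same unattended command remotely, e.g. via Sysinternals PsExec or PowerShell remoting. A sketch with placeholder machine and share names:

```
REM Hypothetical: run the unattended SQLExpress setup on remote machine TARGETPC.
REM Machine name, share path, and instance name are placeholders.
psexec \\TARGETPC -s "\\FILESERVER\media\SQLEXPR_x64_ENU.exe" /Q /ACTION=Install ^
    /FEATURES=SQLEngine /INSTANCENAME=SQLEXPRESS /IACCEPTSQLSERVERLICENSETERMS
```

The command-line arguments stay the same as a local unattended install; only the launcher changes.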

(Note: can someone with sufficient rep please add the tag for me, as I cannot yet?)

SSDT/SSIS 2012 - How to automatically load some projects?

Posted: 30 Sep 2013 03:37 PM PDT

I did not use the default projects folder in SSDT (i.e. SSIS 2012); I used my own instead. Every time I open SSIS, I have to load the projects manually. Is there any way I can load them automatically?

Audit table insert restriction

Posted: 30 Sep 2013 02:36 PM PDT

So I'm trying to create my own audit table for some user management analysis and stumbled across a little problem. I use this trigger

CREATE OR REPLACE TRIGGER usr_t_audit_user_alnd
    AFTER LOGON ON DATABASE
BEGIN
    INSERT INTO usr_t_audit_user (username, session_id, logon_day, logon_time)
    VALUES (UPPER(USER), SYS_CONTEXT('USERENV','SID'), SYSDATE,
            TO_CHAR(SYSDATE, 'HH24:MI:SS'));
END;

to insert some standard information into my table. It works fine, except that the logoff time in my other trigger isn't always correct. My actual problem, though, is that I don't want to record every user logon; I only want to audit certain chosen users. How can I achieve such a restriction? I can't put a WHERE clause in my trigger. Does anybody have ideas?
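A WHERE clause isn't available, but the trigger body is PL/SQL, so the INSERT can simply be wrapped in an IF. A sketch based on the trigger above (the audited user list is illustrative, not from the source):

```sql
CREATE OR REPLACE TRIGGER usr_t_audit_user_alnd
    AFTER LOGON ON DATABASE
BEGIN
    -- Only audit the chosen accounts; 'SCOTT' and 'HR' are placeholders.
    IF SYS_CONTEXT('USERENV', 'SESSION_USER') IN ('SCOTT', 'HR') THEN
        INSERT INTO usr_t_audit_user (username, session_id, logon_day, logon_time)
        VALUES (UPPER(USER), SYS_CONTEXT('USERENV', 'SID'), SYSDATE,
                TO_CHAR(SYSDATE, 'HH24:MI:SS'));
    END IF;
END;
/
```

For a longer or changing list, the IN list could instead be a SELECT against a small table of audited usernames.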

partitioning by datetime vs date vs int performance

Posted: 30 Sep 2013 04:06 PM PDT

I am trying to determine which partitioning strategy will be most efficient for my queries on SQL Server.

We know we want daily partitions. The data has a natural DATETIME field in it, so my gut instinct is to partition on that value. However, we have discussed the possibility of adding a DATE column (with just the time information stripped out), or possibly coding the date as an integer (YYYYMMDD, e.g., 20130930) and using that as the partitioning column.

All queries on this data will be on a specific date (WHERE ItemTime = '2013-09-30') or range of dates (WHERE ItemTime BETWEEN '2013-09-15' AND '2013-09-30'). For now, we always query on a date, but future requirements may include time details (WHERE ItemTime BETWEEN '2013-09-29 20:30:00' AND '2013-09-30 10:15:45').

I have tested performance of each strategy on a few hundred thousand rows of data, and seen no real difference. The production deployment, however, will be in the hundreds of millions up to maybe a couple billion rows.

Is one of these strategies going to lead to more efficient queries than the others? Why or why not?

Thanks for your help.

[EDIT] The queries will be formatted by application code, so I'm not concerned about how to translate between DATETIME, DATE, and INT. I can assume the query will be properly formatted based on whichever partitioning scheme is chosen. I'm just trying to find out if partition elimination will be faster using one of these data types.
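For reference, whichever type wins, the daily scheme itself looks much the same. A sketch using the DATE variant (boundary values and names are illustrative, not from the source):

```sql
-- Hypothetical daily partitioning on a DATE column; boundaries are samples.
-- RANGE RIGHT puts each boundary date into the partition to its right,
-- so each day's rows land in exactly one partition.
CREATE PARTITION FUNCTION pf_Daily (date)
    AS RANGE RIGHT FOR VALUES ('2013-09-29', '2013-09-30', '2013-10-01');

CREATE PARTITION SCHEME ps_Daily
    AS PARTITION pf_Daily ALL TO ([PRIMARY]);
```

The INT variant would differ only in the function's parameter type and boundary literals (e.g. 20130929), which is part of why elimination behavior, not syntax, is the deciding factor.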

How to concatenate multi row sql output

Posted: 30 Sep 2013 05:39 PM PDT

Version Information:

Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
HP Server Automation v9.14

I am creating a report of all my HP-SA managed servers. The report includes data from multiple tables. With the exception of Networking information everything is included on one row. I'm trying to come up with a method to concatenate the NIC NAME, MAC, IP, DNS so that I don't get multiple rows for each active NIC.

I ran into roadblocks with sub-select queries, not knowing how to pass D.DVC_ID from the outer query into the subquery. I briefly tried WM_CONCAT and LISTAGG without success. Any help would be greatly appreciated.

Posting the full SQL in case any other HPSA users out there could benefit from knowing where in the database to look for some basic hardware information:

SELECT D.SYSTEM_NAME AS "HOSTNAME",
    D.DVC_DESC AS "OPSWARE NAME",
    I.HW_ADDR AS "MAC ADDRESS",
    I.SLOT AS "INTERFACE",
    I.IP_ADDRESS AS "IP ADDRESS",
    I.NETMASK AS "SUBNET",
    D.DEFAULT_GW AS "GATEWAY",
    D.DVC_MFG AS "SERVER VENDOR",
    D.DVC_MODEL AS "SERVER MODEL",
    CONCAT (
        ROUND(MEM.QUANTITY / 1024 / 1024, 1),
        ' GB'
        ) AS "RAM",
    CPU.CPU_MODEL AS "CPU TYPE",
    COUNT(CPU.CPU_SLOT) AS "CPU QUANTITY"
FROM TRUTH.DEVICES D
INNER JOIN TRUTH.MEMORY_COMPONENTS MEM ON (D.DVC_ID = MEM.DVC_ID)
INNER JOIN TRUTH.CPU_COMPONENTS CPU ON (D.DVC_ID = CPU.DVC_ID)
INNER JOIN TRUTH.DEVICE_ROLES DR ON (D.DVC_ID = DR.DVC_ID)
INNER JOIN TRUTH.INTERFACES I ON (D.DVC_ID = I.DVC_ID)
INNER JOIN TRUTH.CUSTOMER_CLOUDS CC ON (DR.CUST_CLD_ID = CC.CUST_CLD_ID)
INNER JOIN TRUTH.DATA_CENTERS DC ON (DC.DATA_CENTER_ID = CC.DATA_CENTER_ID)
WHERE MEM.MEMORY_TYPE != 'SWAP'
    AND I.ENABLED_FLG = 'Y'
GROUP BY D.SYSTEM_NAME,
    D.DVC_DESC,
    I.HW_ADDR,
    I.SLOT,
    I.IP_ADDRESS,
    D.DVC_MFG,
    D.DVC_MODEL,
    MEM.QUANTITY,
    CPU.CPU_MODEL,
    I.NETMASK,
    D.DEFAULT_GW
ORDER BY HOSTNAME ASC,
    INTERFACE ASC

With a sample result of:

HOSTNAME         OPSWARE NAME  MAC ADDRESS        INTERFACE                IP ADDRESS      SUBNET         GATEWAY       SERVER VENDOR  SERVER MODEL             RAM   CPU TYPE                                CPU QUANTITY
---------------  ------------  -----------------  -----------------------  --------------  -------------  ------------  -------------  -----------------------  ----  --------------------------------------  ------------
WIN-6V18DDNQRL4  Server1       00:50:56:BB:66:A2  Local Area Connection    172.31.0.149    255.255.255.0  172.31.0.10   VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2
WIN-6V18DDNQRL4  Server1       00:50:56:BB:03:D3  Local Area Connection 2  169.254.246.97  255.255.0.0    172.31.0.10   VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2

Desired Output would have everything to do with the server on one row similar to the following:

HOSTNAME         OPSWARE NAME  (MAC ADDRESS/INTERFACE/IP ADDRESS/SUBNET/GATEWAY)                                                                                                                 SERVER VENDOR  SERVER MODEL             RAM   CPU TYPE                                CPU QUANTITY
---------------  ------------  ----------------------------------------------------------------------------------------------------------------------------------------------------------------  -------------  -----------------------  ----  --------------------------------------  ------------
WIN-6V18DDNQRL4  Server1       (00:50:56:BB:66:A2/Local Area Connection/172.31.0.149/255.255.255.0/172.31.0.10,00:50:56:BB:03:D3/Local Area Connection 2/169.254.246.97/255.255.0.0/172.31.0.10)  VMWARE, INC.   VMWARE VIRTUAL PLATFORM  4 GB  Intel(R) Xeon(R) CPU  E5345  @ 2.33GHz  2
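One likely reason LISTAGG failed is that it requires Oracle 11g Release 2, while the server above is 11.1.0.7.0. On 11.1 the same per-device collapse can be sketched with XMLAGG (table and column names are taken from the query above; the delimiter format is illustrative):

```sql
-- Hedged sketch: fold every enabled NIC of a device into one delimited string.
-- XMLAGG works on 11.1; LISTAGG would need 11gR2.
SELECT D.SYSTEM_NAME,
       RTRIM(XMLAGG(XMLELEMENT(e, I.HW_ADDR || '/' || I.IP_ADDRESS || ',')
                    ORDER BY I.SLOT).EXTRACT('//text()').GetClobVal(), ',') AS NICS
FROM TRUTH.DEVICES D
INNER JOIN TRUTH.INTERFACES I ON (D.DVC_ID = I.DVC_ID)
WHERE I.ENABLED_FLG = 'Y'
GROUP BY D.SYSTEM_NAME;
```

Joined back to the main report on DVC_ID (or merged into it, with I.* columns removed from the outer GROUP BY), this yields one row per server.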

Thank you, Lars

Query to compare two subsets of data from the same table?

Posted: 30 Sep 2013 02:01 PM PDT

My problem:

I'm trying to compare two subsets of data within one table. I've got two methods that partially work, and a certainty that there's got to be a more correct way to do it.

The idea here is a table which contains datasets about the same systems over time, and I would like to compare them and especially to see when there are introductions or absences. Allow me to demonstrate with a simple test table:

mysql> select * from gocore;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   1 | enterprise | old  | apache   | 2.2.25   |
|   2 | enterprise | new  | apache   | 2.4.6    |
|   3 | enterprise | new  | tomcat   | 7.0.42   |
|   4 | enterprise | old  | geronimo | 2.1.7    |
+-----+------------+------+----------+----------+

In this example there are two datasets - the tag "old" dataset, and the tag "new" dataset. Each reflects a data sample taken at some point in time. For server "enterprise", we have one software package that changed over time (apache), one software package that was introduced (tomcat), and one software package that became absent (geronimo).

My goal: A query that will allow me to summarize the state between "old" and "new":

+------------+----------+----------+----------+
| server     | software | revision | revision |
+------------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | 2.4.6    |
| enterprise | geronimo | 2.1.7    | NULL     |
| enterprise | tomcat   | NULL     | 7.0.42   |
+------------+----------+----------+----------+

It is important for my purposes to be able to see the 'NULL' cells above - I need to know when software has been added or removed from the system. TO BE CLEAR, the table above is not the result of a query - it's me using a text editor to fix up what I was getting to describe what I'm looking for. I need your help to figure out the query that would make that table :)

My Kludges:

If I perform a LEFT JOIN and use the WHERE clause to discriminate between "old" and "new" tags, I get results for only those entries that exist under both tags:

mysql> select old.server, old.software, old.revision, new.software, new.revision
    -> from gocore as old left join gocore as new on old.software = new.software
    -> where old.tag = 'old' and new.tag = 'new';
+------------+----------+----------+----------+----------+
| server     | software | revision | software | revision |
+------------+----------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | apache   | 2.4.6    |
+------------+----------+----------+----------+----------+

My next try was to create two views so that I could perform the JOIN without throwing the tag filter into the mix:

mysql> create view gc_old as select uid,server,tag,software,revision
    -> from gocore where tag = 'old';
Query OK, 0 rows affected (0.00 sec)

mysql> create view gc_new as select uid,server,tag,software,revision
    -> from gocore where tag = 'new';
Query OK, 0 rows affected (0.00 sec)

mysql> select * from gc_old;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   1 | enterprise | old  | apache   | 2.2.25   |
|   4 | enterprise | old  | geronimo | 2.1.7    |
+-----+------------+------+----------+----------+
2 rows in set (0.00 sec)

mysql> select * from gc_new;
+-----+------------+------+----------+----------+
| uid | server     | tag  | software | revision |
+-----+------------+------+----------+----------+
|   2 | enterprise | new  | apache   | 2.4.6    |
|   3 | enterprise | new  | tomcat   | 7.0.42   |
+-----+------------+------+----------+----------+
2 rows in set (0.00 sec)

mysql> select old.server, old.software, old.revision, new.revision
    -> from gc_old as old left join gc_new as new
    -> on old.software = new.software;
+------------+----------+----------+----------+
| server     | software | revision | revision |
+------------+----------+----------+----------+
| enterprise | apache   | 2.2.25   | 2.4.6    |
| enterprise | geronimo | 2.1.7    | NULL     |
+------------+----------+----------+----------+
2 rows in set (0.00 sec)

That works better - I now see absence, not just change, but I still don't see introduction. And I had to go create views for this, which strikes me as inflexible as the dataset is added to over time.

My questions:

  1. How can I make sure that all the holes show up, as represented by the NULL cells in the "My goal" section above?
  2. Can I do it within a single query, avoiding the creation of views as a crutch?
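MySQL has no FULL OUTER JOIN, but the same "show both absences and introductions" result can be produced in a single query, with no views, by UNIONing a LEFT join with its mirror image. The key trick is moving the tag filter for the outer side into the ON clause so unmatched rows survive. A sketch against the gocore table above:

```sql
-- Old rows, with their new counterpart if one exists (catches absences):
SELECT o.server, o.software, o.revision AS old_revision, n.revision AS new_revision
FROM gocore o
LEFT JOIN gocore n ON o.software = n.software AND n.tag = 'new'
WHERE o.tag = 'old'
UNION
-- New rows, with their old counterpart if one exists (catches introductions):
SELECT n.server, n.software, o.revision, n.revision
FROM gocore n
LEFT JOIN gocore o ON n.software = o.software AND o.tag = 'old'
WHERE n.tag = 'new';
```

With more than one server in the table, the join conditions would also need to match on server.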

Any help you can provide is greatly appreciated. Thanks!

MySQL create all rows for (over) normalized schema at start or as data is available

Posted: 30 Sep 2013 01:16 PM PDT

I am developing an app using MySQL, and the DB schema is provided. To me the DB looks (over)normalized: data for a single user will be spread over a dozen tables, around 150 rows in total. Some data will be available at user signup (around 40 rows); the rest will be added over time, as all of it is needed to properly use the app.

I am creating a stored procedure for creating new users. When creating a user, should I create ALL the rows, with either the available data or NULL values, or only the rows whose data is available? Will there be any performance benefit if all the rows for a user are stored together? For example, one table will have exactly 11 rows per user, but when a user joins only 8 of them can be populated; the other 3 will be added after a while. Should I create those 3 rows with NULLs? The app will display data from all the available rows together, so if all the rows a query has to return are adjacent, will that give noticeable performance?

EDIT: It's basically a match-making app. The tables are stripped down to give you an overview.
User Table

CREATE TABLE IF NOT EXISTS `tbl_user` (
  `id_user` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `email` char(60) NOT NULL,
  ....
  PRIMARY KEY (`id_user`),
  UNIQUE KEY `email` (`email`),
);

Then there is a basic info table which has a dozen questions.

CREATE TABLE IF NOT EXISTS `tbl_criteria` (
  `id_criteria` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `criteria_text` char(200) NOT NULL,
  ....
  PRIMARY KEY (`id_criteria`)
);

and then we have an intersect table

CREATE TABLE IF NOT EXISTS `tbl_user_criteria` (
  `id_user` int(10) unsigned NOT NULL,
  `id_criteria` int(10) unsigned NOT NULL,
  `id_answer` tinyint(3) unsigned NOT NULL,
  PRIMARY KEY (`id_user`, `id_criteria`)
);

and there are a dozen more intersection tables.

MS Sql Server Management Studio gives error on start

Posted: 30 Sep 2013 01:41 PM PDT

When I try to connect with my sa account in SQL Server Management Studio, it gives me this error:

*ps: the error message is in Turkish, but you will see the English version in the 3rd image

[screenshot]

Then I clicked on the "Get Help For" link shown above. It showed me this:

[screenshot]

So I clicked the "Yes" button. It then opened the web page below:

*ps: the error message is the same as in the first image, but in English.

[screenshot]

After that, I tried to start the SQL Server process in the Services window, per the 3rd image. But it gave me this error message:

It means: "Windows could not start the SQL Server service on the local computer. For more information, see the event logs. If this is not a Microsoft service, contact the provider and give this specific error code: 3417"

I wanted to look at the system event logs in the C:\Windows\System32\winevt\Logs directory, but there were 84 different .evtx files and I didn't know what to do. How can I fix this?

*ps: I'm also a little bit confused about this problem's scope. I thought about posting this on Super User and/or Stack Overflow, but this place looked more appropriate to me. So if I'm mistaken, please let me know.

[screenshot]

edit:

I opened the Event Viewer in Control Panel\Administrative Tools per Jon Siegel's suggestion and filtered all event logs from the SQL Server Reporting Services (MSSQLSERVER) source. There are 82 events from "SQL Server Reporting Services (MSSQLSERVER)", but they're all informational and their event ID is 0. When I right-clicked one and chose Event Properties, here is the details tab:

*ps: after EventData, it says "Service started successfully"

[screenshot]

SQL Server full text indexing disabled

Posted: 30 Sep 2013 12:26 PM PDT

I have a SQL Server 2008 R2 server instance where I'm unable to use full text catalogs. It was upgraded from SQL Server 2008. When I go to the database properties of an existing database or try to create a new database the "Use full-text indexing" checkbox is disabled.

Some things that I have checked are:

  • Full text search service is installed and started
  • Named pipes protocol is enabled
  • Full text service is running under local system account
  • EXEC sp_fulltext_database 'enable' does run successfully

What else could be wrong? I'm stuck on this.

How do I access the build configuration in SSDT, ie SSIS 2012? [on hold]

Posted: 30 Sep 2013 12:21 PM PDT

Oracle database on different disk

Posted: 30 Sep 2013 11:47 AM PDT

I have a VM with Oracle Server installed, now Oracle home is defined locally. The VM's disk is no longer expandable because of snapshots (VMWare Workstation claims expansions and snapshots don't play nice together). I had to create a new virtual disk to be able to host a dmp file given to me which is just over 70GB.

With that said, how can I import this dmp onto the new disk?

The current Oracle is C:\Oracle and the new drive is d:

I have minimal experience with Oracle so I would appreciate a detailed answer.

Thanks

Multiple versions of SQL Server on the same development machine

Posted: 30 Sep 2013 10:38 AM PDT

I am consolidating some SQL Servers to simplify administration. I am thinking about a new VM with SQL Server 2005/2008R2/2012 each as a separate named instance on the same machine. This machine will only be used in a 'sandbox' environment primarily to test restores of our backups. No applications will be hitting this.

Am I likely to run into any issues with multiple versions? I remember running into issues years ago with SQL 2000/2005 on the same box. Do you folks use separate machines for restore testing?

How to partition a table by timestamp, where data from every i'th day goes to partition i?

Posted: 30 Sep 2013 12:14 PM PDT

I'm looking into partitioning a table in my InnoDB database. I have a column corresponding to a UTC timestamp, and I want to partition around that.

The basic idea that I want to capture is a partitioning scheme where, with a fixed number of partitions, we cycle through them as follows:

Given 3 partitions (3 for simplicity)

  • Data with timestamp column from day 1 goes into partition 1
  • Data with timestamp column from day 2 goes into partition 2
  • Data with timestamp column from day 3 goes into partition 3
  • Data with timestamp column from day 4 goes into partition 1
  • ..... day 5 .... partition 2
  • ... rinse and repeat

Basically, extract the day out of the timestamp and put the row into partition DAY MOD N where DAY is the day that the timestamp corresponds to (filtering out hours/minutes/seconds) and N is the number of partitions.

This is probably easily done, but I can't find a similar example to emulate this with. What would the ALTER TABLE query be to partition in this fashion?

Update

By atxdba's suggestion, I tried partitioning by hash. I tried the following statement:

ALTER TABLE table_to_partition
    PARTITION BY HASH(DAYOFMONTH(FROM_UNIXTIME(my_timestamp)))
    PARTITIONS 8;

This results in error code 1564: This partition function is not allowed. Looking at the list of limitations for partitioning, it doesn't appear that FROM_UNIXTIME is supported for partitioning a table, so a different transformation from timestamp to date is required.
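Since the column holds a Unix timestamp (an integer number of seconds), the disallowed date functions can be avoided entirely: integer division by 86400 yields a day number, and HASH then assigns day MOD N. A hedged sketch (MySQL permits the DIV operator in partitioning expressions, while / is not allowed):

```sql
-- my_timestamp is assumed to be an integer of seconds since the epoch.
-- my_timestamp DIV 86400 is the day number; HASH ... PARTITIONS 8
-- places each day's rows in partition (day MOD 8), cycling as desired.
ALTER TABLE table_to_partition
    PARTITION BY HASH(my_timestamp DIV 86400)
    PARTITIONS 8;
```

One caveat: MySQL requires every unique key (including the primary key) to contain all columns used in the partitioning expression, so the table's keys may need my_timestamp added first.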

I have to create INDEX but which? [on hold]

Posted: 30 Sep 2013 11:41 AM PDT

Which INDEX should I create for this query?

SELECT *
FROM TABLE
WHERE A = 1
  AND B = '5427dbfac0a8280d74bc672c72636a4f'
  AND C = 'BIL'
  AND D = 'B'
  AND E = '01'
GROUP BY F,G,E,H,I
ORDER BY B,C,E,G,H,I;
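With equality predicates on A through E, the usual starting point is a single composite index over those five columns; since all five are equalities, their relative order matters less than having them lead the index. A sketch only, not tuned against real data (the table and column names mirror the question's placeholders):

```sql
-- Composite index covering all five equality predicates.
-- Real column order should put the most selective columns first.
CREATE INDEX ix_table_a_b_c_d_e ON TABLE (A, B, C, D, E);
```

Whether extending it with the GROUP BY columns pays off depends on the data; an EXPLAIN plan on production-sized data would be the next check.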

How can I synchronize data between two RDS MySQL databases

Posted: 30 Sep 2013 05:48 PM PDT

I have 2 Amazon RDS MySQL instances; one is MySQL v5.5 and the new one is 5.6. I want to upgrade from 5.5 to 5.6, but you cannot do in-place upgrades at this time, so we are trying to migrate the data. I did a dump of the 5.5 instance and imported it into the 5.6 one, but since I need to keep our downtime to a minimum, I decided to try to do a data synchronization between the two.

What are my options for this?

  • I have a few tables that are huge! (1-8gb)
  • Tables are all innodb
  • Dump download took 3 hours
  • Dump Upload took 4+ hours

I need to try to stay under 5 hours total!
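One common pattern (a hedged sketch, not confirmed for this exact setup) is to seed the 5.6 instance from the dump and then let replication catch it up, so the final cutover takes minutes rather than hours. On RDS the usual CHANGE MASTER TO is replaced by stored procedures; names below follow the AWS documentation, but verify them for your RDS version:

```sql
-- Run on the new 5.6 instance: point it at the 5.5 instance and replicate.
-- Host, credentials, and binlog coordinates are placeholders; the coordinates
-- must match the position recorded when the dump was taken.
CALL mysql.rds_set_external_master ('old-instance.xxxx.rds.amazonaws.com', 3306,
     'repl_user', 'repl_password', 'mysql-bin-changelog.000001', 120, 0);
CALL mysql.rds_start_replication;
```

Once the replica is caught up, the application is pointed at the 5.6 instance and replication is stopped.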

Wrong error in SSIS - [Execute SQL Task] Error: There is an invalid number of result bindings returned for the ResultSetType: “ResultSetType_Rowset”

Posted: 30 Sep 2013 02:50 PM PDT

I have an Execute SQL task which returns only one row with one column - a number. I set my result set to "single row". Despite that, the task fails. Why? How do I fix it?

[Execute SQL Task] Error: There is an invalid number of result bindings returned for the ResultSetType: "ResultSetType_Rowset".

storing arrays or images in a database table

Posted: 30 Sep 2013 06:06 PM PDT

I'm a beginner with both database design and use of forums such as this one so forgive me if I forget to provide some relevant information below.

Basically I'm trying to store a set of arrays, in this case raster images, of 2500*2500 in a database in order to extract timeseries for individual pixels. I currently have roughly 1500 images and every day an image is added.

The original raw files of bytetype are only small, but my database table grows very quickly in the following form:

pixelnr, date, rastervalue
PK clustered (pixelnr, date)

I understand that this is because I need a pixelnr and date for every pixelvalue. Unfortunately I don't know how else to store the data in a database.

I've tried to optimize the table for select statements of timeseries per pixel with a clustered PK on pixelnr,date. The drawback is that inserts of new images take an increasingly long time.

There are only two things I do with this table:

  • Bulk insert from a CSV file. I convert a new image to a CSV file with the same form as the table. It will have all pixels, but only for a single date.

  • Run simple select queries to derive time series for a number of pixels, usually up to 25000 pixels:

    SELECT Pix, Date, Val
    FROM FullTable
    INNER JOIN PixelSubset ....

I now have two questions:

1) Is there a more efficient way to store this kind of data? I've not found much useful material on storing arrays in an RDBMS.

2) How can I improve the performance of bulk inserts (all pixels for a single date) into the table while keeping the clustered index optimized as described above?
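On question 2: since each CSV holds every pixel for one date, pre-sorting the file by pixelnr and telling BULK INSERT so lets the engine append in clustered-key order instead of sorting and splitting pages. A sketch with placeholder file and table names (not from the source):

```sql
-- The file is assumed pre-sorted to match the clustered key (pixelnr, date);
-- the ORDER hint tells SQL Server to trust that and skip the sort.
BULK INSERT dbo.FullTable
FROM 'C:\images\20130930.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n',
      ORDER (pixelnr, date), TABLOCK);
```

If the ORDER hint doesn't match the actual file order, the load still works but loses the optimization, so it's worth verifying the CSV generation sorts by pixelnr.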

I use SQL server 2012 developer edition on a PC with 8GB RAM and a RAID system. There's not much room for hardware changes....

Any pointers, particularly to more efficient storage of such data in an RDBMS, would be greatly appreciated!

How can I integrate a new system to a design scheme I previously created?

Posted: 30 Sep 2013 02:13 PM PDT

This is a follow-up to a question I wrote a while ago, for reference, you can look it up here.

I've developed a curriculum-based database schema that looks like this (thanks Joel Brown):

[diagram]

Now that a new educational system has been introduced (while the old one is still not deprecated), I have been looking for a way to integrate both curriculum systems without creating a whole new system; as the two systems are fundamentally different, this has been quite a challenge for me.

Here's a picture describing the new system:

[diagram]

(English level 1, 2, and 3 are classified as courses)

See, the new system removes the whole grades system. A jr. high school student can enroll in the exact same course(English level 3, for example) as a sr. high school student.

Is there a way to make those two systems work with my current design scheme? Or should I create a whole new scheme for the new system?

Targeting MSX with job on Multi-Server Administration

Posted: 30 Sep 2013 02:24 PM PDT

I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:

Server 'XXXXX' is an MSX. Cannot enlist one MSX into another MSX. (Microsoft SQL Server, Error: 14299)

I would think that running these same jobs on the master would not require managing a local job as well as the multi-server jobs.

Any help is appreciated.

FileWatcher Troubleshooting Options

Posted: 30 Sep 2013 11:24 AM PDT

I have setup a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working. I am trying to figure out what I can check to troubleshoot why this isn't working. After scaling back the code to a bare minimum, here are somethings I have already done.

  • Verify that files are being created in the folder monitored - They are.
  • Verify that the procedure can be run by itself - It can.
  • Verify that the File Watcher shows up in DBA_Scheduler_File_Watchers and is enabled. - It does and is.
  • Verify that DBA_Scheduler_Job_Run_Details shows a successful execution - It does NOT show any entries for this job.
  • Check the alert log - Nothing interesting.
  • Check for other trace files - I am getting trace files matching *_j000_*.trc, but they just contain this: FILE_TRANSFER error is: with nothing after it.

Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:

--Create Credential.
BEGIN
    sys.dbms_scheduler.create_credential(
        username => 'oracle',
        password => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        database_role => NULL,
        windows_domain => NULL,
        comments => NULL,
        credential_name => 'TestCredential'
    );
END;
/

CREATE TABLE FilesFound(FoundDate Date);

CREATE OR REPLACE PROCEDURE TestProcedure(iFileInfo In sys.scheduler_filewatcher_result) AS
BEGIN
   INSERT INTO FilesFound VALUES (sysdate);
   COMMIT;
END;
/

BEGIN
   DBMS_SCHEDULER.create_program(
      program_name => 'TestProgram',
      program_type => 'stored_procedure',
      program_action => 'TestProcedure',
      number_of_arguments => 1,
      enabled => False);
   DBMS_SCHEDULER.define_metadata_argument(
      program_name => 'TestProgram',
      metadata_attribute => 'event_message',
      argument_position => 1);
END;
/

BEGIN
   dbms_scheduler.create_file_watcher(
      file_watcher_name => 'TestFileWatcher',
      directory_path => '/u01/test',
      file_name => '*.*',
      credential_name => 'TestCredential',
      destination => NULL,
      enabled => False);
END;
/

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TestJob',
      program_name => 'TestProgram',
      event_condition => NULL,
      queue_spec => 'TestFileWatcher',
      auto_drop => False,
      enabled => False);

   --Enable Everything.
   dbms_scheduler.enable('TestProgram, TestFileWatcher, TestJob');
end;
/

--Set a one minute check interval.
BEGIN
  DBMS_SCHEDULER.set_attribute(
    'file_watcher_schedule',
    'repeat_interval',
    'freq=minutely; interval=1');
END;
/

--Create a file.
DECLARE
  vFile utl_file.file_type;
BEGIN
  EXECUTE IMMEDIATE 'create or replace directory TESTDIRECTORY as ''/u01/test''';
  vFile := utl_file.fopen('TESTDIRECTORY', 'TestFileWatcher.txt', 'w', NULL);
  utl_file.put_line(vFile, 'File has arrived '||SYSTIMESTAMP, TRUE);
  utl_file.fclose(vFile);
END;
/

--Delay to give time for file to appear.
BEGIN
   DBMS_LOCK.SLEEP(120);
END;
/

--Check the table.
BEGIN
   FOR vx IN (select count(*) ct from sys.FilesFound) LOOP
      If (vx.ct = 0) Then
         DBMS_Output.Put_Line('Failed - No File Watcher Activity.');
      Else
         DBMS_Output.Put_Line('Success - File Watcher Activity Found.');
      End If;
   END Loop;
END;
/

--Cleanup.
EXECUTE dbms_scheduler.drop_job(job_name => 'TestJob');
EXECUTE dbms_scheduler.drop_program(program_name => 'TestProgram');
EXECUTE dbms_scheduler.drop_file_watcher(file_watcher_name => 'TestFileWatcher');
EXECUTE DBMS_SCHEDULER.drop_credential(credential_name => 'TestCredential');
drop table FilesFound;
drop procedure TestProcedure;
drop directory TestDirectory;

What if I want to keep using MyISAM

Posted: 30 Sep 2013 10:31 AM PDT

In my current use of MyISAM, the databases are extremely small and kept for a few weeks, after which they are deleted and completely rebuilt with a clean slate. The script uses MyISAM tables, and other than the lack of support in the latest MySQL versions, I have never had a problem with the way they work.
Changing to InnoDB would be time-consuming and provide zero benefit. However, I am concerned that eventually there may come a time when either the older MySQL version is not available or some other process forces the move.

My question is: does anyone know of a specific reason to change from MyISAM to InnoDB just to keep up with MySQL?

If I can keep using what works for my own specific use, I see no reason to spend time on testing and possible errors when there are more pressing issues that need attention now. MyISAM does not have a problem at this time.

restrict user host settings to socket connection only

Posted: 30 Sep 2013 01:24 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out, skip-networking can be used to turn off TCP/IP connectivity altogether. But can it be done on a per-account basis?

Unable to create a new listener in oracle 11g

Posted: 30 Sep 2013 03:24 PM PDT

In Net Manager I get an error message "A valid logfile is required" when I click on the listener and choose the '+' button. It then keeps giving me that error when I click on any other tab or button in Net Manager.

Grant access to a table to all users

Posted: 30 Sep 2013 10:24 AM PDT

Is it possible to assign a grant to a table for all users, or a default permission so that when new users are created they will have the specific grants for that table to SELECT, UPDATE, INSERT and DELETE?
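The question doesn't name a platform, but most engines accept a grant to the PUBLIC role, which applies to every current and future account; a sketch (Oracle/PostgreSQL-style syntax, to be adjusted for the platform in use):

```sql
-- PUBLIC covers all users, including accounts created later;
-- schema and table names here are placeholders.
GRANT SELECT, INSERT, UPDATE, DELETE ON my_schema.my_table TO PUBLIC;
```

Where finer control is wanted, granting to a role and making that role part of each new user's defaults achieves the same effect without opening the table to literally everyone.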

Performance difference between MySQL and PostgreSQL for the same schema/queries

Posted: 30 Sep 2013 07:24 PM PDT

I'm a newbie DBA, and I have experience in Microsoft SQL Server but I want to jump to FLOSS.

I'm starting a company, and we are developing a PHP app with a Postgres backend; we did some tests comparing it with MySQL too. We observed that MySQL is twice as fast as PostgreSQL.

I did a tangible performance test:

  • Same columns in table with equivalent column datatypes.
  • Same number of rows.
  • Same indexes in both (primary key included).
  • Both machines are idle, and the Postgres machine's hardware is significantly better.
  • And the same query (obviously).

What am I doing wrong?

P.S: I read many "howtos" on performance tuning for database engines.
P.S(2): We're using InnoDB (one file per table) on the MySQL database.


Hi Mat!

I ran the three most common (and hardest) SELECT queries.

About the disks: they are certainly not the same; the Postgres machine has an SSD (almost three times faster).

MySQL cache data:

+------------------------------+----------------------+
| Variable_name                | Value                |
+------------------------------+----------------------+
| binlog_cache_size            | 32768                |
| have_query_cache             | YES                  |
| key_cache_age_threshold      | 300                  |
| key_cache_block_size         | 1024                 |
| key_cache_division_limit     | 100                  |
| max_binlog_cache_size        | 18446744073709547520 |
| query_cache_limit            | 1048576              |
| query_cache_min_res_unit     | 4096                 |
| query_cache_size             | 16777216             |
| query_cache_type             | ON                   |
| query_cache_wlock_invalidate | OFF                  |
| table_definition_cache       | 256                  |
| table_open_cache             | 64                   |
| thread_cache_size            | 8                    |
+------------------------------+----------------------+

I don't know how to view this in PostgreSQL.

Thanks in advance.
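As a possible pointer (not part of the original exchange): PostgreSQL exposes its configuration through the pg_settings catalog view, which is the closest counterpart to MySQL's SHOW VARIABLES. A sketch:

```sql
-- Show a single setting...
SHOW shared_buffers;

-- ...or query the catalog view for several cache-related settings at once.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size');
```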

NoSQL approach: design table "scheme"

Posted: 30 Sep 2013 01:49 PM PDT

We need to store simple log data in a database, but we also need to get and filter data in realtime, based on 'user', a simple 'tag' (i.e. redis, memcache, php, mysql, etc.) and timestamp. We need to scale horizontally, with very fast data access on billions of rows.

In a SQL approach, table can be like this:

ID | timestamp | tag | user       | log text
1  | 19543532  | 1   | root       | { text log }
2  | 19543532  | 3   | redis-user | { text log }

where tags 1 and 3 are different and related to another table (i.e. tag_id | tag_name). I think this is a relational approach, and we can create three indexes (timestamp, tag and user) in order to speed up data access.

What is a good practice to reproduce this in a NoSQL database like DynamoDB (AWS) where we can create only HASH or RANGE index? Does a SQL database fit better than a DynamoDB?

My first attempt is:

First table: ID hash index

ID | log text
1  | { text log }
2  | { text log }

Second table: USER, TIMESTAMP range index

user | timestamp | id
root | 123145122 | 1
redis| 123491241 | 2

Third table: TAG index

tag        | id
debug      | 1
production | 2

Thank you in advance!

User has no password set

Posted: 30 Sep 2013 03:07 PM PDT

I see these when running MySQLTuner.pl:

-------- Security Recommendations  -------------------------------------------
[!!] User '@debian' has no password set.
[!!] User '@localhost' has no password set.
------------------------------------------------------------------------------
  • Is it a security hole?
  • How to fix it?

Thanks
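A hedged sketch of the usual fix: the '@debian' and '@localhost' entries flagged above are anonymous accounts (empty user name), and dropping them closes the hole (the hostnames are taken from the MySQLTuner output):

```sql
-- List the anonymous accounts first, then drop them.
SELECT User, Host FROM mysql.user WHERE User = '';

DROP USER ''@'localhost';
DROP USER ''@'debian';
FLUSH PRIVILEGES;
```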

[MS SQL Server] SWITCH statement: required indexes in target table?

[MS SQL Server] SWITCH statement: required indexes in target table?


SWITCH statement: required indexes in target table?

Posted: 29 Sep 2013 05:43 PM PDT

hello, is there some information about which indexes have to be created on a table to which I want to switch a partition? Books Online says: [i]The corresponding indexes, or index partitions, must also reside in the same filegroup.[/i] But it does not tell me whether the target table [b]has[/b] to have any indexes at all. After some testing I got the following:

Test 1: A partitioned source table with a clustered PK. The target table, not partitioned, needs the same PK definition.

Test 2: A partitioned source table with a nonclustered PK and an additional clustered index. In this case the target table, not partitioned, does not need any index or PK to perform the switch.

Now I'd like to have the official rules for what is necessary for the target table. I did not find anything useful on Google. For people who would like to evaluate my testing results, here is my testing script:

[code="sql"]-- create a test database
create database Test01;
go
USE Test01
GO
-- create the partition function and scheme
CREATE PARTITION FUNCTION [PFN_EventPartitionKey](smallint) AS RANGE RIGHT FOR VALUES (1309, 1310)
GO
CREATE PARTITION SCHEME [PS_EventPartitionKey] AS PARTITION [PFN_EventPartitionKey] TO ([PRIMARY], [PRIMARY], [PRIMARY])
GO
-- create 2 data tables with different kinds of PK and clustered index
create table dbo.PartitionTest1
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on PS_EventPartitionKey( PartitionKey )
go
alter table dbo.PartitionTest1
  add constraint PartitionTest1_PK
  primary key clustered( Id, PartitionKey )
  on PS_EventPartitionKey( PartitionKey )
go
create table dbo.PartitionTest2
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on PS_EventPartitionKey( PartitionKey )
go
alter table dbo.PartitionTest2
  add constraint PartitionTest2_PK
  primary key nonclustered( Id, PartitionKey )
  on PS_EventPartitionKey( PartitionKey )
go
create nonclustered index cix_PartitionTest2
  on dbo.PartitionTest2 ( Col1 )
  on PS_EventPartitionKey( PartitionKey )
go
-- now create the target tables, in which we want to switch
create table dbo.PartitionTest1Archive
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on [PRIMARY]
go
--alter table dbo.PartitionTest1Archive
--  add constraint PartitionTest1Archive_PK
--  primary key clustered( Id, PartitionKey )
--  on [PRIMARY]
--go
-- this raises an error. we have to create the PK first (the comment here above)
alter table dbo.PartitionTest1 switch partition 2 to dbo.PartitionTest1Archive;
go
create table dbo.PartitionTest2Archive
( Id bigint not null,
  PartitionKey smallint not null,
  Col1 varchar(50) null
) on [PRIMARY]
go
-- this works
alter table dbo.PartitionTest2 switch partition 2 to dbo.PartitionTest2Archive;
go[/code]

Management Data Warehouse Purge

Posted: 29 Sep 2013 09:51 PM PDT

Does anyone know where the table core.source_info_internal gets the days_until_expiration value from? I have an MDW database which has grown to epic proportions. It looks like the MDW purge SQL Agent job, which executes core.sp_purge_data, gets the valid_through date via the core.snapshots view (which the purge is based on) from the core.source_info_internal days_until_expiration column. The days_until_expiration value is set to 730 for a number of collections. I cannot find this value in the Data Collection retention days settings in the SQL Agent job, which are set to a more manageable number of days. Could these values have been changed via the agent jobs but not yet persisted to the table until a restart of the agent service?

drop auto stats

Posted: 29 Sep 2013 07:51 AM PDT

Dear Experts, how do I drop all auto-created statistics in the whole database? Thanks a lot.
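A sketch of one common answer (assuming SQL Server, where auto-created statistics have auto_created = 1 in sys.stats and names like _WA_Sys_...): generate the DROP STATISTICS commands, review the output, then execute it.

```sql
SELECT 'DROP STATISTICS '
       + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.'
       + QUOTENAME(o.name) + '.' + QUOTENAME(s.name) + ';'
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE s.auto_created = 1      -- only statistics the engine created itself
  AND o.is_ms_shipped = 0;    -- skip system objects
```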

[Articles] Hybrid Databases

[Articles] Hybrid Databases


Hybrid Databases

Posted: 29 Sep 2013 11:00 PM PDT

We are getting more and more types of data that we have to manage and store in our databases. Steve Jones notes that SQL Server can handle almost all your needs.

[SQL 2012] how to solve this

[SQL 2012] how to solve this


how to solve this

Posted: 30 Sep 2013 01:40 AM PDT

Two tables:

table1: StudentId, TestId, QuestionId, PointsAchieved, PointsPossible
table2: studentid, teacherid, classid

Write a query that would return the teachers whose students had the highest overall percent correct on the 10 questions with the lowest overall percent correct. Percent correct is points achieved / points possible.

Can I have SQL 2012 AND SQL Express 2008 R2 installed on the same machine?

Posted: 30 Sep 2013 12:48 AM PDT

I'm looking at removing the SQL Server 2008 R2 Developer Edition that I have installed on my machine (a Windows 7 Ultimate machine), and updating to SQL Server 2012 Developer Edition. I also have SQL Server 2008 R2 Express installed, using a named instance (SQLExpress). Can I have both installed on the same machine? (I'll want SQL 2012 to be the default instance.)

Sql Server 2012 can't see 2nd drive

Posted: 14 May 2012 02:33 PM PDT

I installed Sql Server 2012 Express. Everything went without a hitch until I attempted to create a new database and place it on a second drive (E:). The location window displays C: and an external drive (Z:), but not the E: drive where I wish to place the database. Permissions show full control, but Sql Server won't display it.

Error in attaching Adventureworks 2012 database

Posted: 27 May 2013 09:55 AM PDT

Hi, I have a problem attaching the AdventureWorks 2012 database:

Attach database failed for Server '..........\MSSQLSERVER2012'. (Microsoft.SqlServer.Smo)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.2100.60+((SQL11_RTM).120210-1917+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Attach+database+Server&LinkId=20476

ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

Directory lookup for the file "C:\CodePlex Database\AdventureWorks2012_log.ldf" failed with the operating system error 2 (The system cannot find the file specified.). (Microsoft SQL Server, Error: 5133)

[T-SQL] Usage of OR in WHERE clause

[T-SQL] Usage of OR in WHERE clause


Usage of OR in WHERE clause

Posted: 30 Sep 2013 12:47 AM PDT

Apologies for a silly question, but I was taught in the past to avoid the use of OR in a WHERE clause, as it affected SELECT performance and forced the SQL engine to re-scan the table, so I have kept avoiding it since. Is it the same with SQL 2008 and higher, and are there any guidelines / best practices — e.g. should I use IN or CASE where possible? Thank you.
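For context, a sketch of the classic rewrite for the one case where an OR genuinely does hurt — an OR across two different columns that defeats index usage (table and column names here are illustrative, not from the question):

```sql
-- Each branch is independently sargable against its own index; the extra
-- predicate on CustomerID avoids duplicates without a costly DISTINCT.
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42
UNION ALL
SELECT OrderID FROM dbo.Orders WHERE SalesRepID = 7 AND CustomerID <> 42;
```

On SQL Server 2008 and higher, OR and IN over the same column are generally optimized identically, so the old blanket advice no longer applies.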

Error arthematic operations inside varchar @variable

Posted: 29 Sep 2013 10:43 PM PDT

Hi, I'm getting a conversion error in the messages while executing the query below. Please help me understand this.

[code="sql"]DECLARE @Linkedserver sysname = 'DataServerSeven';
DECLARE @Databasename sysname = 'dbVinnyStaging';
Declare @DeltaSizeInMB BIGINT
Declare @EDWSourceSystemID INT = 4100
Declare @SourceSystemID INT = 4100
DECLARE @DeltaSize Table (DeltaSize BIGINT)
Declare @DeltaSizeSQL Varchar(MAX)
Declare @TempBatchNumber INT = 1736
Set @DeltaSizeSQL = 'Select SUM(A.DeltaCount * B.AvgRecordSize) from '+@Linkedserver+'.'+@Databasename+'.dbo.parameterhistory A Join Operations..DatasetTables B (Nolock) ON A.ReferenceObjectName = B.TableName where A.SourceExecutionKey in ('+@TempBatchNumber+') and A.SourceSystemID in ('+@SourceSystemID+') and B.EDWSourceSystemID in ('+@EDWSourceSystemID+');'
Insert INTO @DeltaSize EXECUTE sp_executesql @DeltaSizeSQL
SelecT @DeltaSizeInMB = (Select sum(cast(DeltaSize AS Numeric(15,4))/1024/1024) from @DeltaSize)
select @DeltaSizeInMB[/code]

[code="sql"]Msg 245, Level 16, State 1, Line 9
Conversion failed when converting the nvarchar value 'Select SUM(A.DeltaCount * B.AvgRecordSize) from DataServerSeven.dbVinnyStaging.dbo.parameterhistory A Join Operations..DatasetTables B (Nolock) ON A.ReferenceObjectName = B.TableName where A.SourceExecutionKey in (' to data type int.[/code]
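A sketch of the likely fix: concatenating INT variables into a string makes SQL Server try to convert the whole string to INT (INT has higher datatype precedence). Casting them first, and declaring the statement as NVARCHAR for sp_executesql, should resolve the error (only the affected fragment is shown; the '...' stands for the rest of the original statement):

```sql
DECLARE @DeltaSizeSQL NVARCHAR(MAX);   -- sp_executesql requires NVARCHAR
SET @DeltaSizeSQL = N'... where A.SourceExecutionKey in ('
    + CAST(@TempBatchNumber   AS NVARCHAR(12)) + N') and A.SourceSystemID in ('
    + CAST(@SourceSystemID    AS NVARCHAR(12)) + N') and B.EDWSourceSystemID in ('
    + CAST(@EDWSourceSystemID AS NVARCHAR(12)) + N');';
```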

Update script

Posted: 29 Sep 2013 08:21 PM PDT

Hi, I am trying to create UPDATE statements including data for the table in this way, but I am getting the error [i]Incorrect syntax near '+'.[/i] Can anyone tell me what's wrong in this script?

[code="sql"]SELECT 'UPDATE table_1
SET labeltext ='+ T.labeltext 'WHERE LanguageID = 10
AND LabelKey ='+ T.LabelKey 'AND FileID =' + T.FileID[/code]
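A hedged sketch of a corrected generator: every fragment must be joined with '+', character values need embedded quotes (doubled single quotes), and numeric columns need a CAST before concatenation. The FROM clause and source table name are assumptions, since the original script omits them:

```sql
SELECT 'UPDATE table_1 SET labeltext = ''' + T.labeltext
     + ''' WHERE LanguageID = 10 AND LabelKey = ''' + T.LabelKey
     + ''' AND FileID = ' + CAST(T.FileID AS VARCHAR(12))
FROM dbo.SourceLabels AS T;   -- source table name is illustrative
```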

[SQL Server 2008 issues] Convert DD MMM YYYY to yyyymmdd in string

[SQL Server 2008 issues] Convert DD MMM YYYY to yyyymmdd in string


Convert DD MMM YYYY to yyyymmdd in string

Posted: 29 Sep 2013 11:21 AM PDT

Hi, I am struggling with changing the date format of [b]dd MMM YYYY[/b] in the sample strings below into [b]yyyymmdd[/b].

Sample Data Input, Column Name: [b]SourceName[/b]
[quote]12 Jan 2013 Test.docx
01 February 2001 File.pptx
Meeting 04 Feb 2012.xls
09 September 2011.txt
30 Jan 13.doc[/quote]

Sample Data Output, Column Name: [b]SourceName[/b]
[quote]20130112 Test.docx
20010201 File.pptx
Meeting 20120204.xls
20110911.txt
20130130.doc[/quote]
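A sketch of the core conversion only (isolating the date fragment inside each string, e.g. with PATINDEX/SUBSTRING, is left out): CONVERT style 106 parses 'dd mon yyyy' strings, and style 112 formats back out as yyyymmdd; two-digit years like '30 Jan 13' need style 6 instead.

```sql
SELECT CONVERT(CHAR(8), CONVERT(DATETIME, '12 Jan 2013', 106), 112);
-- returns 20130112
```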

Returning related records from the same table

Posted: 29 Sep 2013 11:00 AM PDT

I have a table named batch, which contains a PK field (ID) and an FK field (OriginatingID). The issue I have is that I need to be able to return all rows related to the PK field. The records in the table are similar to below:

ID | ORIGINATIONID | BATCHNUM
1  | null          | 100
2  | 1             | 100EXP1
3  | 2             | 100EXP2
4  | 3             | 100EXP3

I have been trying to figure out how to write a select statement that would return all the rows related to batch 100. I have thought about doing something with the BATCHNUM field; however, the data entered in that field doesn't always follow the same conventions. Any and all assistance is appreciated! Thanks
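One common pattern for this kind of self-reference (a sketch, using the column names shown in the sample data) is a recursive CTE that starts at the root row of batch 100 and follows the chain down:

```sql
WITH chain AS (
    -- anchor: the root row of the batch (no originating record)
    SELECT ID, ORIGINATIONID, BATCHNUM
    FROM dbo.batch
    WHERE BATCHNUM = '100' AND ORIGINATIONID IS NULL
    UNION ALL
    -- recurse: rows whose ORIGINATIONID points at a row already found
    SELECT b.ID, b.ORIGINATIONID, b.BATCHNUM
    FROM dbo.batch AS b
    JOIN chain AS c ON b.ORIGINATIONID = c.ID
)
SELECT * FROM chain;
```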

Rolling 3 month average cost help

Posted: 22 Sep 2013 03:16 PM PDT

Hi guys, I need some help calculating a 3 month rolling average cost. I have been trying to figure this out for a couple of days now, as well as get help, but to no avail.

What I'm trying to do is replicate a 3 month average cost metric that I have in Excel in a SQL query, so that I can use it in SSRS as a data set. In Excel I take the average of the sum of the cost and divide it by the count of members.

I have two tables that provide the data. Below is an example of my tables:

Table name: Addmission
Contract | Admissiondate | SumofCost
0606     | 200701        | 8639.38
0607     | 200702        | 22895.94
0608     | 200703        | 123752.28
null     | 200704        | 61378.49

Table name: Members
Contract | Admissiondate | CountofMembers
0606     | 200701        | 86
0607     | 200702        | 102
0608     | 200703        | 90
null     | 200704        | 120
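A hedged sketch of one way to express this: the ROWS window frame requires SQL Server 2012 or later (on 2008 a self-join over the three months is needed instead), and the join on Admissiondate is an assumption read off the sample data:

```sql
SELECT a.Admissiondate,
       AVG(a.SumofCost / NULLIF(m.CountofMembers, 0))
           OVER (ORDER BY a.Admissiondate
                 ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS Rolling3MonthAvg
FROM dbo.Addmission AS a
JOIN dbo.Members AS m
  ON m.Admissiondate = a.Admissiondate;
```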

how to add a column to a table ?

Posted: 28 Sep 2013 11:51 PM PDT

How do I edit table data in SQL Server Management Studio? I right-click on the table > Edit Top 200 Rows, then I go to a cell and modify the data. But how do I save this data? It says the data is not committed yet, with a red mark. I tried clicking the Execute SQL button at the top, but that did not do anything.

Query to create log file for sql select query...

Posted: 29 Sep 2013 03:58 AM PDT

Can you please tell me how to create a log file in SQL Server 2008?

Installation Error

Posted: 19 Mar 2009 06:31 AM PDT

Hi All, I need urgent help. I was trying to install SQL Server 2008 Standard Edition. During the final stage of the install, I'm getting the following:

SQL Server Setup has encountered the following error:
The collation was not found.
Error code 0x84B40000.

Someone help me please. Thank you.

Sunday, September 29, 2013

[how to] Spell checking the developed reports

[how to] Spell checking the developed reports


Spell checking the developed reports

Posted: 29 Sep 2013 06:53 PM PDT

We have a large number of SSRS reports which need to be checked, to ensure there are no spelling mistakes (English) in them. I'm wondering if there is a fast and reliable way to do that? Checking each report manually takes time and it's not 100% accurate.

Thanks for your help.

MySQL not using indexes when joining against another table

Posted: 29 Sep 2013 02:55 PM PDT

I have two tables, the first table contains all articles / blog posts within a CMS. Some of these articles may also appear in a magazine, in which case they have a foreign key relationship with another table that contains magazine specific information.

Here is a simplified version of the create table syntax for these two tables with some non-essential rows stripped out:

CREATE TABLE `base_article` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `date_published` datetime DEFAULT NULL,
  `title` varchar(255) NOT NULL,
  `description` text,
  `content` longtext,
  `is_published` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `base_article_date_published` (`date_published`),
  KEY `base_article_is_published` (`is_published`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `mag_article` (
  `basearticle_ptr_id` int(11) NOT NULL,
  `issue_slug` varchar(8) DEFAULT NULL,
  `rubric` varchar(75) DEFAULT NULL,
  PRIMARY KEY (`basearticle_ptr_id`),
  KEY `mag_article_issue_slug` (`issue_slug`),
  CONSTRAINT `basearticle_ptr_id_refs_id` FOREIGN KEY (`basearticle_ptr_id`) REFERENCES `base_article` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

The CMS contains around 250,000 articles total, and I have written a simple Python script that can be used to populate a test database with sample data for anyone who wants to replicate this issue locally.

If I select from one of these tables, MySQL has no problem picking an appropriate index or retrieving articles quickly. However, when the two tables are joined together in a simple query such as:

SELECT * FROM `base_article`
INNER JOIN `mag_article` ON (`mag_article`.`basearticle_ptr_id` = `base_article`.`id`)
WHERE is_published = 1
ORDER BY `base_article`.`date_published` DESC
LIMIT 30

MySQL fails to pick an appropriate query and performance plummets. Here is the relevant explain extended (the execution time for which is over a second):

+----+-------------+--------------+--------+-----------------------------------+---------+---------+----------------------------------------+-------+----------+---------------------------------+
| id | select_type |    table     |  type  |           possible_keys           |   key   | key_len |                  ref                   | rows  | filtered |              Extra              |
+----+-------------+--------------+--------+-----------------------------------+---------+---------+----------------------------------------+-------+----------+---------------------------------+
|  1 | SIMPLE      | mag_article  | ALL    | PRIMARY                           | NULL    | NULL    | NULL                                   | 23830 | 100.00   | Using temporary; Using filesort |
|  1 | SIMPLE      | base_article | eq_ref | PRIMARY,base_article_is_published | PRIMARY | 4       | my_test.mag_article.basearticle_ptr_id |     1 | 100.00   | Using where                     |
+----+-------------+--------------+--------+-----------------------------------+---------+---------+----------------------------------------+-------+----------+---------------------------------+

One potential solution is to force an index. Running the same query with FORCE INDEX (base_article_date_published) results in a query that executes in around 1.6 milliseconds.

+----+-------------+--------------+--------+---------------+-----------------------------+---------+-------------------------+------+-----------+-------------+
| id | select_type |    table     |  type  | possible_keys |             key             | key_len |           ref           | rows | filtered  |    Extra    |
+----+-------------+--------------+--------+---------------+-----------------------------+---------+-------------------------+------+-----------+-------------+
|  1 | SIMPLE      | base_article | index  | NULL          | base_article_date_published |       9 | NULL                    |   30 | 833396.69 | Using where |
|  1 | SIMPLE      | mag_article  | eq_ref | PRIMARY       | PRIMARY                     |       4 | my_test.base_article.id |    1 | 100.00    |             |
+----+-------------+--------------+--------+---------------+-----------------------------+---------+-------------------------+------+-----------+-------------+

I would prefer not to have to force an index on this query if I can avoid it, for several reasons. Most notably, this basic query can be filtered / modified in a variety of ways (such as filtering by the issue_slug) after which base_article_date_published may no longer be the best index to use.

Can anyone suggest a strategy for improving performance for this query?

MySQL select query - date range with time range

Posted: 29 Sep 2013 08:33 PM PDT

TABLE

EventID, EventTime (DATETIME)

1       2013-09-29 23:55:00.0
2       2013-10-01 00:05:00.0
3       2013-09-29 23:55:00.0
4       2013-09-29 23:45:00.0
5       2013-10-02 23:05:00.0
6       2013-09-26 23:50:00.0
7       2013-09-25 23:55:00.0
8       2013-09-24 22:55:00.0
9       2013-09-29 00:10:00.0
10      2013-09-27 23:45:00.0

Scenarios:

S.no#   EventTime           +/-day  +/-time(minutes)    expected rows
1       2013-09-29 23:55    0        0                  1,3
2       2013-09-29 23:55    4        0                  1,3,7
3       2013-09-29 23:50    0       10                  1,3,4
4       2013-09-26 00:00    7       15                  1,2,3,4,6,7,9,10
5       2013-09-24 22:55    1       60                  7,8
6       2013-09-24 22:55    3       60                  6,7,8,10

For given date and time, rows must be selected considering +/- day and/or +/- time (both are also user inputs). Note:

  1. Day may have range. If 0, same date must be considered.
  2. Time may have range. If 0, same time must be considered.
  3. The time range must be calculated on the time component of the date and must not be folded into the date range. Example: scenario # 4 has a 7 day range, but as the time must be only around 15 minutes, the 5th and 8th rows were excluded.

The 'Scenarios' table is provided for explanation only. My requirement is to fetch only those records from EventTable which satisfy the inputs for EventTime, day and time. If the day and time values (which can only be positive numbers) are non-zero, the query should return rows within +/- of those values around EventTime; otherwise only rows with the same day and time as EventTime.

I could narrow rows by:

SELECT * FROM EventTable
WHERE EventTime BETWEEN StartEventTime AND EndEventTime
AND TIME( EventTime ) = TIME( StartEventTime )

StartEventTime and EndEventTime in YYYY-MM-DD HH:MM:SS format.

But this matches the exact time; I'm unable to apply the +/- time window.

Please help me frame the SQL statement. Thanks!
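A sketch of one way to apply the two windows independently, which matches the scenarios above (a wide day range can still carry a narrow time-of-day window); @EventTime, @days and @mins stand for the user inputs:

```sql
SELECT *
FROM EventTable
WHERE DATE(EventTime) BETWEEN DATE(@EventTime) - INTERVAL @days DAY
                          AND DATE(@EventTime) + INTERVAL @days DAY
  AND TIME(EventTime) BETWEEN SUBTIME(TIME(@EventTime), SEC_TO_TIME(@mins * 60))
                          AND ADDTIME(TIME(@EventTime), SEC_TO_TIME(@mins * 60));
```

Caveat: a time window that crosses midnight breaks the simple TIME() BETWEEN comparison and needs an OR over the wrapped range.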

Convert Data From DB2 in MainFrame To SQL SERVER In HP Server?

Posted: 29 Sep 2013 11:25 AM PDT

I have a problem.

We use DB2 on a mainframe (OS/360), and we have 300 tables inside it.

And we have SQL SERVER 2008 R2 in Windows Server 2008 In HP Server .

We have one sync program that syncs all records that are inserted/updated/deleted from DB2 to SQL.

The size of the MDF file in SQL Server 2008 is about 300 GB, and the total number of records is about 500,000,000.

Unfortunately, someone or some malware deleted some data from SQL Server.

By verifying the data count of each table, we found that about 250,000 records were deleted, and we are not sure whether some records were also updated.

We want to do this:

  1. Find the records that were deleted.
  2. Find the records that were updated.
  3. Recover the data in SQL Server.

And I must say that we have no backup of the DB2 database, and we cannot restore a backup of SQL Server because the last backup is more than 3 days old. We must fix this as soon as possible.

Does any 3rd-party software exist to verify and fix this problem, and what is the best solution?

DBCC loginfo contains thousands of VLF entries with status 2

Posted: 29 Sep 2013 04:31 PM PDT

I've inherited this huge SQL Server 2000 prod database and suddenly the transaction log ran out of space. The transaction log gets backed up every hour but I think over the weekend some existing jobs to re-index some tables grew the log file to the point where it ran out of space. The first thing I did was backup the log (the database is in full recovery mode) and then ran dbcc sqlperf to check the log space used percentage. It was still at over 99%. So I did some more research and found another dbcc command, loginfo.

When I saw that output, there were almost 15K rows returned with all of status 2 and some of the CreateLSN seemed dated a while back. I checked for any open transactions and there weren't any.

It seems like the active portion of the log file is permanently keeping the file very large forcing any new transactions to request more space. As far as I can tell, there's no replication going on requiring the VLFs to stay active.
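For reference, a hedged sketch of the usual SQL Server 2000-era remedy once no open transactions or replication are holding the log (the logical log file name and backup path are illustrative):

```sql
BACKUP LOG MyBigDb TO DISK = 'E:\backup\MyBigDb_log.bak';
DBCC SHRINKFILE (MyBigDb_log, 1024);   -- target size in MB
DBCC LOGINFO;                          -- re-check the VLF statuses
-- On SQL 2000 it can take a second BACKUP LOG + SHRINKFILE pass before
-- the active portion of the log wraps back to the head of the file.
```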

Moving data in multiple tables from old database to one table in new database

Posted: 29 Sep 2013 03:23 PM PDT

I have two SQL Server 2008 databases. One is for my old application (full of data) and the other is for a complete re-write of my application.

I want to migrate the data, but I have no idea how to tackle it. Some of the tables have exactly the same structure, but some tables in the new database are composed from columns of multiple tables in the old database.

Is there a way to create such a script/project/application in which I can define the data transformation while also maintaining constraints?

Need advice on this EER Diagram

Posted: 29 Sep 2013 09:44 AM PDT

This DB will work with a CakePHP app, and I need advice about the design. Is it a good idea to put orders at the center, or should I put clients at the center?

Clients have orders, and an order record can have only one domain or one hosting record. An order record cannot have both a domain and a hosting record at the same time; which one it has is decided by orders_types.

[EER diagram image]

Oracle missing table during recovery to dual mode

Posted: 29 Sep 2013 06:56 AM PDT

I'm running a dual-DB Oracle setup, but one of the databases has gone down; db2 is running in single mode now (probably a timeout on db1). When using the db_recovery tool, it says it can't find table x on the live DB, which is needed for the recovery and the return to the dual state. But when I check in sqlplus or radoui, table x is there.

I tried to recover again with the tool, but no luck.

So I'm a bit reluctant to drop and recreate it, since it's on a live system that needs this high-availability solution. What else could I try here?

MySQL or MongoDB for my specific requirements

Posted: 29 Sep 2013 05:56 AM PDT

I am working on a Search, Discovery and Analytics platform with a typical stack that looks like the one you see on this page- http://www.lucidworks.com/lucidworks-big-data/

I am in the process of designing the content acquisition module (please refer to the architecture in the link above). I need to figure out which data store is more suitable for my requirements (MongoDB or MySQL). Here is some information about the data input and what I do with that data:

  1. My input is basically a file (REST APIs to upload it) that contains JSON or XML (one of them).
  2. The content of this file varies from 1 million key-value records (in JSON) to minimal data (which is basically used as an incremental update on previous data, ~10 key-value pairs).
  3. I need to validate each key-value record against a schema.
  4. I store this data in my data store.
  5. I have multiple clients that can upload data and the data from two clients is completely independent..
  6. This data is then fed to Apache Solr for indexing.
  7. Content acquisition module is basically a J2EE stack.

Because of points 2 (incremental update) and 3, I have been thinking of using MySQL, but I am skeptical about its scalability. I know that we can handle it, but I am worried about the cost of scaling.

Can anyone help me decide, from their experience, what would be the best approach for me?

Isolation level is SERIALIZABLE but I sometimes get the error "SQL error: Deadlock found when trying to get lock; try restarting transaction"

Posted: 29 Sep 2013 05:55 AM PDT

I randomly find logs saying that a deadlock occurred. I couldn't figure out why, even after reading the InnoDB status. So I thought I could change the default isolation level to SERIALIZABLE, but I still get that error!

  • Why is this error still occurring while the isolation level is the highest one? How could a deadlock occur? Maybe I need to give the system some time for the change to take effect?
  • Could someone please clarify what is wrong? Why would the queries below conflict, and how are they conflicting in the first place?

This is the InnoDB status

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15766624
Server version: 5.1.66-log Source distribution

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show innodb status;
=====================================
130929 13:58:56 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 36 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 13784135, signal count 13557520
Mutex spin waits 0, rounds 427152487, OS waits 3885622
RW-shared spins 23284349, OS waits 8651736; RW-excl spins 32263860, OS waits 713978
------------------------
LATEST DETECTED DEADLOCK
------------------------
130929 13:54:26
*** (1) TRANSACTION:
TRANSACTION 0 3146969698, ACTIVE 0 sec, process no 17212, OS thread id 140169609684736 starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 368, 1 row lock(s)
MySQL thread id 15759332, query id 1502083966 213.158.168.181 mgelbana Updating
update probe_table set IsRunning='0', testagent_id ='0'
    where isRunning=20 and Mgmt_IP='10.26.4.170' and testagent_id ='7'
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 3932172 n bits 144 index `PRIMARY` of table `mschema`.`probe_table` trx id 0 3146969698 lock_mode X waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 38; compact format; info bits 0
 0: len 4; hex 80000004; asc     ;; 1: len 6; hex 0000bb91ab83; asc       ;; 2: len 7; hex 000016401f2125; asc    @ !%;; 3: len 30; hex 4e415352322d503031585f452d432d4547202d2064656c65746564206f6e; asc NASR2-P01X_E-C-EG - deleted on;...(truncated); 4: len 4; hex 80000000; asc     ;; 5: len 11; hex 34312e33332e3235322e32; asc 41.33.252.2;; 6: len 9; hex 31302e32362e302e32; asc 10.26.0.2;; 7: len 5; hex 4e41535232; asc NASR2;; 8: len 8; hex 4269672045646765; asc Big Edge;; 9: len 4; hex 80000001; asc     ;; 10: len 4; hex 80000001; asc     ;; 11: len 4; hex 80000001; asc     ;; 12: len 4; hex 80000001; asc     ;; 13: len 4; hex 80000002; asc     ;; 14: len 4; hex 80000000; asc     ;; 15: len 4; hex 80000001; asc     ;; 16: len 4; hex 8000000a; asc     ;; 17: len 4; hex 80000000; asc     ;; 18: len 4; hex 80000000; asc     ;; 19: len 4; hex 80000000; asc     ;; 20: len 4; hex 80000000; asc     ;; 21: len 4; hex 80000000; asc     ;; 22: len 4; hex 80000000; asc     ;; 23: len 4; hex 80000000; asc     ;; 24: len 4; hex 52460351; asc RF Q;; 25: len 4; hex 524814c0; asc RH  ;; 26: len 4; hex 80000000; asc     ;; 27: len 4; hex 259e8160; asc %  `;; 28: len 0; hex ; asc ;; 29: len 0; hex ; asc ;; 30: len 0; hex ; asc ;; 31: len 1; hex 34; asc 4;; 32: SQL NULL; 33: len 4; hex 52481486; asc RH  ;; 34: SQL NULL; 35: len 4; hex 80000000; asc     ;; 36: len 1; hex 81; asc  ;; 37: len 0; hex ; asc ;;

*** (2) TRANSACTION:
TRANSACTION 0 3146969696, ACTIVE 0 sec, process no 17212, OS thread id 140182939023104 starting index read, thread declared inside InnoDB 114
mysql tables in use 2, locked 2
10 lock struct(s), heap size 1216, 393 row lock(s)
MySQL thread id 15759333, query id 1502083964 213.158.168.179 mgelbana Updating
update probe_table T1 set IsRunning=10 where
    T1.Mgmt_IP='10.26.3.50'
    and ( select COUNT from (
    select count(*) COUNT
    from probe_table T2
    where T2.Mgmt_IP like ('10.26.3.50')
    and T2.Active=1
    and T2.IsDeleted=0
    and T2.IsRunning>0 ) as temp) =0
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 3932172 n bits 144 index `PRIMARY` of table `mschema`.`probe_table` trx id 0 3146969696 lock mode S
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

Record lock, heap no 2 PHYSICAL RECORD: n_fields 38; compact format; info bits 0
 0: len 4; hex 80000004; asc     ;; 1: len 6; hex 0000bb91ab83; asc       ;; 2: len 7; hex 000016401f2125; asc    @ !%;; 3: len 30; hex 4e415352322d503031585f452d432d4547202d2064656c65746564206f6e; asc NASR2-P01X_E-C-EG - deleted on;...(truncated); 4: len 4; hex 80000000; asc     ;; 5: len 11; hex 34312e33332e3235322e32; asc 41.33.252.2;; 6: len 9; hex 31302e32362e302e32; asc 10.26.0.2;; 7: len 5; hex 4e41535232; asc NASR2;; 8: len 8; hex 4269672045646765; asc Big Edge;; 9: len 4; hex 80000001; asc     ;; 10: len 4; hex 80000001; asc     ;; 11: len 4; hex 80000001; asc     ;; 12: len 4; hex 80000001; asc     ;; 13: len 4; hex 80000002; asc     ;; 14: len 4; hex 80000000; asc     ;; 15: len 4; hex 80000001; asc     ;; 16: len 4; hex 8000000a; asc     ;; 17: len 4; hex 80000000; asc     ;; 18: len 4; hex 80000000; asc     ;; 19: len 4; hex 80000000; asc     ;; 20: len 4; hex 80000000; asc     ;; 21: len 4; hex 80000000; asc     ;; 22: len 4; hex 80000000; asc     ;; 23: len 4; hex 80000000; asc     ;; 24: len 4; hex 52460351; asc RF Q;; 25: len 4; hex 524814c0; asc RH  ;; 26: len 4; hex 80000000; asc     ;; 27: len 4; hex 259e8160; asc %  `;; 28: len 0; hex ; asc ;; 29: len 0; hex ; asc ;; 30: len 0; hex ; asc ;; 31: len 1; hex 34; asc 4;; 32: SQL NULL; 33: len 4; hex 52481486; asc RH  ;; 34: SQL NULL; 35: len 4; hex 80000000; asc     ;; 36: len 1; hex 81; asc  ;; 37: len 0; hex ; asc ;;

//Truncated similar lines as before

*** WE ROLL BACK TRANSACTION (1)
------------
TRANSACTIONS
------------
Trx id counter 0 3147484722
Purge done for trx's n:o < 0 3147480777 undo n:o < 0 0
History list length 558
... truncated...
17212, OS thread id 140187977627392
MySQL thread id 11989168, query id 1502566489 213.158.168.179 mgelbana
---TRANSACTION 0 3147430451, not started, process no 17212, OS thread id 140175652677376
MySQL thread id 11989169, query id 1502528171 213.158.168.179 mgelbana
---TRANSACTION 0 3147470893, not started, process no 17212, OS thread id 140187965114112
MySQL thread id 11989159, query id 1502566515 213.158.168.179 mgelbana
---TRANSACTION 0 3147470786, not started, process no 17212, OS thread id 140187952334592
MySQL thread id 11989152, query id 1502566500 213.158.168.179 mgelbana
---TRANSACTION 0 3147408351, not started, process no 17212, OS thread id 140187960588032
MySQL thread id 11989157, query id 1502507046 213.158.168.179 mgelbana
---TRANSACTION 0 3147400379, not started, process no 17212, OS thread id 140187950204672
MySQL thread id 11989158, query id 1502499405 213.158.168.179 mgelbana
---TRANSACTION 0 3147465854, not started, process no 17212, OS thread id 140187944347392
MySQL thread id 11989154, query id 1502566464 213.158.168.179 mgelbana
---TRANSACTION 0 3147470810, not started, process no 17212, OS thread id 140187977361152
MySQL thread id 11989151, query id 1502566420 213.158.168.179 mgelbana
---TRANSACTION 0 3147479125, not started, process no 17212, OS thread id 140174847637248
MySQL thread id 11989149, query id 1502574522 213.158.168.179 ipk
---TRANSACTION 0 3147448043, not started, process no 17212, OS thread id 140187974166272
MySQL thread id 11989148, query id 1502545014 213.158.168.179 ipk
---TRANSACTION 0 3147479147, not started, process no 17212, OS thread id 140187962451712
MySQL thread id 11989147, query id 1502574557 213.158.168.179 ipk
---TRANSACTION 0 3147479131, not started, process no 17212, OS thread id 140187975763712
MySQL thread id 11989146, query id 1502574536 213.158.168.179 ipk
---TRANSACTION 0 3147479114, not started, process no 17212, OS thread id 140187952867072
MySQL thread id 11989145, query id
1502574506 213.158.168.179 ipk  ---TRANSACTION 0 3147479446, not started, process no 17212, OS thread id 140187981354752  MySQL thread id 11989122, query id 1502574882 213.158.168.179 ipk  ---TRANSACTION 0 3147479453, not started, process no 17212, OS thread id 140187966711552  MySQL thread id 11989121, query id 1502574891 213.158.168.179 ipk  ---TRANSACTION 0 3147479465, not started, process no 17212, OS thread id 140187970705152  MySQL thread id 11989120, query id 1502574909 213.158.168.179 ipk  ---TRANSACTION 0 3147479458, not started, process no 17212, OS thread id 140187956594432  MySQL thread id 11989119, query id 1502574897 213.158.168.179 ipk  ---TRANSACTION 0 3147479469, not started, process no 17212, OS thread id 140187940087552  MySQL thread id 11989118, query id 1502574920 213.158.168.179 ipk  ---TRANSACTION 0 3147484721, ACTIVE 0 sec, process no 17212, OS thread id 140187978159872 committing  MySQL thread id 15766642, query id 1502579811 213.158.168.181 mgelbana Sending data  select probe_table.id from probe_table, testagent_probe_availability                  where                  testagent_probe_availability.probe_id=probe_table.id                  and testagent_probe_availability.testagent_id='7'                  and probe_table.Mgmt_IP='10.26.3.46'                  and testagent_probe_availability.LastUpdate <NOW() - INTERVAL 15 SECOND  Trx read view will not see trx with id >= 0 3147484722, sees < 0 3147484722  ---TRANSACTION 0 3147484718, COMMITTED IN MEMORY, process no 17212, OS thread id 140187965380352 committing, thread declared inside InnoDB 498  mysql tables in use 1, locked 1  , undo log entries 1  MySQL thread id 15759694, query id 1502579810 213.158.168.179 mgelbana freeing items  INSERT INTO                                  test_execution_instance_test_execution                                  (                                  test_execution_instance_id,                                  test_execution_id,                              
    tdr_id                                  )                                  VALUES                                  (                                  9845737,                                  229,                                  56692353                                  )                                  ON DUPLICATE KEY                                  UPDATE tdr_id=56692353  --------  FILE I/O  --------  I/O thread 0 state: waiting for i/o request (insert buffer thread)  I/O thread 1 state: waiting for i/o request (log thread)  I/O thread 2 state: waiting for i/o request (read thread)  I/O thread 3 state: waiting for i/o request (write thread)  Pending normal aio reads: 0, aio writes: 0,   ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0  Pending flushes (fsync) log: 0; buffer pool: 0  21554657 OS file reads, 214798857 OS file writes, 168300939 OS fsyncs  0.86 reads/s, 16384 avg bytes/read, 342.71 writes/s, 318.82 fsyncs/s  -------------------------------------  INSERT BUFFER AND ADAPTIVE HASH INDEX  -------------------------------------  Ibuf: size 17, free list len 716, seg size 734,  3291458 inserts, 3287812 merged recs, 782100 merges  Hash table size 8850487, node heap has 20003 buffer(s)  33110.58 hash searches/s, 9787.06 non-hash searches/s  ---  LOG  ---  Log sequence number 1529 376191403  Log flushed up to   1529 376191403  Last checkpoint at  1529 370275038  0 pending log writes, 0 pending chkp writes  166206736 log i/o's done, 317.96 log i/o's/second  ----------------------  BUFFER POOL AND MEMORY  ----------------------  Total memory allocated 4739435474; in additional pool allocated 1048576  Dictionary memory allocated 854560  Buffer pool size   262144  Free buffers       0  Database pages     242141  Modified db pages  2700  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages read 96288321, created 2946757, written 79786511  0.86 reads/s, 1.56 creates/s, 30.25 writes/s  Buffer pool hit rate 1000 / 1000  --------------  ROW 
OPERATIONS  --------------  0 queries inside InnoDB, 0 queries in queue  1 read views open inside InnoDB  Main thread process no. 17212, id 140182950844160, state: sleeping  Number of rows inserted 155157892, updated 1604892534, deleted 4001463, read 205802315876  109.25 inserts/s, 348.57 updates/s, 0.00 deletes/s, 219954.89 reads/s  ----------------------------  END OF INNODB MONITOR OUTPUT  ============================   |  +--------+------+-----------+  1 row in set, 1 warning (0.00 sec)    mysql>  

Why can't I change the isolation levels of my connection in MySQL?

Posted: 29 Sep 2013 04:50 AM PDT

I am trying to experiment a bit with the isolation level of MySQL in a test environment.
I do the following:

mysql> set @@session.tx_isolation='READ-UNCOMMITED';  ERROR 1231 (42000): Variable 'tx_isolation' can't be set to the value of 'READ-UNCOMMITED'  

This also fails:

mysql> update information_schema.session_variables set variable_value='READ-UNCOMMITED' where variable_name='TX_ISOLATION';  ERROR 1044 (42000): Access denied for user 'root'@'localhost' to database 'information_schema'   

What am I doing wrong here? I am root, so it should not be a permissions issue.
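For reference, the value in the failing SET appears to be misspelled: MySQL expects READ-UNCOMMITTED with two Ts. A sketch of the forms that should be accepted (per the MySQL 5.x manual); the information_schema route will never work regardless of privileges, since those tables are read-only:

```sql
-- 'READ-UNCOMMITTED' (double T) is the spelling MySQL accepts
SET @@session.tx_isolation = 'READ-UNCOMMITTED';

-- equivalent standard-SQL form
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- verify the change took effect
SELECT @@session.tx_isolation;
```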

How to estimate the I/O operations performed by MySQL queries? [duplicate]

Posted: 29 Sep 2013 03:19 AM PDT


On Amazon's RDS, you pay about $0.10 per 1 million I/O requests. I've noticed a very high number (in the hundreds of thousands) of I/O requests for a very low-traffic database. On seeing this, I did some further research and found this question, which describes a 6,000-user website generating 800 million I/O requests a month, which would cost about $80/month.

Therefore, I want to know in advance how many I/O operations a MySQL query would generate, and how to optimize / minimize them. Is there any way to estimate how many I/O operations a query would perform, and any general rules I can follow to keep them as low as possible?
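There is no exact client-side formula, but two rough tools may help: EXPLAIN estimates how many rows (and via which index) a query will touch, and the InnoDB status counters show actual physical I/O, so a workload can be measured before and after adding indexes. A sketch; the orders table and customer_id column are hypothetical:

```sql
-- rough estimate of rows examined and index used
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- physical I/O counters: sample before and after running the workload
SHOW GLOBAL STATUS LIKE 'Innodb_data_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';
```

The general rules are the usual ones: cover frequent queries with indexes so they avoid full scans, and keep the working set small enough to stay in the buffer pool.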

Handling empty fields

Posted: 29 Sep 2013 12:13 AM PDT

SELECT Name,  SUM(Switch([Date] = DateAdd('d', -7, Date()), NetAmount)) AS DAY1  FROM Customer  GROUP BY Name  

My question is how to return the value 0 when the date is not present (an empty field). Should I use SWITCH or IIF or something else?
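Assuming this is Access SQL (as Switch and DateAdd suggest), one option is to wrap the aggregate in Nz(), which substitutes 0 when the sum comes back Null; a sketch, untested:

```sql
SELECT Name,
       Nz(SUM(Switch([Date] = DateAdd('d', -7, Date()), NetAmount)), 0) AS DAY1
FROM Customer
GROUP BY Name;
```

Outside of Access itself (e.g. via Jet/ACE from code), IIf(IsNull(x), 0, x) around the sum is the equivalent.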

pg_upgrade unrecognized configuration parameter "unix_socket_directory"

Posted: 29 Sep 2013 12:16 AM PDT

I'm trying to upgrade PostgreSQL from 9.2 to 9.3 on Fedora 18, running this command as the postgres user:

$ pg_upgrade -b /bin -B /usr/pgsql-9.3/bin -d /var/lib/pgsql/data -D /var/lib/pgsql/9.3/data/ -j 2 -u postgres  

The error in the log

command: "/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/data" -o "-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'" start >> "pg_upgrade_server.log" 2>&1 waiting for server to start....FATAL: unrecognized configuration parameter "unix_socket_directory" .... stopped waiting pg_ctl: could not start server

As pointed out by a_horse in the comments, that parameter was replaced by unix_socket_directories (plural) in 9.3. But the server version being started is the old one, 9.2:

$ /bin/pg_ctl --version  pg_ctl (PostgreSQL) 9.2.4  

Any ideas?

Connecting to Oracle Database with ODBC/Python

Posted: 29 Sep 2013 01:24 PM PDT

We've recently installed a new piece of software that uses an Oracle Database to store the data. My goal is to connect to that database and pull the data from the back end to do analysis on. I'm having the world's most difficult time connecting to it.

I need to access it two ways:

  1. ODBC - I would like to set up an ODBC connection in Windows 7 so that I can pull the information using Excel. I know how to set up the connection, but I cannot for the life of me manage to set up the Oracle drivers correctly. All the documentation links for this piece of software go to dead/changed pages; apparently the Oracle website has changed since the documentation was written.

  2. Python - From what I can tell, cx_Oracle seems to be the best module, but again I can't seem to set it up correctly. And again, most documentation refers to sections of the Oracle website that has since changed. I'm not sold on using cx_Oracle if someone has a better recommendation.

Thanks for looking over this, and I appreciate your help.

MySQL User with Host set to a Range of IP Addresses

Posted: 29 Sep 2013 04:24 AM PDT

I have a hosted MySQL DB that I need to give access to from my customers site. They have a fast broadband connection which can present them with a range of IP addresses from a fixed pool.

So I need to lock down access for their MySQL user so that the Host setting covers a range of addresses within a class, not the whole class.

I realise I can use a wildcard such as 81.91.71.% but that is too generous. Their range is more like

81.10.20.1 --> 81.10.20.15

Is there a way to do this other than creating 15 users, one for each individual IP address? Thanks, Mark
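MySQL account hosts can also take an IP/netmask form, though in the 5.x era the manual only documents masks on 8/16/24/32-bit boundaries, so a /28 such as 81.10.20.0 through 81.10.20.15 is not directly expressible; the closest single account is the /24, which is no tighter than the wildcard. A sketch with a hypothetical account name:

```sql
-- netmask form: equivalent in scope to the wildcard '81.10.20.%'
CREATE USER 'mark_app'@'81.10.20.0/255.255.255.0' IDENTIFIED BY 'secret';
```

For the exact .1-.15 range, a firewall rule in front of MySQL may be the more practical place to enforce it.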

Getting the most out of SQL Server with a million rows a day

Posted: 29 Sep 2013 03:24 AM PDT

I have a windows service that is inserting 1000 rows at 1 minute intervals to a SQL Server 2008 Standard database table that looks like:

id, datetime, key_id, value

key_id is a foreign key to another table that looks like id, name

I'm building a localhost website reporting interface in PHP. I will be doing queries from this interface like:

  • Last 7 days of data for key_id's 1,2,50,377 at 1 minute intervals
  • Last 30 days of data for key_id's 40,47,732,400,43,22,18,5,14 at 1 hour intervals
  • 1 hour intervals for key_id 7,8,20,40 for all of May 2009

And I want it to be as fast as possible.

How should I tune SQL Server to be performant in this case?
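One common starting point for this access pattern (every query filters on key_id plus a date range) is a composite index leading on key_id; the table and index names below are hypothetical stand-ins for the schema described:

```sql
-- cluster on the always-filtered pair; the identity column can remain
-- as a nonclustered primary key
CREATE CLUSTERED INDEX IX_readings_key_time
    ON dbo.readings (key_id, [datetime]);
```

When hourly-interval reports dominate, pre-aggregating into an hourly summary table is the usual next step, so the 30-day queries read thousands of rows instead of millions.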

Removing "duplicates" within a time range

Posted: 29 Sep 2013 07:24 AM PDT

I have a table that contains number plate data with a date and time stamp, collected by a plate-reading CCTV camera at a site entrance. Vehicles stopping at the site entrance will create multiple entries of the same plate for a short period, with slightly different entry times, as in the example below:

PLATE   | Date_in    | TIME_IN
DE54RPY | 2013-08-29 | 14-24-30
DE54RPY | 2013-08-29 | 14-24-36
DE54RPY | 2013-08-29 | 14-24-42
DE54RPY | 2013-08-29 | 14-24-48

I found another solution on this site that suggested using this to filter out only the minimum timestamp for each plate:

select b.*
from (select plate, date_in, MIN(time_in) time_in
      from tblin
      group by plate, date_in) A
inner join tblin B using (plate, date_in, time_in)

The issue arises when a vehicle leaves the site and returns a while later, producing more entries in the table. These later entries will also be removed by the example solution above.

I need to find a way of filtering out a single instance (minimum or maximum) of a plate detection within a certain time period (e.g: a few minutes), and I'm not sure how to approach this. Could anybody here help me?
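One approach is a gap filter: keep a detection only when the same plate has no earlier detection within the chosen window, so the first read of each visit survives while follow-up reads are dropped. A sketch in MySQL syntax, assuming a 5-minute window and that TIME_IN is stored as a TIME value:

```sql
SELECT a.*
FROM tblin a
WHERE NOT EXISTS (
    SELECT 1
    FROM tblin b
    WHERE b.plate = a.plate
      -- an earlier read of the same plate...
      AND TIMESTAMP(b.date_in, b.time_in) < TIMESTAMP(a.date_in, a.time_in)
      -- ...within the last 5 minutes suppresses this row
      AND TIMESTAMP(b.date_in, b.time_in) >= TIMESTAMP(a.date_in, a.time_in) - INTERVAL 5 MINUTE
);
```

A vehicle that leaves and returns later has no read of its plate in the preceding 5 minutes, so its first new detection is kept.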

MySQL table architecture

Posted: 29 Sep 2013 10:24 AM PDT

Background information:

I have a table containing upload information. Every day I get more and more uploads, but most of the queries I run center on information gathered within the last 12 months; frequently the time scale is even tighter, with queries restricted to information gathered in the last 30 days.

This system has been in existence since 2004, when there were 400 uploads; today there are 2+ million.

Table structure:

CREATE TABLE `data_mediagallery` (    `id` int(11) NOT NULL AUTO_INCREMENT,    `status` tinyint(3) unsigned NOT NULL DEFAULT '0',    `contenttype` char(40) NOT NULL DEFAULT '',    `filename` varchar(100) NOT NULL DEFAULT '',    `injector` char(40) NOT NULL DEFAULT '',    `hits` int(11) DEFAULT '0',    `message` longtext,    `date` datetime DEFAULT NULL,    `title` varchar(80) NOT NULL DEFAULT '',    `tags` varchar(255) NOT NULL DEFAULT '',    `metadata` blob,    `location` char(8) NOT NULL DEFAULT '',    `uid` int(11) unsigned NOT NULL DEFAULT '0',    `filesize` bigint(20) NOT NULL DEFAULT '0',    `upload` datetime DEFAULT NULL,    `privacy` tinyint(3) unsigned NOT NULL DEFAULT '0',    `width` int(10) unsigned NOT NULL DEFAULT '0',    `height` int(10) unsigned NOT NULL DEFAULT '0',    `offensive` int(10) unsigned NOT NULL DEFAULT '0',    `sourcelocation` char(8) NOT NULL DEFAULT '',    `autoblog` tinyint(1) NOT NULL DEFAULT '0',    `extension` char(10) NOT NULL DEFAULT '',    `filetype` tinyint(3) unsigned NOT NULL DEFAULT '0',    `conversiontime` float NOT NULL DEFAULT '0',    `converttime` datetime DEFAULT NULL,    `sender` varchar(100) NOT NULL DEFAULT '',    `vhost` int(10) unsigned NOT NULL DEFAULT '0',    `channel` int(10) unsigned NOT NULL DEFAULT '0',    `rotation` tinyint(3) unsigned NOT NULL DEFAULT '0',    `ofilesize` bigint(20) NOT NULL DEFAULT '0',    `moderationstatus` tinyint(3) unsigned NOT NULL DEFAULT '0',    `rating` decimal(8,6) DEFAULT NULL,    `votecount` int(10) unsigned NOT NULL DEFAULT '0',    `url` varchar(150) NOT NULL DEFAULT '',    `geo_latitude` double DEFAULT NULL,    `geo_longitude` double DEFAULT NULL,    `length` decimal(8,2) DEFAULT '0.00',    `parentid` int(11) NOT NULL DEFAULT '0',    `language` char(2) NOT NULL DEFAULT '',    `author` varchar(100) NOT NULL DEFAULT '',    `context` tinyint(3) unsigned NOT NULL DEFAULT '0',    `externalid` varchar(255) DEFAULT NULL,    `originalsaved` bit(1) NOT NULL DEFAULT b'1',    `hidden` 
tinyint(4) NOT NULL DEFAULT '0',    `commentcount` int(11) NOT NULL DEFAULT '0',    `approvedcomments` int(11) NOT NULL DEFAULT '0',    `notdeniedcomments` int(11) NOT NULL DEFAULT '0',    `lastupdatetime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,    `channelleft` int(10) unsigned NOT NULL DEFAULT '0',    `originalLocation` char(8) NOT NULL DEFAULT '',    PRIMARY KEY (`id`),    KEY `upload` (`upload`),    KEY `vhostupload` (`vhost`,`upload`),    KEY `vhostmodstatus` (`vhost`,`status`,`moderationstatus`,`uid`),    KEY `complexfiletype` (`vhost`,`status`,`moderationstatus`,`filetype`,`channel`),    KEY `vhostcontext` (`vhost`,`moderationstatus`,`context`,`parentid`,`status`,`filetype`),    KEY `externalid` (`externalid`),    KEY `externalcomments`.    KEY `vhostchannel` (`vhost`,`status`,`moderationstatus`,`context`,`channelleft`)  ) ENGINE=InnoDB;  

Questions

Is there a way to partition the table that would make the most sense? Does partitioning even make sense? How do I deal with new data if I do partition?
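If partitioning is pursued, monthly RANGE partitions on upload would line up with the "last 30 days / last 12 months" access pattern. Note that MySQL requires the partitioning column to be part of every unique key, so the primary key would have to become (id, upload). A sketch, with partition names and dates purely illustrative:

```sql
ALTER TABLE data_mediagallery
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (id, upload);

ALTER TABLE data_mediagallery
PARTITION BY RANGE (TO_DAYS(upload)) (
    PARTITION p201308 VALUES LESS THAN (TO_DAYS('2013-09-01')),
    PARTITION p201309 VALUES LESS THAN (TO_DAYS('2013-10-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

New data lands in pmax; a periodic job would split it with ALTER TABLE ... REORGANIZE PARTITION to add the next month's partition. Whether this helps depends on whether the hot queries actually prune to a few partitions; EXPLAIN PARTITIONS can confirm that.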

mongodb user for ubuntu EC2 instance

Posted: 29 Sep 2013 08:24 AM PDT

I am trying to install mongodb on an Ubuntu EC2 instance. However, I am confused about which user the DB should run as:

If I follow: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

Then it says : "mongodb"

If I follow : http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-amazon-ec2/

It says : "mongod"

I think this has left me in an inconsistent state: there is a mongodb process in the ps output, but sudo service mongodb status (or stop) says: Unknown Instance.

Which user should mongodb run as?

Can't change root password: "The system cannot find the file specified."

Posted: 29 Sep 2013 09:24 AM PDT

I'm trying to change the root password in MySQL on my development machine (I've just installed MySQL, so it currently doesn't have a password), but it keeps failing with the following error message:

The system cannot find the file specified.

I'm using MySQL 5.1.70 (x86, 32-bit) on Windows 7 SP1 (64 bits). I've added MySQL's "bin" directory to my "Path" environment variable.

In the comments of the MySQL documentation, I read that I should have installed the service using the absolute path, so I stopped MySQL, and uninstalled it:

C:\Windows\system32>mysqld --remove  Service successfully removed.  

Then I installed it again, using the absolute path this time:

C:\Windows\system32>C:\web\mysql-5.1.70\bin\mysqld.exe --install  Service successfully installed.  

I started MySQL, and tried to change the password again:

C:\Windows\system32>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!  The system cannot find the file specified.  

I also tried with quotes:

C:\Windows\system32>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'  The system cannot find the file specified.  

I also tried to change the current directory to MySQL's "bin" directory:

C:\Windows\system32>cd C:\web\mysql-5.1.70\bin    C:\web\mysql-5.1.70\bin>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!  The system cannot find the file specified.    C:\web\mysql-5.1.70\bin>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'  The system cannot find the file specified.  

What's wrong?
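One likely culprit here is the shell rather than MySQL: in cmd.exe, < is the input-redirection operator and single quotes are not quoting characters, so every attempt above asks cmd to read input from a file named ZFE5!, and a missing redirection file is exactly what produces "The system cannot find the file specified." Double quotes should get the password through intact:

```
C:\web\mysql-5.1.70\bin>mysqladmin -u root password "Pe%8XiduwOqdZ<ZFE5!"
```

(If this command is later moved into a batch file, the % character would additionally need escaping as %%.)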

How to take partitions or filegroups out of service

Posted: 29 Sep 2013 02:24 AM PDT

I'm an experienced SQL Server DBA but new to partitioning, and I have a couple of questions. Using SQL Server 2008 R2 Enterprise Edition.

I've inherited a large customer metrics database that grows by about 10 GB per day. This database currently consists of one large data file in one filegroup (PRIMARY). All tables have a datetime column called InsertedDate. I want to horizontally partition the data by InsertedDate, using a separate data file for each calendar week.

In a test environment I added the required additional filegroups and data files to this database, put a clustered index on InsertedDate in each table, and set up the partition function and partition scheme. By querying sys.partitions and other system tables, I've confirmed that the data is now physically residing in the correct partitions and data files.

Among others, the goals are:

  • Decrease backup time by only backing up the PRIMARY file group and the file group for the current date range (I'm currently running nightly full backups). Once a date range is in the past, that partition will never be written to again, so I'd like to set the filegroup to read-only and back it up one final time.

  • Be able to eventually take a partition "out of service". After 3 months there's no longer a need to keep older data online, so I'd like to take that data offline (but be able to bring it back online again if necessary).

Questions:

1) How do I perform backups on this partitioned database without having to back up the entire database? I can back up an individual filegroup, but restoring it requires the rest of the database, which defeats the purpose of using multiple smaller data files.

2) How do I take a partition out of service? I've read about switching, but it seems that only works if you want to move data between partitions within the same database. I want to be able to simply and safely take a range of data offline (and bring it back online if necessary).
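For both questions, the usual pattern on Enterprise Edition is read-only filegroups combined with partial (piecemeal) backup and restore; the filegroup and path names below are hypothetical:

```sql
-- 1) freeze a completed week and capture it one final time
ALTER DATABASE Metrics MODIFY FILEGROUP FG_2013W39 READ_ONLY;
BACKUP DATABASE Metrics
    FILEGROUP = 'FG_2013W39'
    TO DISK = 'E:\backup\Metrics_FG_2013W39.bak';

-- routine backups then only touch the writable filegroups
BACKUP DATABASE Metrics
    READ_WRITE_FILEGROUPS
    TO DISK = 'E:\backup\Metrics_active.bak';
```

For taking data out of service, piecemeal restore is the complement: restore PRIMARY and the active filegroups first, leave old filegroups offline, and restore an old filegroup's backup later only if that date range is needed again.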

Fulltext stoplist replication

Posted: 29 Sep 2013 08:24 PM PDT

In MS SQL Server 2008 R2, I am replicating a table with a fulltext index on it, but the stoplist associated with the replicated fulltext index doesn't replicate.

Is there any way to replicate the stoplist as well?

BIT columns all "1" after a phpMyAdmin export/import

Posted: 29 Sep 2013 01:24 AM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I exported my database from my local machine using phpMyAdmin, then imported the script file at my host. All of the data in columns of BIT type changed to '1'. The hosting database version is 5.5.29.

Is this a phpMyAdmin problem or a MySQL version problem? How can I fix it?
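phpMyAdmin's handling of BIT literals in SQL exports has been unreliable across versions, so a common workaround is to avoid dumping raw BIT values at all; a sketch assuming a hypothetical flag column is_active:

```sql
-- export-side: dump the bit as a plain 0/1 integer so it survives as text
SELECT id, CAST(is_active AS UNSIGNED) AS is_active FROM mytable;

-- or sidestep the issue entirely: single-bit flags round-trip cleanly as TINYINT(1)
ALTER TABLE mytable MODIFY is_active TINYINT(1) NOT NULL DEFAULT 0;
```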

Postgres caching and bytea

Posted: 29 Sep 2013 06:24 AM PDT

I have a DB which, among other things, stores images (as bytea, if that's interesting). It also stores users' metadata, activity logs, etc., but images currently take 500MB out of the total 600MB data.

I'm a complete newbie, but if I understand correctly PG has (at least) 2 pools of buffers, one for indexes and one for "heap" (data).

I imagine that my images 'pollute' the heap cache, making virtually nothing else cached (the cache obviously is not large enough to accommodate all the images).

Can I stop the image blocks only (I don't mind the image indexes being cached) from being cached?
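Before tuning anything, it may be worth checking where the bytes actually live: bytea values larger than roughly 2 kB are TOASTed, i.e. compressed and stored out of line in a separate TOAST relation that is only read when the column is actually fetched, so large images may pollute the cache less than feared. A sketch (the images table name is hypothetical):

```sql
-- main heap vs. total size including TOAST and indexes
SELECT pg_size_pretty(pg_relation_size('images'))       AS main_heap,
       pg_size_pretty(pg_total_relation_size('images')) AS total_with_toast;
```

If most of the 500MB turns out to sit in TOAST, the rest of the data has far more of shared_buffers to itself than the raw sizes suggest.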

Show processlist / Max concurrent connections seem to max out at 131

Posted: 29 Sep 2013 12:24 AM PDT

When I put my database under a load test (basically just a basic write and a basic read from our app) and run show processlist; in the middle of it, there are always exactly 131 rows in the set. I'm running a master-slave setup with 16GB of memory on each, on Joyent Percona instances based on Solaris 10. They are each configured to use up to 8GB of memory and 5000 max concurrent connections. What could be limiting it to 131?
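Before hunting for an OS-level limit, it may help to confirm what the server itself saw during the test; these are standard MySQL status/variable queries:

```sql
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```

If Max_used_connections also tops out at 131, the cap is being hit on the way in (client pool size, per-process thread limits, or an OS file-descriptor limit) rather than at max_connections itself.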

Deadlock free MERGE-alike in MySQL?

Posted: 29 Sep 2013 05:24 AM PDT

I am trying to implement some parts of MERGE in the MySQL driver in Drupal. Of course, Drupal has something but in truth it only works because the most frequent MERGE issuer just eats exceptions.

So, whatever we try, deadlocks occur. What we do: we start a transaction, then SELECT ... FOR UPDATE, try an INSERT, and if it causes a 23xxx integrity error, try an UPDATE instead. Deadlocks. We removed the FOR UPDATE because we decided that for our use it's OK. Still deadlocks.

I can't just switch isolation levels because READ COMMITTED needs row logging per http://dev.mysql.com/doc/refman/5.1/en/set-transaction.html

As of MySQL 5.1, if you use READ COMMITTED [...] you must use row-based binary logging.

And per http://stackoverflow.com/a/2032096/308851 READ UNCOMMITTED also needs row logging. And here comes http://dev.mysql.com/doc/refman/5.1/en/binary-log-setting.html

To change the global binlog_format value, you must have the SUPER privilege. This is also true for the session value as of MySQL 5.1.29.

I can't require every Drupal setup to have SUPER, nor can we say that Drupal is incompatible with statement-based binlogs when that's the default and the most widespread.

INSERT ... ON DUPLICATE KEY is neither versatile enough nor is it deadlock free http://bugs.mysql.com/bug.php?id=52020

So what now?

sql server database sharding - what to do with common data / non sharded data

Posted: 29 Sep 2013 07:24 PM PDT

We have a very large scale, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow heavier as the company grows. SQL proc optimization has been performed, and hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads.

We have decided what particular data should be sharded. It is a subset of our database which is highly utilized.

However, my question is regarding the non-sharded data, which is common/universal. Examples of such data might be an Inventory table, an Employee table, a user table, etc.

I see two options to handle this common/universal data:

1) design 1 - Place the common/universal data in an external database. All writes will occur here. This data will then be replicated down to each shard allowing each shard to read this data and inner join to this data in t-sql procs.

2) design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and utilize sql merge replication to update/sync this data on all other shards.

concerns about design #1

1) Transactional issues: if you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now exists on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes into a distributed transaction since they span databases. Performance is a concern here, and rewrites may be needed for procs that write to both sharded and common data.

2) A loss of referential integrity: it is not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips: like #1 above, when you must update both sharded data and common data, you will make multiple round trips since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as the three above.

concerns about design #2

In design #2, each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work just as it does today, with very little recoding needed from the development team. However, this design depends completely on merge replication to keep data in sync across all shards. The DBAs are highly skilled but very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery would not be easy and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a 3rd or 4th design option that I do not see.

Thank you in advance.

Problem with PIVOT [on hold]

Posted: 29 Sep 2013 01:56 AM PDT

I am facing a small issue while using a PIVOT table. My problem is as follows.

I have a single table with procedure IDs and their steps, and I want all the steps for a sequence to be displayed across 5 columns, then the next sequence...

Correct output with where condition inside VIEW


S18CW | 001.000 | **Laser Scribe    | Mark Mask Layer (5500/100)    |PTest w/rework, |dispo, & scrap WAIT state feature |Mark Photostrip    |NBL Mask Layer (5500/100)  |NBL Implant|**  

The bold columns are the steps for lot 001.000, so I am using the query below to get the above data:

select      flow,      MainProcSeqNo,      Oper,      isnull([1],'') as Oper1,      isnull([2],'') as Oper2,      isnull([3],'') as Oper3,      isnull([4],'') as Oper4,      isnull([5],'') as Oper5  from (      SELECT DISTINCT TOP (100) PERCENT          fl.Flow,          fl.CallProcTitle AS oper,          fl.MainProcSeqNo,          fl2.CallProcTitle,          DENSE_RANK() OVER (              PARTITION BY fl.MainProcSeqNo              order by CAST(fl2.MainProcSeqNo AS float) * 1000 - CAST(fl.MainProcSeqNo AS float) * 1000          ) AS nxt      FROM          dbo.PromisAllFlows AS fl      LEFT OUTER JOIN          dbo.PromisAllFlows AS fl2      ON          fl2.Flow = fl.Flow      where          CAST(fl2.MainProcSeqNo AS float) * 1000 - CAST(fl.MainProcSeqNo AS float) * 1000 BETWEEN 1 AND 5000          **and fl.flow='S18CW'**  ) as a   pivot (      max(CallProcTitle)      for nxt in ([1],[2],[3],[4],[5])  ) as pvt order by MainProcSeqNo  

My problem is that when I use the WHERE condition inside the subquery it works perfectly, but if I use it outside the view it gives me the wrong output: all blanks instead of data in the pivot table.

