Thursday, August 15, 2013

[how to] SQL Database Query [on hold]

SQL Database Query [on hold]

Posted: 15 Aug 2013 08:56 PM PDT

select CustomerName, City, CustomerReferrelID, Customerid, CustomerCode, RightId, leftID
from customers
where rightid > 62000
  and leftid >= 62000
  and customerid not between 70000 and 75000
  and customerid not in (62927, 63056, 63060, 63069, 63072, 63126, 64095, 64101,
      64103, 64112, 64119, 64142, 64153, 64159, 75095, 75237, 75401, 75489, 75512,
      75530, 75540, 75558, 75566, 75655, 75656, 75787, 75897, 75898, 75928, 75936,
      76081, 76198, 76486, 76716, 76717, 76731, 76774, 76777, 76929, 77165, 77184,
      77571, 77592, 77731, 77732, 77925, 77941)

order by rightid asc

I want to get the creationdate of customerid, leftid and rightid from the customers table. All info is present in the customers table, and customerid is the primary key.

Note: rightid and leftid are themselves customerids of other customers.
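A self-join sketch of what the poster seems to want, illustrated with Python's sqlite3 and invented sample data (only customerid, leftid, rightid and creationdate are taken from the question; the rest of the schema is hypothetical): since leftid and rightid are themselves customerids, join customers back onto itself once per reference.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    customerid   INTEGER PRIMARY KEY,
    creationdate TEXT,
    leftid       INTEGER,
    rightid      INTEGER
);
INSERT INTO customers VALUES
    (1, '2013-01-01', 2, 3),
    (2, '2013-02-01', NULL, NULL),
    (3, '2013-03-01', NULL, NULL);
""")
# Join customers back onto itself: l resolves leftid, r resolves rightid.
rows = conn.execute("""
    SELECT c.customerid,
           c.creationdate,
           l.creationdate AS left_creationdate,
           r.creationdate AS right_creationdate
    FROM customers c
    LEFT JOIN customers l ON l.customerid = c.leftid
    LEFT JOIN customers r ON r.customerid = c.rightid
    WHERE c.customerid = 1
""").fetchall()
print(rows)  # [(1, '2013-01-01', '2013-02-01', '2013-03-01')]
```

LEFT JOIN rather than INNER JOIN keeps customers whose leftid or rightid is empty.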

thanks in advance.

Use Oracle Wallet without autologin enabled

Posted: 15 Aug 2013 06:39 PM PDT

Question: is it possible to use the Oracle Wallet without enabling auto login in Wallet manager?

Problem I'm facing: When auto login is disabled, the sqlplus call:

sqlplus /@orcl  

fails with:

TNS:wallet open failed  

When enabling autologin, it connects automatically.

The expected behaviour is that, when autologin is disabled, you are asked for the wallet password, and can then connect.

How to compare data between two MySQL tables

Posted: 15 Aug 2013 05:50 PM PDT

What I have: the following structure:

table_ChrM
  • id (PRIMARY KEY, auto increment)
  • LOCUS (varchar(9)), example value: At1g30500 (fixed number of LOCUS values = 173 rows)
  • StartPoint (INT) (exclusive value assigned by LOCUS)
  • EndPoint (INT) (exclusive value assigned by LOCUS)

table_DMRs
  • id (PRIMARY KEY, auto increment)
  • StartPos (INT) (exclusive value assigned by id)
  • EndPos (INT) (exclusive value assigned by id)
  • Other...

What I want: select all values from table_DMRs that fulfill the following condition:

EndPos >= EndPoint AND StartPos <= StartPoint

Do this for every LOCUS in table_ChrM (173 in total) and get a (LOCUS, EndPos, StartPos) result set. For example, if I search with the At1g30500 LOCUS, I should get all values from table_DMRs that fulfill the established condition.
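A sketch of the containment join (Python/sqlite3 with invented sample rows; table and column names come from the question): a single JOIN on the range condition handles every LOCUS at once, so no 173 separate searches are needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_ChrM (id INTEGER PRIMARY KEY, LOCUS TEXT,
                         StartPoint INTEGER, EndPoint INTEGER);
CREATE TABLE table_DMRs (id INTEGER PRIMARY KEY,
                         StartPos INTEGER, EndPos INTEGER);
INSERT INTO table_ChrM VALUES (1, 'At1g30500', 100, 200);
INSERT INTO table_DMRs VALUES (1, 50, 250), (2, 150, 300);
""")
# For each LOCUS, return the DMR rows whose [StartPos, EndPos] interval
# fully contains the locus's [StartPoint, EndPoint] interval.
rows = conn.execute("""
    SELECT c.LOCUS, d.StartPos, d.EndPos
    FROM table_ChrM c
    JOIN table_DMRs d
      ON d.StartPos <= c.StartPoint
     AND d.EndPos   >= c.EndPoint
""").fetchall()
print(rows)  # only DMR 1 fully covers the At1g30500 range
```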

Distributed Database Architecture Using MySQL Replication

Posted: 15 Aug 2013 06:46 PM PDT

I've been tasked with moving our web servers to the cloud for all countries that use our services, for localization.

The issue is that they must share the same data, so I need to set up a database architecture that reads/writes locally but is shared amongst the other servers.

I found an article that is pretty much what I want to do. Example below.

http://www.compassitesinc.com/blogs/distributed-database-architecture-using-mysql-replication/

However, my DBA says this is not possible. So my question is: is this possible? And if so, how?

We are a LAMP shop (Red Hat, Apache, PHP 5, MySQL 5.5).

Thank you.

How to get MAX and MIN in GROUP BY query?

Posted: 15 Aug 2013 12:09 PM PDT

In a simple query

SELECT col1, col2, col3  FROM table1  GROUP BY col1  

each row includes the first value of col2 and col3 for each unique value of col1.

How do I select the MAX and MIN for each value of col1?

Something like:

SELECT col1, minimum of col2, maximum of col2, minimum of col3, maximum of col3  
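The standard MIN() and MAX() aggregate functions answer this directly. A runnable sketch with toy data (Python/sqlite3; the table and column names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (col1 TEXT, col2 INTEGER, col3 INTEGER);
INSERT INTO table1 VALUES ('a', 1, 10), ('a', 5, 2), ('b', 3, 7);
""")
# One row per distinct col1, with the extremes of col2 and col3.
rows = conn.execute("""
    SELECT col1,
           MIN(col2), MAX(col2),
           MIN(col3), MAX(col3)
    FROM table1
    GROUP BY col1
    ORDER BY col1
""").fetchall()
print(rows)  # [('a', 1, 5, 2, 10), ('b', 3, 3, 7, 7)]
```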

No SQL instances show in upgrade setup

Posted: 15 Aug 2013 11:35 AM PDT

I'm trying to upgrade SQL Server 2005 SP4 to SQL Server 2008 R2. However, the upgrade wizard doesn't show me any instances to upgrade. Any ideas? The instance is there just fine; I can connect via SQL Server Management Studio.

Moving MySQL Cold Backup to Another DB Server

Posted: 15 Aug 2013 12:38 PM PDT

I am new to MySQL and want to know whether taking a cold backup from the source and copying it onto a new server with the same version requires any additional configuration settings to be changed.

From the few tutorials I have gone through, no config changes appear to be required.

The steps I am following:

  1. Shut down the source server.
  2. Copy the backup (/var/lib/mysql).
  3. Import the backup to the new server (/var/lib/mysql).
  4. Start the new server.

Thanks in advance

Is it possible that the same INSERT operation's performance will suddenly degrade?

Posted: 15 Aug 2013 11:33 AM PDT

I am working on a code performance test project.

Let me first describe the environment: MS SQL Server, RIA Services.

The automated test runs a big set of test cases at midnight, when there is minimal network traffic and CPU usage on the server.

The tests call the RIA service, and the RIA service makes calls to the database that perform the same operation repeatedly, several times.

The performance metric is measured using a SQL Profiler trace, so we capture all SQL calls made within each test and can associate each of them with its test.

I notice that sometimes the same SQL call in a test will suddenly take much longer.

I.e., an INSERT operation in a test usually takes around 0.807 ms, but sometimes jumps to 200 ms.

I wonder, is this normal behavior, assuming there are no environmental impacts on the test?

Query for View with unique entries from Database with repeated entries

Posted: 15 Aug 2013 02:51 PM PDT

I have a table with columns of Hotel names, city name and country name (of the respective hotels). I need to create a view of unique entries which will contain city name and country name columns. Any suggestions on how to go about it? Eg: There may be 100 hotels with city_name = Miami and country_name = USA. The view should however contain only 1 city_name = Miami and country_name = USA.

I figured I'd need a loop to run through the table, but am still struggling to put together a query.
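No loop is required: SELECT DISTINCT collapses the repeats. A sketch (Python/sqlite3; city_name and country_name are the question's columns, the hotels table name and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hotels (hotel_name TEXT, city_name TEXT, country_name TEXT);
INSERT INTO hotels VALUES
    ('Hotel A', 'Miami', 'USA'),
    ('Hotel B', 'Miami', 'USA'),
    ('Hotel C', 'Paris', 'France');
-- The view keeps exactly one row per (city, country) pair.
CREATE VIEW city_country AS
    SELECT DISTINCT city_name, country_name FROM hotels;
""")
rows = conn.execute("SELECT * FROM city_country ORDER BY city_name").fetchall()
print(rows)  # [('Miami', 'USA'), ('Paris', 'France')]
```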

Thanks in advance!

SQL Server 2012 Express fails at repair install, produces error 5178

Posted: 15 Aug 2013 01:48 PM PDT

My SQL Server Express Service will not start up.

To produce this problem, I basically cloned my old hard drive (Which had SQL Server Express 2012 installed) to a new hard drive (Seagate Momentus XT 750).

EDIT: I am adding info on how I cloned my hard drive as per request of SQLRockStar. I used Seagate DiscWizard. The program was producing errors when trying to clone the HD when using the simple "clone HD" command. So I "cloned" it the following way with the help of SeaGate Tech support:

  • Place the new hard drive (Momentus XT) in the slave slot, the old HD in the master slot.
  • Create a backup image of the old HD and save it on the old HD.
  • Create a Seagate DiscWizard bootable CD.
  • Physically swap both hard drive positions (old is now in the slave slot, new in the master).
  • Boot with the Seagate DiscWizard bootable CD and restore the backup image of the old HD onto the new HD.

Afterwards, I tried starting SQL Server Express 2012 on my Momentus XT and it would not start. So I tried performing a repair installation of SQL Server, and it failed; see the summary below:

Overall summary:    Final result:                  Failed: see details below    Exit code (Decimal):           -2061893608    Start time:                    2013-08-12 15:53:13    End time:                      2013-08-12 16:13:13    Requested action:              Repair    Setup completed with required actions for features.  Troubleshooting information for those features:    Next step for SQLEngine:       Use the following information to resolve the error, and then try the setup process again.    Next step for Replication:     Use the following information to resolve the error, and then try the setup process again.      Machine Properties:    Machine name:                  MATT-LAPTOP    Machine processor count:       8    OS version:                    Windows 7    OS service pack:               Service Pack 1    OS region:                     United States    OS language:                   English (United States)    OS architecture:               x64    Process architecture:          64 Bit    OS clustered:                  No    Product features discovered:    Product              Instance             Instance ID                    Feature                                  Language             Edition              Version         Clustered     SQL Server 2008                                                          Management Tools - Basic                 1033                 Express Edition      10.0.1600.22    No            SQL Server 2012      SQLEXPRESS           MSSQL11.SQLEXPRESS             Database Engine Services                 1033                 Express Edition      11.0.2316.0     No            SQL Server 2012      SQLEXPRESS           MSSQL11.SQLEXPRESS             SQL Server Replication                   1033                 Express Edition      11.0.2316.0     No            SQL Server 2012                                                          Management Tools - Basic                 1033                 Express Edition      11.0.2316.0     No 
           SQL Server 2012                                                          LocalDB                                  1033                 Express Edition      11.0.2318.0     No            Package properties:    Description:                   Microsoft SQL Server 2012     ProductName:                   SQL Server 2012    Type:                          RTM    Version:                       11    SPLevel:                       0    Installation location:         c:\215ca8b216eb992f2f4a\x64\setup\    Installation edition:          Express    User Input Settings:    ACTION:                        Repair    AGTDOMAINGROUP:                <empty>    AGTSVCACCOUNT:                 NT AUTHORITY\NETWORK SERVICE    AGTSVCPASSWORD:                <empty>    AGTSVCSTARTUPTYPE:             Disabled    ASCONFIGDIR:                   Config    ASSVCACCOUNT:                  <empty>    ASSVCPASSWORD:                 <empty>    CLTSTARTUPTYPE:                0    CLTSVCACCOUNT:                 <empty>    CLTSVCPASSWORD:                <empty>    CONFIGURATIONFILE:                 CTLRSTARTUPTYPE:               0    CTLRSVCACCOUNT:                <empty>    CTLRSVCPASSWORD:               <empty>    ENU:                           true    FAILOVERCLUSTERGROUP:          <empty>    FAILOVERCLUSTERNETWORKNAME:    <empty>    FTSVCACCOUNT:                  <empty>    FTSVCPASSWORD:                 <empty>    HELP:                          false    INDICATEPROGRESS:              false    INSTANCENAME:                  SQLEXPRESS    ISSVCACCOUNT:                  NT AUTHORITY\Network Service    ISSVCPASSWORD:                 <empty>    ISSVCSTARTUPTYPE:              Automatic    QUIET:                         false    QUIETSIMPLE:                   false    SQLSVCACCOUNT:                 NT Service\MSSQL$SQLEXPRESS    SQLSVCPASSWORD:                <empty>    UIMODE:                        AutoAdvance    X86:                           false      Configuration file:            
C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130812_155231\ConfigurationFile.ini    Detailed results:    Feature:                       Management Tools - Basic    Status:                        Passed      Feature:                       Database Engine Services    Status:                        Failed: see logs for details    Reason for failure:            An error occurred during the setup process of the feature.    Next Step:                     Use the following information to resolve the error, and then try the setup process again.    Component name:                SQL Server Database Engine Services Instance Features    Component error code:          0x851A0018    Error description:             Could not find the Database Engine startup handle.    Error help link:               http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2316.0&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024      Feature:                       SQL Server Replication    Status:                        Failed: see logs for details    Reason for failure:            An error occurred for a dependency of the feature causing the setup process for the feature to fail.    Next Step:                     Use the following information to resolve the error, and then try the setup process again.    Component name:                SQL Server Database Engine Services Instance Features    Component error code:          0x851A0018    Error description:             Could not find the Database Engine startup handle.    
Error help link:               http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2316.0&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024      Feature:                       SQL Browser    Status:                        Passed      Feature:                       SQL Writer    Status:                        Passed      Feature:                       LocalDB    Status:                        Passed      Feature:                       SQL Client Connectivity    Status:                        Passed      Feature:                       SQL Client Connectivity SDK    Status:                        Passed      Feature:                       Setup Support Files    Status:                        Passed    Rules with failures:    Global rules:    There are no scenario-specific rules.    Rules report file:               C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130812_155231\SystemConfigurationCheck_Report.htm    The following warnings were encountered while configuring settings on your SQL Server.  These resources / settings were missing or invalid so default values were used in recreating the missing resources.  Please review to make sure they don't require further customization for your applications:    Service SID support has been enabled on the service.  Service SID support has been enabled on the service.    The following resources could not be configured during repair without additional user input.  Review the warnings to understand your next steps:    The service failed to start for an unknown reason. For more information, see the event logs and the SQL Server error logs.  

I looked at the error log and it said:

Error: 5178, Severity: 16, State: 1 Cannot use file 'c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\master.mdf' because it was originally formatted with sector size 4096 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.

I read that last error message and am really confused. I'm led to believe that this is a problem with SQL Server; my HD has a 4096 sector size.

UPDATE:

More information: I have discovered that my old hard drive had a 512-byte physical sector size and my new HD is 4096. I hear there are conversion issues between the two sector sizes, but SQL Server is the only program producing errors on my system; I don't understand it.

Example of an optimistic offline lock in PL/SQL?

Posted: 15 Aug 2013 10:31 AM PDT

Can someone show me an example of an optimistic offline lock in an Oracle Stored Procedure? Optimistic Offline Lock
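Not PL/SQL, but the pattern translates directly (in a PL/SQL procedure you would check SQL%ROWCOUNT after the UPDATE and raise when it is zero). A minimal version-column sketch in Python/sqlite3, with an invented accounts table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER);
INSERT INTO accounts VALUES (1, 100, 1);
""")

def update_balance(conn, acct_id, new_balance, expected_version):
    # Optimistic offline lock: the UPDATE only succeeds if nobody
    # bumped the version since we read the row.
    cur = conn.execute("""
        UPDATE accounts SET balance = ?, version = version + 1
        WHERE id = ? AND version = ?
    """, (new_balance, acct_id, expected_version))
    if cur.rowcount == 0:
        raise RuntimeError("row changed since it was read")

# Read the row (remembering its version), then write with the check.
balance, version = conn.execute(
    "SELECT balance, version FROM accounts WHERE id = 1").fetchone()
update_balance(conn, 1, balance + 50, version)   # succeeds, version -> 2
ok = True
try:
    update_balance(conn, 1, 999, version)        # stale version -> rejected
except RuntimeError:
    ok = False
print(ok)  # False: the second writer was rejected
```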

Error creating XTP Table in SQL Server 2014 CTP1 with Powershell

Posted: 15 Aug 2013 09:55 AM PDT

Having a couple of hours to play, I created a SQL Server 2014 box on Azure and followed this SQLServerCentral post to create a filegroup and table with T-SQL. All good. I then decided to take a look at doing it with PowerShell, and I cannot create the table. I think it is to do with my index creation. Can anyone see my mistake, as I am going round in circles now?

Here's the T-SQL

-- Create new database
CREATE DATABASE TestDatabase
GO
--Add MEMORY_OPTIMIZED_DATA filegroup to the database.
ALTER DATABASE TestDatabase
ADD FILEGROUP XTPFileGroup CONTAINS MEMORY_OPTIMIZED_DATA
-- Add a new file to the previously created file group
ALTER DATABASE TestDatabase ADD FILE
(
NAME = N'HekatonFile1',
FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\HekatonFile1')
TO FILEGROUP [HekatonFileGroup]
GO
-- Let's create a new Memory Optimized Table
CREATE TABLE TestTable
(
Col1 INT NOT NULL,
Col2 VARCHAR(100) NOT NULL,
Col3 VARCHAR(100) NOT NULL
CONSTRAINT chk_PrimaryKey PRIMARY KEY NONCLUSTERED HASH (Col1) WITH (BUCKET_COUNT = 1024)
) WITH (MEMORY_OPTIMIZED = ON)
GO

And Here is my Powershell

# To Load SQL Server Management Objects into PowerShell
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
$server = new-Object Microsoft.SqlServer.Management.Smo.Server("(local)")
$dbname = "HekatonTest"
$db = New-Object Microsoft.SqlServer.Management.Smo.Database($server, $dbname)
# Add FileGroups
$FG1Name = "PRIMARY"
$Normalfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, $FG1Name)
$db.FileGroups.Add($Normalfg)
$FG2Name = "MemOpt"
$MemOptFG = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, $FG2Name)
$MemOptFG.FileGroupType = "MemoryOptimizedDataFileGroup"
$db.FileGroups.Add($MemOptFG)
# Create datafiles
$normallogname = "HekatonTest_Data"
$dbdfnormal = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($Normalfg, $normallogname)
$dbnormalfile = $server.Information.MasterDBPath + '\' + $normallogname + '.mdf'
$normalfg.Files.Add($dbdfnormal)
$dbdfnormal.FileName = $dbnormalfile
$dbdfnormal.Size = [double](5.0 * 1024.0)
$dbdfnormal.GrowthType = 'Percent'
$dbdfnormal.Growth = 25.0
$dbdfnormal.IsPrimaryFile = 'True'
$MemOptFilename = "MemOpt_Data"
$MemOptDataFile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($MemOptFG, $MemOptFilename)
$MemOptDataFilePath = $server.Information.MasterDBPath + '\' + $MemOptFilename + '.ndf'
$MemOptFG.Files.Add($MemOptDataFile)
$MemOptDataFile.FileName = $MemOptDataFilePath
$db.Create()
# Create Table
$dbname = "HekatonTest"
$db = $server.databases[$dbname]
$tb = new-object Microsoft.SqlServer.Management.Smo.Table($db, "MemOptTable")
# Add Columns
$col1 = new-object Microsoft.SqlServer.Management.Smo.Column($tb, "Col1", [Microsoft.SqlServer.Management.Smo.DataType]::Int)
$col2 = new-object Microsoft.SqlServer.Management.Smo.Column($tb, "Col2", [Microsoft.SqlServer.Management.Smo.DataType]::varchar(100))
$col3 = new-object Microsoft.SqlServer.Management.Smo.Column($tb, "Col3", [Microsoft.SqlServer.Management.Smo.DataType]::varchar(100))
$tb.Columns.Add($col1)
$tb.Columns.Add($col2)
$tb.Columns.Add($col3)
$C1Name = $col1.Name
# Create Index
$IX = New-Object Microsoft.SqlServer.Management.Smo.Index ($Tb, "PK_PrimaryKeyName")
$IX_col = New-Object Microsoft.SqlServer.Management.Smo.IndexedColumn ($IX, $C1Name)
$IX.IndexedColumns.Add($IX_col)
$IX.IndexKeyType = "DriPrimaryKey"
$IX.IndexType = "HashIndex"
$IX.BucketCount = "1024"
$IX.FileGroup = "PRIMARY"
$IX.IsMemoryOptimized = $true
$Tb.Indexes.Add($IX)
$tb.FileGroup = "MemOpt"
$tb.Durability = "SchemaAndData"
$tb.IsMemoryOptimized = $true
$tb.Create()

I can create the filegroups and the database with PS, and use SSMS to create memory-optimised (XTP) tables; if I take out the index creation then the table creates, but obviously isn't XTP.

I know I can take the T-SQL and put it into Invoke-SQLCMD but I figured you ought to be able to do it with Powershell.

Can anyone point me in the right direction? For interest only; maybe it's not available in the CTP.

Modeling a database for easy counting / reporting

Posted: 15 Aug 2013 08:05 PM PDT

I have an app where the user is known (user_id) and can perform several actions (action_id). Every time he performs an action I need to record it for reports/analytics. I guess this is similar to other analytics solutions and their DB designs.

Once I have the data, given a time window (minute resolution) I need to count, for each user (all or some), the number of times he performed actions and which actions he performed (sum all data grouped by action_id).

Some assumptions:

  • The number of users is ~1000.
  • Action types number ~100.
  • Actions can happen 24/7.
  • Time windows can span from minutes to days and are arbitrary.
  • A time window can't go back more than 30 days.

I'm considering SQL, NoSQL and RRD to save the data.

I put RRD here because it's easy to implement inserting the data via statsd + Graphite. I'm concerned that if I take this approach, the querying (although provided by Graphite) will not be indexed and will probably have to scan all the data whenever I ask for a window/user (no indexing). Another problem is that when querying all the data, all users' info will be needed, resulting in reading all the files concurrently, which I'm not sure is a good thing.

SQL - very easy implementation when inserting the data and querying. Easy to index, order and group by. However, I'm not sure it holds up if I'm anticipating high traffic. Also, I'm not sure how efficient SQL's COUNT() is after a GROUP BY (I haven't used SQL in the last few years). Can it offer parallel computation?

NoSQL - is there a solution out there that is the right fit for this type of scenario (perhaps a map/reduce algorithm for fast generation of counts in a time window)?

Thanks for helping me model this.
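For the SQL option, a minimal sketch of the insert/count shape being discussed (Python/sqlite3; the table and column names are hypothetical): one event row per action, a composite index leading on the timestamp, and a windowed GROUP BY.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_actions (
    user_id   INTEGER,
    action_id INTEGER,
    ts        INTEGER          -- unix time of the action
);
-- Leading on ts lets the 30-day window prune rows via the index.
CREATE INDEX idx_actions_ts ON user_actions (ts, user_id, action_id);
INSERT INTO user_actions VALUES
    (1, 10, 100), (1, 10, 150), (1, 20, 180), (2, 10, 500);
""")
# Per-user, per-action counts inside the requested window.
rows = conn.execute("""
    SELECT user_id, action_id, COUNT(*) AS n
    FROM user_actions
    WHERE ts BETWEEN 0 AND 200
    GROUP BY user_id, action_id
    ORDER BY user_id, action_id
""").fetchall()
print(rows)  # [(1, 10, 2), (1, 20, 1)]
```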

Putting a Select statement in a transaction

Posted: 15 Aug 2013 01:05 PM PDT

What is the difference between these two queries:

start transaction;
select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;
commit;

And without transaction:

select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;

What is the effect of having a SELECT inside a transaction?

If DELETE FROM orders WHERE id=1 were called from another session right after the SELECT in both cases, when would it be processed?
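A small demonstration of the blocking question (Python/sqlite3; SQLite locks the whole database rather than individual rows like InnoDB, so this only illustrates the principle): the SELECT by itself blocks nobody, but once the transaction has written, another session's DELETE cannot proceed until COMMIT.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "orders.db")
c1 = sqlite3.connect(path, isolation_level=None)   # autocommit; we issue BEGIN/COMMIT ourselves
c1.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, username TEXT);
INSERT INTO orders VALUES (1, 'Alice');
""")

c2 = sqlite3.connect(path, timeout=0)   # second "session"; fail fast instead of waiting

c1.execute("BEGIN")
c1.execute("SELECT * FROM orders WHERE id = 1")                 # takes no write lock
c1.execute("UPDATE orders SET username = 'John' WHERE id = 1")  # now holds the write lock
try:
    c2.execute("DELETE FROM orders WHERE id = 1")   # must wait for c1's COMMIT
    blocked = False
except sqlite3.OperationalError:                    # "database is locked"
    blocked = True
c1.execute("COMMIT")
print(blocked)  # True: the DELETE had to wait for the commit
```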

INT or TIMESTAMP for the change time column in a logging table?

Posted: 15 Aug 2013 02:05 PM PDT

We're having an argument over what to use for storing change date in a new log table for our site.

One side says, use INT for the timestamp, storing the UNIX timestamp. We can convert the datetime query in PHP to a UNIX integer and query based on that (WHERE log_date BETWEEN [convert 12/25/2012 to timestamp] AND [convert 12/31/2012 to timestamp]).

The other side says, use TIMESTAMP for this; we can then use the native tools to query. (WHERE log_date BETWEEN "2012-12-25" AND "2012-12-31").

Since this is a logging table, we're not concerned about the 1970-2038 range of TIMESTAMP.

Which makes more sense?
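Whichever side wins, the two styles can be bridged at the query boundary. A sketch of the INT approach queried with readable dates (Python/sqlite3, with sqlite's strftime('%s', …) standing in for MySQL's UNIX_TIMESTAMP(); the log table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log (id INTEGER PRIMARY KEY, log_date INTEGER);  -- unix seconds
INSERT INTO log (log_date) VALUES
    (strftime('%s', '2012-12-26 10:00:00')),
    (strftime('%s', '2013-01-05 10:00:00'));
""")
# The INT column is still queryable with human-readable date literals by
# converting the boundaries, not every stored row.
rows = conn.execute("""
    SELECT id FROM log
    WHERE log_date BETWEEN CAST(strftime('%s', '2012-12-25') AS INTEGER)
                       AND CAST(strftime('%s', '2012-12-31 23:59:59') AS INTEGER)
""").fetchall()
print(rows)  # [(1,)]
```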

Problem compiling a view that references a table in another schema: insufficient privileges

Posted: 15 Aug 2013 11:05 AM PDT

Oracle 11g R2, logged on as SYS / AS SYSDBA.

When I try to compile or create a view that references local schema tables, it works fine.

The problem occurs when I try to compile the same view referencing a table in another schema (schema.table) in my query.

Oracle throws the exception ORA-01031: insufficient privileges.

Remember, I am using the SYS account (SYSDBA).

Help creating this query

Posted: 15 Aug 2013 09:05 PM PDT

I'd like to integrate the following query:

SELECT parent.id, COUNT(child.id) AS child_count
FROM messages parent
INNER JOIN messages child ON child.parent_id = parent.id
WHERE parent.parent_id = 0
GROUP BY parent.id;

INTO this query:

SELECT m.id, m.seen, m.uid, m.hash, m.date_created, m.subject, m.textplain, m.texthtml,
       CONCAT_WS('@', a.localpart, a.domain) AS address
FROM messages m
LEFT JOIN message_address_fields maf ON maf.message_id = m.id
LEFT JOIN addresses a ON a.id = maf.address_id
WHERE maf.field_id = 4 AND m.user_id = 1

Here's a SQL Fiddle: http://sqlfiddle.com/#!2/bef27/1

Here's my schema:

CREATE TABLE IF NOT EXISTS `addresses` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `user_id` int(10) NOT NULL,
  `name` text NOT NULL,
  `localpart` text NOT NULL,
  `domain` text NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `fields` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `name` text,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=12;

CREATE TABLE IF NOT EXISTS `messages` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `user_id` int(10) NOT NULL,
  `account_folder_id` int(10) NOT NULL,
  `hash` varchar(255) NOT NULL,
  `subject` varchar(255) NOT NULL,
  `texthtml` text NOT NULL,
  `textplain` text NOT NULL,
  `uid` int(10) NOT NULL,
  `seen` tinyint(1) NOT NULL,
  `flagged` tinyint(1) NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `message_address_fields` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `message_id` int(10) NOT NULL,
  `field_id` int(10) NOT NULL,
  `address_id` int(10) NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

Thanks for the help!

QUESTION: This query limits the data to messages that have an address with a field_id of 4. How would I make the correlated address with a field_id of 4 optional?
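Moving the field_id = 4 test from WHERE into the LEFT JOIN's ON clause makes the address optional; the WHERE version silently turns the outer join into an inner one. A trimmed sketch (Python/sqlite3, with a reduced column set and || '@' || standing in for MySQL's CONCAT_WS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INTEGER, subject TEXT);
CREATE TABLE addresses (id INTEGER PRIMARY KEY, localpart TEXT, domain TEXT);
CREATE TABLE message_address_fields (
    message_id INTEGER, field_id INTEGER, address_id INTEGER);
INSERT INTO messages VALUES (1, 1, 'has from-address'), (2, 1, 'no address');
INSERT INTO addresses VALUES (1, 'john', 'example.com');
INSERT INTO message_address_fields VALUES (1, 4, 1);
""")
# field_id = 4 lives in the ON clause, so messages without such an
# address still come back, just with a NULL address.
rows = conn.execute("""
    SELECT m.id, m.subject, a.localpart || '@' || a.domain AS address
    FROM messages m
    LEFT JOIN message_address_fields maf
           ON maf.message_id = m.id AND maf.field_id = 4
    LEFT JOIN addresses a ON a.id = maf.address_id
    WHERE m.user_id = 1
    ORDER BY m.id
""").fetchall()
print(rows)  # message 2 survives with a NULL address
```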

Representation in MS Access

Posted: 15 Aug 2013 07:05 PM PDT

I have a database in Microsoft Access. I want to know how to look up a single datum from a reference table given a dynamic set of values. Here is a representation of what I mean:

I have the following tables:

Points for Pushups(m):

Reps    Age 17-21    Age 22-26    Age 27-31
1       6            7            8
2       7            9            9
3       9            11           12

Fitness Tests:

Name    Reps    Test Date
Bob     2       1 Jan 2009
Jill    1       5 May 2010

People:

Name    DOB
Bob     1 Jan 1987
Jill    2 Feb 1985
Sal     3 Mar 1991

I want the query to use People.DOB and the test date to find the age the person was during the test. I then want the query to use this value to determine which column to look in, and the value from Reps to determine which row to look in, coming back with the single value and naming it Points.

For example, I want Bob to show:

Query:

Name    DOB           Age At Test    Reps    Points
Bob     1 Jan 1987    22             2       9

Does anyone know how to do the dynamic reference part?

I know how to make the query and I know how to get the age; I just don't know how to use the values as columns in the reference table. I've seen it done, but long ago, and never looked into it.
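One way to pick the column dynamically is a CASE expression keyed on the computed age (in Access, the same choice can be written with Switch() or IIf() in the query grid). A sketch with Python/sqlite3 and hypothetical column names for the age brackets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pushup_points (
    Reps INTEGER, age_17_21 INTEGER, age_22_26 INTEGER, age_27_31 INTEGER);
INSERT INTO pushup_points VALUES (1, 6, 7, 8), (2, 7, 9, 9), (3, 9, 11, 12);
""")
# Age would come from DOB and the test date; hard-coded here for the demo.
age, reps = 22, 2
# CASE picks the bracket column; Reps picks the row.
row = conn.execute("""
    SELECT CASE
             WHEN :age BETWEEN 17 AND 21 THEN age_17_21
             WHEN :age BETWEEN 22 AND 26 THEN age_22_26
             WHEN :age BETWEEN 27 AND 31 THEN age_27_31
           END AS Points
    FROM pushup_points
    WHERE Reps = :reps
""", {"age": age, "reps": reps}).fetchone()
print(row)  # (9,)
```

A longer-term fix is to normalize the bracket table into (Reps, AgeMin, AgeMax, Points) rows, after which a plain join replaces the CASE.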

"Cannot add or update a child row" when deleting a record?

Posted: 15 Aug 2013 12:48 PM PDT

I have the two following tables:

survey_main
-----------
id
name

answers_main
------------
id
survey_id

I have the following foreign key constraint on the answers_main table:

CONSTRAINT `fk_answers_main_survey` FOREIGN KEY (`survey_id`) REFERENCES `survey_main` (`id`) ON DELETE CASCADE

If I try to delete a record from survey_main that has child records in the answers_main table I get the following error:

ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (planet.answers_main, CONSTRAINT fk_answers_main_survey FOREIGN KEY (survey_id) REFERENCES survey_main (id) ON DELETE CASCADE)

I understand what the error is saying, but shouldn't the fact that I have cascading deletes make it so this error would never be thrown? What am I missing here?
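For comparison, here is a minimal cascade that behaves as expected (Python/sqlite3; note SQLite needs PRAGMA foreign_keys per connection), which suggests the error comes from something beyond the constraint definition itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs per-connection
conn.executescript("""
CREATE TABLE survey_main (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE answers_main (
    id INTEGER PRIMARY KEY,
    survey_id INTEGER REFERENCES survey_main(id) ON DELETE CASCADE);
INSERT INTO survey_main VALUES (1750, 's');
INSERT INTO answers_main VALUES (1, 1750);
""")
# With ON DELETE CASCADE, deleting the parent silently removes the children.
conn.execute("DELETE FROM survey_main WHERE id = 1750")
n = conn.execute("SELECT COUNT(*) FROM answers_main").fetchone()[0]
print(n)  # 0: the child row was cascaded away
```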

UPDATE:

Here is the command I am running and the error it throws, copied directly from the command line:

mysql> delete from survey_main where id = 1750;
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (`planet`.`answers_main`, CONSTRAINT `fk_answers_main_survey` FOREIGN KEY (`survey_id`) REFERENCES `survey_main` (`id`) ON DELETE CASCADE)

Here is what is showing under last foreign key error when I do SHOW ENGINE INNODB STATUS:

130627  8:43:11 Transaction:  TRANSACTION E7DB6, ACTIVE 0 sec updating or deleting  mysql tables in use 1, locked 1  27 lock struct(s), heap size 3112, 19 row lock(s), undo log entries 6  MySQL thread id 1007, OS thread handle 0x10e82a000, query id 56615 localhost root updating  delete from survey_main where id = 1750  Foreign key constraint fails for table `planet`.`answers_main`:  ,    CONSTRAINT `fk_answers_main_survey` FOREIGN KEY (`survey_id`) REFERENCES `survey_main` (`id`) ON DELETE CASCADE  Trying to add in child table, in index `survey_id` tuple:  DATA TUPLE: 6 fields;   0: len 4; hex 000006f0; asc     ;;   1: SQL NULL;   2: len 4; hex 00002fe2; asc   / ;;   3: len 4; hex 00004a0d; asc   J ;;   4: len 4; hex 00000c07; asc     ;;   5: len 4; hex 0004090c; asc     ;;    But in parent table `planet`.`survey_main`, in index `PRIMARY`,  the closest match we can find is record:  PHYSICAL RECORD: n_fields 22; compact format; info bits 32   0: len 4; hex 000006f0; asc     ;;   1: len 6; hex 0000000e7db6; asc     } ;;   2: len 7; hex 020000027a08bc; asc     z  ;;   3: len 17; hex 53757276657920666f7220746573747321; asc Survey for tests!;;   4: len 0; hex ; asc ;;   5: len 1; hex 01; asc  ;;   6: len 2; hex 0001; asc   ;;   7: len 4; hex 00000000; asc     ;;   8: len 4; hex 00000000; asc     ;;   9: len 4; hex 000006ba; asc     ;;   10: len 8; hex 8000000000000000; asc         ;;   11: len 1; hex 80; asc  ;;   12: SQL NULL;   13: len 8; hex 8000000000000000; asc         ;;   14: len 4; hex 00000c07; asc     ;;   15: len 8; hex 8000124f06e1707a; asc    O  pz;;   16: len 4; hex 00000c07; asc     ;;   17: len 8; hex 8000124f06e1707a; asc    O  pz;;   18: len 1; hex 81; asc  ;;   19: len 8; hex 0000013f864b33fb; asc    ? K3 ;;   20: len 8; hex 0000000000000000; asc         ;;   21: SQL NULL;  

UPDATE #2:

Below are the create table statements for both tables.

mysql> show create table survey_main\G
*************************** 1. row ***************************
       Table: survey_main
Create Table: CREATE TABLE `survey_main` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Primary ID',
  `title` varchar(150) NOT NULL,
  `description` mediumtext NOT NULL,
  `type` tinyint(3) unsigned NOT NULL DEFAULT '1',
  `question_amt` smallint(5) unsigned NOT NULL,
  `parent_survey_id` int(10) unsigned NOT NULL DEFAULT '0',
  `taken_amt` int(10) unsigned NOT NULL DEFAULT '0',
  `latest_campaign_id` int(10) unsigned DEFAULT NULL,
  `last_taken_date` datetime NOT NULL,
  `user_did_finish` tinyint(1) NOT NULL DEFAULT '0',
  `last_user` int(10) unsigned DEFAULT NULL,
  `last_date` datetime NOT NULL,
  `create_user` int(10) unsigned DEFAULT NULL,
  `create_date` datetime NOT NULL,
  `modify_user` int(10) unsigned DEFAULT NULL,
  `modify_date` datetime NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `utccdate` bigint(20) unsigned NOT NULL,
  `utclasttakendate` bigint(20) unsigned NOT NULL,
  `theme_id` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `create_user` (`create_user`),
  KEY `title` (`title`),
  KEY `latest_campaign_id` (`latest_campaign_id`),
  KEY `fk_survey_main_lastUser` (`last_user`),
  KEY `fk_survey_main_modifyUser` (`modify_user`),
  KEY `fk_survey_main_theme` (`theme_id`),
  CONSTRAINT `fk_survey_main_theme` FOREIGN KEY (`theme_id`) REFERENCES `themes` (`id`) ON DELETE SET NULL,
  CONSTRAINT `fk_survey_main_campaign` FOREIGN KEY (`latest_campaign_id`) REFERENCES `survey_campaigns` (`id`) ON DELETE SET NULL,
  CONSTRAINT `fk_survey_main_createUser` FOREIGN KEY (`create_user`) REFERENCES `users` (`id`) ON DELETE CASCADE,
  CONSTRAINT `fk_survey_main_lastUser` FOREIGN KEY (`last_user`) REFERENCES `users` (`id`) ON DELETE SET NULL,
  CONSTRAINT `fk_survey_main_modifyUser` FOREIGN KEY (`modify_user`) REFERENCES `users` (`id`) ON DELETE SET NULL
) ENGINE=InnoDB AUTO_INCREMENT=1790 DEFAULT CHARSET=utf8 COMMENT='Main Table For Designed Surveys'
1 row in set (0.00 sec)

mysql> show create table answers_main\G
*************************** 1. row ***************************
       Table: answers_main
Create Table: CREATE TABLE `answers_main` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `survey_id` int(10) unsigned DEFAULT NULL,
  `campaign_id` int(10) unsigned DEFAULT NULL,
  `question_id` int(10) unsigned DEFAULT NULL,
  `participant_id` int(10) unsigned DEFAULT NULL,
  `user_id` int(10) unsigned DEFAULT NULL,
  `choice` int(10) unsigned DEFAULT NULL,
  `scale_from` smallint(6) NOT NULL DEFAULT '0',
  `scale_to` smallint(6) NOT NULL DEFAULT '0',
  `date_fillin` datetime NOT NULL,
  `addl_fillin` varchar(150) NOT NULL,
  `left_blank` tinyint(1) NOT NULL DEFAULT '0',
  `answer_date` datetime NOT NULL,
  `utcanswerdate` bigint(20) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  KEY `survey_id` (`survey_id`,`campaign_id`,`question_id`,`participant_id`,`user_id`),
  KEY `fk_answers_main_campaign` (`campaign_id`),
  KEY `fk_answers_main_question` (`question_id`),
  KEY `fk_answers_main_participant` (`participant_id`),
  KEY `fk_answers_main_user` (`user_id`),
  CONSTRAINT `fk_answers_main_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE,
  CONSTRAINT `fk_answers_main_campaign` FOREIGN KEY (`campaign_id`) REFERENCES `survey_campaigns` (`id`) ON DELETE SET NULL,
  CONSTRAINT `fk_answers_main_participant` FOREIGN KEY (`participant_id`) REFERENCES `survey_participants` (`id`) ON DELETE CASCADE,
  CONSTRAINT `fk_answers_main_question` FOREIGN KEY (`question_id`) REFERENCES `survey_questions` (`id`) ON DELETE CASCADE,
  CONSTRAINT `fk_answers_main_survey` FOREIGN KEY (`survey_id`) REFERENCES `survey_main` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=264471 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

What is the best way to transport database fields from one database to another?

Posted: 15 Aug 2013 04:05 PM PDT

I have two databases. The table names and field names differ, and the number of fields is unequal. I need to transfer all the fields from one database to the other. I can import the data in CSV format.

I could write a PHP script to accomplish this.

But is there any other way to do this easily, without any script?
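For a one-off move inside the same MySQL server, a cross-database INSERT … SELECT with an explicit column list avoids scripting altogether. The sketch below uses Python's bundled sqlite3 (with invented table and column names) only to illustrate the shape of that statement:

```python
import sqlite3

# Invented table/column names; two attached databases stand in for the
# two MySQL databases.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE old_customers (cust_name TEXT, cust_city TEXT, cust_phone TEXT)")
con.execute("INSERT INTO old_customers VALUES ('Alice', 'Pune', '555-0100')")

con.execute("ATTACH DATABASE ':memory:' AS dst")
con.execute("CREATE TABLE dst.new_customers (name TEXT, city TEXT)")  # fewer, renamed fields

# The same shape works in MySQL without any script:
#   INSERT INTO db2.new_customers (name, city)
#   SELECT cust_name, cust_city FROM db1.old_customers;
con.execute("""
    INSERT INTO dst.new_customers (name, city)
    SELECT cust_name, cust_city FROM old_customers
""")
rows = con.execute("SELECT name, city FROM dst.new_customers").fetchall()
print(rows)  # [('Alice', 'Pune')]
```

With unequal field counts you simply list only the columns that map; anything else can be filled with literals or defaults in the SELECT list.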

How to setup SQL active/active cluster to achieve Blue / Green instance switching?

Posted: 15 Aug 2013 04:45 PM PDT

I am wondering if anyone has ever used a multi-instance cluster (née 'Active/Active') to achieve blue/green (or A/B) deployment scenarios, and what the best way of configuring it is (using SQL Server 2012 / Windows Server 2008 R2)?

To be specific, the scenario I want to achieve is to be able to switch between which cluster instance is being connected to by clients without either the clients or the SQL instances knowing (I stress I'm not talking about node failover here). I'm envisaging that the best way to achieve this is something like:

  • Setup 2 node cluster, each of which has InstanceA and InstanceB instances
  • Configure both InstanceA and InstanceB to listen as if they were the default instance on their cluster address (given that each instance on a cluster has its own unique IP)
  • Use DNS to switch which virtual address clients actually connect to.

This should hopefully enable me to do the following:

  • Deploy database to instance A, and have clients connect to it via DNS alias as if default instance
  • Deploy new version of database to instance B
  • Vet new version of database (connecting explicitly to cluster\InstanceB)
  • Redirect DNS alias to point to instance B's cluster name
  • Clients now connect to InstanceB without realising anything's changed
  • Both instances can still failover to the other node in a true outage

Joining the dots, it seems like this should be possible:

... but I've never seen a full example. Has anyone done it? Will what's proposed above work? What have I missed?

copy package from schema B to schema C

Posted: 15 Aug 2013 10:05 AM PDT

Here is my situation: I am using Oracle 11g and am connected as a user, let's say schema1, which has extensive privileges. I want to create a script that will copy Package1 (and its body) from schema2 to schema3.

The script will be executed from schema1.

PS: I have already looked for a solution, and I am not interested in export/import or in other tools such as Toad, SQL Developer, etc.
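One script-friendly route from schema1 is to pull the source with Oracle's DBMS_METADATA.GET_DDL('PACKAGE', 'PACKAGE1', 'SCHEMA2'), rewrite the schema qualifier, and execute the result. The rewrite step is plain string work; a minimal sketch in Python, with the sample DDL text invented for illustration:

```python
# Sketch of the qualifier rewrite applied to DDL returned by
# DBMS_METADATA.GET_DDL; the sample DDL below is an invented example.
def retarget_ddl(ddl: str, old_schema: str, new_schema: str) -> str:
    # DBMS_METADATA emits fully qualified, double-quoted object names,
    # so replacing the quoted prefix is enough for a simple package.
    return ddl.replace(f'"{old_schema}".', f'"{new_schema}".')

ddl = ('CREATE OR REPLACE PACKAGE "SCHEMA2"."PACKAGE1" AS\n'
       '  PROCEDURE do_work;\n'
       'END PACKAGE1;')
print(retarget_ddl(ddl, "SCHEMA2", "SCHEMA3"))
```

Executing the rewritten DDL from schema1 needs the CREATE ANY PROCEDURE privilege, and the same rewrite covers the body if you also fetch GET_DDL('PACKAGE_BODY', …).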

Time series data for ad platform

Posted: 15 Aug 2013 12:05 PM PDT

I am trying to figure out how to store time series data for an ad platform I am working on.

Basically I want to know some strategies/solutions for storing billions of rows of data so that I can easily search it (about 6-8 indexes on the table) and get fast counts based on queries.

I tried MySQL with the TokuDB engine, and it seems very fast, but it becomes extremely slow on COUNT queries once the table reaches about 5-8 million rows.

I looked at some NoSQL alternatives, but since I want to be able to search this data, they are probably not the best solution. I tried DynamoDB, but I would have had to store the data in many places in order to account for all the ways it can be searched.

What I am storing is one row in the database for each click on an ad that occurs. This table will grow very fast, especially once the site gets large.

Another solution would be to separate this data per advertiser. This means each advertiser will have their own table where all their data goes into. This means it will be much smaller and the COUNT queries will be much faster. I can even split it up by advertiser and month.

My goal is to give an advertiser the ability to search and display in a paginated way all their clicks. They should be able to get data between a time period and filter by about 5-8 other indexes if they want to.
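One common pattern for the fast-count requirement is to maintain a small rollup table alongside the raw click rows, so counts never scan the big table. A minimal sketch with an invented schema, using Python's bundled sqlite3 in place of MySQL:

```python
import sqlite3

# Invented schema: one raw row per click, plus a per-advertiser,
# per-day rollup kept in step on every insert.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clicks (advertiser_id INT, clicked_on TEXT)")
db.execute("""CREATE TABLE clicks_rollup (
                  advertiser_id INT, day TEXT, clicks INT,
                  PRIMARY KEY (advertiser_id, day))""")

def record_click(advertiser_id, day):
    db.execute("INSERT INTO clicks VALUES (?, ?)", (advertiser_id, day))
    # In MySQL this update-or-insert pair collapses into a single
    # INSERT ... ON DUPLICATE KEY UPDATE.
    cur = db.execute("""UPDATE clicks_rollup SET clicks = clicks + 1
                        WHERE advertiser_id = ? AND day = ?""", (advertiser_id, day))
    if cur.rowcount == 0:
        db.execute("INSERT INTO clicks_rollup VALUES (?, ?, 1)", (advertiser_id, day))

for _ in range(3):
    record_click(42, "2013-08-15")
record_click(42, "2013-08-16")

# A date-range count reads the tiny rollup, not billions of raw rows.
total = db.execute("""SELECT SUM(clicks) FROM clicks_rollup
                      WHERE advertiser_id = 42
                        AND day BETWEEN '2013-08-15' AND '2013-08-16'""").fetchone()[0]
print(total)  # 4
```

The raw table stays available for the paginated drill-down, and can still be partitioned by advertiser and month as suggested above.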

If an account has REQUIRE SUBJECT, does it still need a password?

Posted: 15 Aug 2013 06:05 PM PDT

I'm in the process of setting up SSL-secured replication between two servers. Each server has its own public/private keypair, and the CA cert is just the concatenation of the two public certs, like this answer.

Now I'm updating the replication account with REQUIRE SUBJECT "exact subject of the client"

Is there any practical value to also having a password on the replication account (IDENTIFIED BY "secret")?

MySQL users corrupt

Posted: 15 Aug 2013 05:05 PM PDT

I have a strange situation here:

From time to time I cannot log in with any of my MySQL users. I cannot even take a mysqldump.

So I started digging through the MySQL files and found that users.MYD and users.MYI had been modified at the time the login problem occurred. The only way to get everything working again is to restore the users.* files from a time when the system was running okay.

I searched for the problem and found that there was a similar bug in MySQL, but that was in the older versions (4.x); I'm running 5.5.

Any ideas? Thanks!

I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why?

Posted: 15 Aug 2013 05:03 PM PDT

I apologize in advance if this is a repeat. I'm not really sure how to properly ask for what I'm running into.

I have a large InnoDB table set up. I have 3 sources that all INSERT concurrently at a moderately high volume (300-500 INSERT/s).

I have a Perl script running on a cron job every 5 minutes that aggregates data since the last time it ran. It keeps track of the last record it processed by storing the auto_increment value of the last row in a metadata table.

The aggregates are always off, but not by much. This has been frustrating because it's just plain simple math (SELECT account_id, SUM(sold) FROM … GROUP BY account_id). I suspect it has something to do with the transaction isolation level (REPEATABLE READ).

I recently found FlexViews which looks very cool and might address my problem. But I was wondering if anyone could:

  • Tell me what is going on here. Why is this happening?
  • What can be done to produce 100% accurate aggregates

I'm like 95% sure the auto_increment thing along with transaction commit ordering is hosing me up, but I don't understand enough about DBs to really ask the right question.

Oh, one thing to note: I've already checked the field types. This issue isn't the result of rounding.
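That suspicion matches a known race: auto_increment values are assigned at INSERT time, but a row only becomes visible at COMMIT, so a transaction holding a low id can commit after the cron job has already advanced its watermark past it. A pure-Python simulation (the ids and timings are invented):

```python
# (id, committed?) -- id 101 was inserted first, but its transaction is
# still open when the cron job makes its pass.
rows = [(101, False), (102, True), (103, True)]

def aggregate_since(rows, watermark):
    """Naive cron pass: take every committed row above the watermark,
    then advance the watermark to the highest id seen."""
    visible = [rid for rid, committed in rows if committed and rid > watermark]
    return visible, max(visible, default=watermark)

seen, watermark = aggregate_since(rows, 100)
print(seen, watermark)   # [102, 103] 103 -- id 101 is not visible yet

rows[0] = (101, True)    # id 101 finally commits...
seen, watermark = aggregate_since(rows, watermark)
print(seen)              # [] -- 101 sits below the watermark and is lost
```

Two common fixes: only aggregate rows below the smallest id any still-open transaction could hold (i.e. lag the watermark by a safety window), or make the job idempotent and re-aggregate a trailing window on every run.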

Log Shipping - RESTORE WITH STANDBY - on SQL Server 2012 keeps breaking

Posted: 15 Aug 2013 04:58 PM PDT

We are using log shipping with RESTORE WITH STANDBY on SQL Server 2012 in order to make the database available in read-only mode for reporting purposes. However, the log shipping setup keeps breaking after restoring one or two log backups. Log shipping only breaks when running with RESTORE WITH STANDBY; RESTORE WITH NORECOVERY does not cause any problems.

My only intuition is that the primary database is not very dynamic. Could the lack of transactions be causing issues with the RESTORE process?

Any ideas, known fixes?

I had it working for a few days by running a regular job that did heavy updates on two tables. When that job stopped running, the log shipping setup quickly failed, unable to process the .trn file. I reset log shipping and tried to keep it running with just a small update, changing the value of one column of one record in a table; however, it still failed.

Thanks for all your responses.

PS: An excerpt from our log

  02/25/2013 13:00:00,LSRestore_DBDB01-A_BulldogDB,In Progress,1,DBREPORTS,LSRestore_DBDB01-A_BulldogDB,Log shipping restore log job step.,,
  2013-02-25 13:00:12.31  *** Error: Could not apply log backup file '\\dbsan01\DBBackups\LSBackup_BulldogDB\BulldogDB_20130225180000.trn' to secondary database 'BulldogDB'.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.31  *** Error: An error occurred while processing the log for database 'BulldogDB'. If possible restore from backup. If a backup is not available it might be necessary to rebuild the log.
  An error occurred during recovery preventing the database 'BulldogDB' (8:0) from restarting. Diagnose the recovery errors and fix them or restore from a known good backup. If errors are not corrected or expected contact Technical Support.
  RESTORE LOG is terminating abnormally.
  Processed 0 pages for database 'BulldogDB' file 'BulldogDB' on file 1.
  Processed 1 pages for database 'BulldogDB' file 'BulldogDB_log' on file 1.(.Net SqlClient Data Provider) ***
  2013-02-25 13:00:12.32  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.32  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
  2013-02-25 13:00:12.32  Skipping log backup file '\\dbsan01\DBBackups\LSBackup_BulldogDB\BulldogDB_20130225180000.trn' for secondary database 'BulldogDB' because the file could not be verified.
  2013-02-25 13:00:12.32  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.32  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
  2013-02-25 13:00:12.33  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.33  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
  2013-02-25 13:00:12.33  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.33  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
  2013-02-25 13:00:12.33  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.33  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
  2013-02-25 13:00:12.33  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.33  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
  2013-02-25 13:00:12.33  Deleting old log backup files. Primary Database: 'BulldogDB'
  2013-02-25 13:00:12.33  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
  2013-02-25 13:00:12.33  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***,00:00:12,0,0,,,,0

Coding an accounting database from scratch?

Posted: 15 Aug 2013 04:29 PM PDT

My new medical practice is picking up fast, and my homemade LibreOffice Calc spreadsheet is quickly becoming too cumbersome for tracking patient visits, accounts receivable, overhead reconciliation, etc. Most practices use some sort of practice-management software to handle accounting as well as scheduling, paying bills, submitting insurance claims, and so on, but my budget can't stretch to overpriced software. Besides, I like to do things my own way, and I like to support open source software. (I've already determined that I don't need this function to come from my electronic charts.)

Google searches didn't yield any templates, and all of the accounting programs I found were built for traditional businesses, with purchasing, warehousing, human resource management, etc., that aren't adaptable to my needs. You wouldn't think I would be the first physician to attempt something like this, but the only thing I could find is a design plan here: http://www.databaseanswers.org/data_models/

So now I'm considering importing my spreadsheet into LibreOffice Base and slowly developing a custom accounting program for myself. However, I have no experience with databases, and I don't want to spend a large amount of time on this if the amount or complexity of coding is beyond what I am willing to sacrifice from my study time. After browsing through textbooks at the local B&N, the first third of most books on MySQL, FileMaker Pro, and Access seems applicable to what is needed (complex mathematical modeling and data analysis would be overkill), but I'm still hesitant about heading into this endeavor if the workload is larger than I care to handle.

So, is it doable for someone with only distant programming experience (QBasic, Visual Basic) and basic spreadsheet knowledge to code a simple custom accounting database? If so, is LibreOffice Base sufficient? What program or textbook would you recommend?
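For scale: a spreadsheet like this typically maps to a handful of related tables plus a few queries, which is exactly the material in that first third of those textbooks. A toy sketch (table and column names invented), using Python's bundled sqlite3 just to show the size of the core:

```python
import sqlite3

# Invented minimal schema: patients, visits (charges), payments;
# accounts receivable is derived on demand, never stored.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE visits   (id INTEGER PRIMARY KEY,
                           patient_id INTEGER REFERENCES patients(id),
                           visit_date TEXT, charge REAL);
    CREATE TABLE payments (id INTEGER PRIMARY KEY,
                           visit_id INTEGER REFERENCES visits(id),
                           paid_date TEXT, amount REAL);
""")
db.execute("INSERT INTO patients VALUES (1, 'J. Doe')")
db.execute("INSERT INTO visits   VALUES (1, 1, '2013-08-01', 120.0)")
db.execute("INSERT INTO payments VALUES (1, 1, '2013-08-10', 50.0)")

# Accounts receivable = total charges minus total payments.
ar = db.execute("""
    SELECT (SELECT IFNULL(SUM(charge), 0) FROM visits)
         - (SELECT IFNULL(SUM(amount), 0) FROM payments)
""").fetchone()[0]
print(ar)  # 70.0
```

LibreOffice Base (or any desktop database) gives you the tables, forms, and queries for this shape of schema without much code beyond the SQL itself.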

Thank you!

Converting dbo file from windows to linux

Posted: 15 Aug 2013 09:05 AM PDT

I have a .dbo file which was created on Windows. This file loads successfully into the MySQL database on Windows. I need to load the .dbo file into the MySQL/MariaDB database on Linux. How do I convert the file that was created on Windows for use on Linux?
