Sunday, August 18, 2013

[how to] Subselect dependent on first select value



Subselect dependent on first select value

Posted: 18 Aug 2013 05:58 PM PDT

I wish to do a select of all data matching a particular id. I would also like to return a selection of all data matching the alt_id field in the first select.

This example works fine for the main query of two tables and their data based on the contact_id I feed it, in this case '1':

SELECT e_contacts.*, e_credentials.*
FROM e_contacts, e_credentials
WHERE e_contacts.contact_id = '1'
  AND e_credentials.contact_id = e_contacts.contact_id
ORDER BY e_contacts.contact_id

However, one of the fields in e_contacts is called alt_contact_id (wife or friend); when it holds a value, I need to get both contacts' data.

So, I would like a second row of data returned for that alt_contact_id, whose data exists in the same table. In other words, we'll be re-running the same select statement but feeding it e_contacts.contact_id = alt_contact_id from the first select (if alt_contact_id has a value, of course).

This is the table structure: [image: table structure]

I'm completely uncertain how to accomplish this.
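One possible approach, sketched below with the question's table and column names and rewritten with explicit JOINs, is to UNION the same query against the alt contact. It assumes alt_contact_id is NULL when unused:

```sql
-- Sketch: return the main contact's rows, plus the alt contact's rows
-- when alt_contact_id is set. Assumes alt_contact_id is NULL when unused.
SELECT e_contacts.*, e_credentials.*
FROM e_contacts
JOIN e_credentials ON e_credentials.contact_id = e_contacts.contact_id
WHERE e_contacts.contact_id = '1'

UNION ALL

SELECT alt.*, e_credentials.*
FROM e_contacts AS main
JOIN e_contacts AS alt ON alt.contact_id = main.alt_contact_id
JOIN e_credentials ON e_credentials.contact_id = alt.contact_id
WHERE main.contact_id = '1'
  AND main.alt_contact_id IS NOT NULL;
```

Both halves of the UNION produce the same column list (e_contacts.* followed by e_credentials.*), which UNION ALL requires.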

MySQL Replication VS MySQL cluster for high availability

Posted: 18 Aug 2013 03:08 PM PDT

I need to figure out a solution for high availability in MySQL and I don't know what to use:

  • master-master replication
  • NDB Cluster

Thanks

MCSE: BI - Advice?

Posted: 18 Aug 2013 01:56 PM PDT

I recently got interested in data analysis but have no formal maths background, so the road to a career seems long and arduous with respect to getting on board with the maths. Then I was introduced to the world of BI, and Microsoft's BI qualifications seem to be the gold standard in my neck of the woods, being listed on every job description.

So, my question to you is: as a person with little to no experience of SQL Server (outside of a master's degree module covering T-SQL, stored procedures and the like), how long would it take to pass the 5 exams required to attain this certification if I studied in my own time after work? I currently work in a support role at a large university and have tech experience, but no opportunity to work with data or SQL.

Does anyone have any advice on the best way to study or prepare for these exams, in terms of online training material or courses, or any first-hand experiences?

In particular I was worried by the statement in the official training manual that reads: "Although this book helps prepare you for the 70-461 exam, you should consider it as one part of your exam preparation plan. Meaningful, real-world experience with SQL Server 2012 is required to pass this exam."

Truncated copy of a DB for a developer

Posted: 18 Aug 2013 03:19 PM PDT

I have access to a DB (SQL Server 2008 R2). The size of the DB is about 40 GB. I am a developer and I'd like to have a local copy of that DB to be able to work offline.

Is there an easy way to get a truncated version of that DB (all DB objects and about 1,000 rows per table)?

I have a backup of that DB on a test server, but I am not able to copy 40 GB over my slow connection, and I don't have permission to restore that backup, delete rows manually, and back it up again.

I can probably script the schema via Tasks -> Generate Scripts, but how do I script the data and limit it to ~1,000 rows per table, taking into account that there are about a hundred tables and that rows bound through FKs should remain consistent, without losing a row at either end?
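One possible sketch for the data part, assuming SQL Server and ignoring FK ordering for a moment (parent tables would need to be loaded first, and the 1,000-row samples chosen so child rows reference sampled parents); all table and database names below are placeholders:

```sql
-- Sketch: copy up to 1000 rows of one table into a stripped-down dev copy.
-- "SourceDB", "DevCopy" and "dbo.SomeTable" are hypothetical names.
-- FK-referenced parent rows must exist before child rows can load, so in
-- practice parents are sampled first and child samples filtered to match.
INSERT INTO DevCopy.dbo.SomeTable
SELECT TOP (1000) *
FROM SourceDB.dbo.SomeTable
ORDER BY 1;  -- deterministic sample; pick an ordering that keeps FK chains intact
```

Repeating this per table (e.g. from a cursor over sys.tables) gives the ~1,000-row copy; the hard part remains ordering tables by FK dependency.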

Inserting and updating records from one table to another

Posted: 18 Aug 2013 11:48 AM PDT

I want to insert records from one table to another, but there are 11 conditions for that. Below are my tables:

USE [dealsheet]
GO
/****** Object:  Table [dbo].[bse_trade] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[bse_trade](
    [segment] [varchar](3) NULL,
    [process1] [int] NULL,
    [memID] [int] NULL,
    [traderID] [bigint] NULL,
    [scd] [bigint] NULL,
    [symbol] [varchar](20) NULL,
    [tradeprice] [money] NULL,
    [tradedqty] [bigint] NULL,
    [oppmID] [int] NULL,
    [opptrID] [int] NULL,
    [tradetime1] [varchar](20) NULL,
    [tradetime2] [varchar](20) NULL,
    [accountID] [varchar](20) NULL,
    [trnID] [bigint] NULL,
    [trntyp] [varchar](3) NULL,
    [transtype] [varchar](4) NULL,
    [tradeID] [bigint] NULL,
    [instID] [varchar](10) NULL,
    [isin] [varchar](20) NULL,
    [scrgrp] [varchar](3) NULL,
    [settno] [varchar](20) NULL,
    [orderentrytime] [varchar](20) NULL,
    [locationID] [varchar](25) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF

==========================

USE [database name]
GO
/****** Object:  Table [dbo].[consolidated] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[consolidated](
    [segment] [varchar](6) NULL,
    [tradeID] [bigint] NULL,
    [accountID] [varchar](20) NULL,
    [symbol1] [varchar](20) NULL,
    [series1] [varchar](6) NULL,
    [tradedqty] [bigint] NULL,
    [tradeprice] [money] NULL,
    [transtype] [int] NULL,
    [tradetime] [varchar](20) NULL,
    [traderID] [bigint] NULL,
    [locationID] [varchar](25) NULL,
    [tradevalue] [money] NULL,
    [orderentrytime] [varchar](20) NULL,
    [process1] [int] NULL,
    [isin] [varchar](20) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF

and below are the conditions

1) Check the process1 flag in bse_trade where process1 = 0.
2) Set process1 = 1 where process1 = 0.
3) Set transtype = 1 where transtype = 'B', and transtype = 2 where transtype = 'S' (where process1 = 1).
4) Set tradeprice = tradeprice / 100 where process1 = 1.
5) Concatenate tradeid + transtype.
6) Check for the same combination in consolidated where segment = 'BSE'.
7) Insert into consolidated if it does not exist; update if it does.
8) Set process1 = 0 in consolidated on insert.
9) Set process1 = 2 in bse_trade on insert or update.
10) Set tradevalue = tradedqty x tradeprice in consolidated where process1 = 0.
11) Set process1 = 1 in consolidated.

Can anyone suggest how I would write a procedure for this?
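A rough T-SQL skeleton for the conditions as listed (a sketch only; some of the steps are ambiguous, and matching rows on the combination of tradeID and transtype is an assumption about what "concat tradeid + transtype" means):

```sql
-- Sketch of the 11 steps as one stored procedure (SQL Server 2008+).
CREATE PROCEDURE dbo.usp_consolidate_bse
AS
BEGIN
    -- Steps 1-2: claim unprocessed rows
    UPDATE dbo.bse_trade SET process1 = 1 WHERE process1 = 0;

    -- Steps 3-4: normalize transtype and tradeprice on the claimed rows
    UPDATE dbo.bse_trade
    SET transtype = CASE transtype WHEN 'B' THEN '1' WHEN 'S' THEN '2' ELSE transtype END,
        tradeprice = tradeprice / 100
    WHERE process1 = 1;

    -- Steps 5-8: upsert into consolidated, matching on tradeID + transtype
    MERGE dbo.consolidated AS tgt
    USING (SELECT * FROM dbo.bse_trade WHERE process1 = 1) AS src
        ON tgt.segment = 'BSE'
       AND tgt.tradeID = src.tradeID
       AND tgt.transtype = CAST(src.transtype AS int)
    WHEN MATCHED THEN
        UPDATE SET tradedqty = src.tradedqty, tradeprice = src.tradeprice
    WHEN NOT MATCHED THEN
        INSERT (segment, tradeID, accountID, symbol1, tradedqty, tradeprice,
                transtype, traderID, locationID, orderentrytime, isin, process1)
        VALUES ('BSE', src.tradeID, src.accountID, src.symbol, src.tradedqty,
                src.tradeprice, CAST(src.transtype AS int), src.traderID,
                src.locationID, src.orderentrytime, src.isin, 0);

    -- Step 9: mark source rows as done
    UPDATE dbo.bse_trade SET process1 = 2 WHERE process1 = 1;

    -- Steps 10-11: compute trade value, then flag consolidated rows
    UPDATE dbo.consolidated SET tradevalue = tradedqty * tradeprice WHERE process1 = 0;
    UPDATE dbo.consolidated SET process1 = 1 WHERE process1 = 0;
END;
```

Wrapping the whole body in a transaction would keep a failure halfway through from leaving rows half-processed.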

What does "Remove fully qualified host names from /etc/hosts and use these in config.ini for all cluster hosts" mean?

Posted: 18 Aug 2013 09:29 AM PDT

Caution You cannot mix localhost and other host names or IP addresses in config.ini. For these reasons, the solution in such cases (other than to use IP addresses for all config.ini HostName entries) is to remove the fully qualified host names from /etc/hosts and use these in config.ini for all cluster hosts.

http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-installation.html

I understand localhost can't be used in config.ini but I'm confused as to what has to be removed. I would think that a fully qualified name has to be defined for 127.0.0.1 then used in config.ini even though it's local.

I think that this note is off. Can someone with some mysql cluster experience explain a little?

Oracle 11G Startup failed after altering the number of system processes

Posted: 18 Aug 2013 11:25 AM PDT

When I alter the number of system processes to 100000 and restart the database, I get the error below:

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semids_per_proc failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: sskgpwcr2
ORA-27303: additional information: semids = 524, maxprocs = 100000

How can I set it back to the normal number of processes?
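If the instance no longer starts, one hedged way back (a sketch; the file name and the value 150 are examples) is to regenerate an editable pfile from the spfile, lower PROCESSES, and rebuild the spfile:

```sql
-- From SQL*Plus as SYSDBA, with the instance down.
-- Recreate an editable pfile from the existing spfile:
CREATE PFILE='/tmp/initrestore.ora' FROM SPFILE;

-- After editing /tmp/initrestore.ora to set processes back (e.g. processes=150):
CREATE SPFILE FROM PFILE='/tmp/initrestore.ora';
STARTUP;

-- Alternatively, if the instance still starts, simply:
ALTER SYSTEM SET processes=150 SCOPE=SPFILE;
-- then restart the database.
```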

Creating a USER using the imp command [duplicate]

Posted: 18 Aug 2013 04:39 AM PDT


I was told I can use exp to export a certain USER, then DROP the user and with the imp command I can recreate that USER with all of its tables.

  1. Is it true?
  2. How do I do it?

I already have an ADMIN user with just about all privileges (there are 204 of them if it matters).

I tried various options of exporting the USER with the ADMIN account or with the USER's account, but no matter what I do the imp command fails if I try to run it when the USER has been dropped.

Example command:

-- From CLI:
exp USERID=MYUSER/PASS@INSTANCE FILE=mydump.dmp FULL=y

-- From SQLPLUS with admin user
DROP USER MYUSER CASCADE;

-- From CLI:
imp ADMIN/PASS@INSTANCE FILE=mydump.dmp FULL=y
-- The above command will put the contents of MYUSER into the ADMIN account ...

imp ADMIN/PASS@INSTANCE FILE=mydump.dmp FROMUSER=MYUSER TOUSER=MYUSER
-- The above command fails with the below message:

. importing MYUSER's objects into MYUSER
IMP-00003: ORACLE error 1435 encountered
ORA-01435: user does not exist
Import terminated successfully with warnings.
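For a single schema, the usual pattern is a user-mode export with OWNER= instead of FULL=y, and recreating the user before the import, since a schema-level import with FROMUSER/TOUSER expects the target user to already exist (a sketch in the same style as above; the grants and password are placeholders):

```sql
-- From CLI: export only the MYUSER schema
exp ADMIN/PASS@INSTANCE FILE=mydump.dmp OWNER=MYUSER

-- From SQLPLUS: drop, then recreate the user before importing
DROP USER MYUSER CASCADE;
CREATE USER MYUSER IDENTIFIED BY PASS;
GRANT CONNECT, RESOURCE TO MYUSER;

-- From CLI: import the schema back into the recreated user
imp ADMIN/PASS@INSTANCE FILE=mydump.dmp FROMUSER=MYUSER TOUSER=MYUSER
```

This would also explain the ORA-01435 above: the import was run while MYUSER no longer existed.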

MySQL Cluster - strange syntax error

Posted: 18 Aug 2013 07:41 AM PDT

I have a MySQL Cluster instance and want one column (a BLOB) to be stored on disk instead of in memory.
I tried to follow this tutorial: http://mikaelronstrom.blogspot.com/2006/02/how-to-define-table-that-uses-disk.html
And this is the output I am getting:

mysql> CREATE DATABASE test1;
Query OK, 1 row affected (0.00 sec)

mysql> use test1;
Database changed

mysql> CREATE LOGFILE GROUP lg1
    -> ADD UNDOFILE 'undofile.dat'
    -> INITIAL_SIZE 16M
    -> UNDO_BUFFER_SIZE = 1M
    -> ENGINE = NDB;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '16M
UNDO_BUFFER_SIZE = 1M
ENGINE = NDB' at line 3

Here is syntax for this command: http://dev.mysql.com/doc/refman/5.5/en/create-logfile-group.html
Any ideas why this is not working? I cannot find where I am violating this command syntax.

hbase vs postgres performance comparison? [on hold]

Posted: 18 Aug 2013 03:09 AM PDT

I know that this is comparing apples to oranges, but sometimes you have to choose what to eat...

I'm trying to find a rule of thumb for the magnitudes at which to choose HBase over Postgres. E.g., I want to run 'select sum(x) from t group by y'. Obviously this can be done with an HBase scan, but at what volumes does the performance difference become substantial?

Looking for any information.

Why am I losing rows on left join?

Posted: 18 Aug 2013 11:17 AM PDT

DROP TABLE IF EXISTS tg.ldap_large;

CREATE TABLE tg.ldap_large
SELECT
    ldap_full.uid,
    ldap_full.mail,
    ldap_full.givenname,
    ldap_full.sn,
    ldap_full.thomssupervisorid,
    ldap_full.thomssupervisoriddescr,
    sup_email.mail AS supemail
FROM
    ldap_full
LEFT JOIN ldap_full AS sup_email ON ldap_full.thomssupervisorid = sup_email.uid;

ALTER TABLE ldap_large CHANGE uid uid VARCHAR(10) NOT NULL PRIMARY KEY;

I have a feed that comes in with all employees. It has each employee's supervisor uid but no supervisor email. I'm just trying to add a supervisor email column using a LEFT JOIN. The problem is that I lose a few rows doing this. Note that some users' supervisor uid will be NULL, and we have seen employees that report to themselves. I would just like to take my existing table and add on the supervisor email, and if there is none, then NULL.
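A LEFT JOIN on its own should never drop rows from the left table (unmatched rows simply get NULL for supemail), so a hedged diagnostic sketch is to find which uids went missing and whether duplicates are involved; duplicate uid values in the source would also break the final PRIMARY KEY change:

```sql
-- Which source rows are absent from the new table?
SELECT f.uid
FROM ldap_full AS f
LEFT JOIN ldap_large AS l ON l.uid = f.uid
WHERE l.uid IS NULL;

-- Are there duplicate uids in the source? A PRIMARY KEY on uid cannot hold them.
SELECT uid, COUNT(*) AS n
FROM ldap_full
GROUP BY uid
HAVING COUNT(*) > 1;
```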

SQL Server 2012 High Availability on VMware virtualization [duplicate]

Posted: 18 Aug 2013 03:58 AM PDT


We're going to host a large SQL Server 2012 under a VMware virtualization and we need it to be highly available.

These are the three options I found supported, but I couldn't find a lot of info comparing which is better and why (and mostly, what the disadvantages are):

  1. VMware HA - pro: cheap and easy to use and configure, just let the VM team deal with it. con: no HA for system upgrades, e.g. SQL upgrades, Windows services-packs\KBs etc.
  2. SQL Server Failover Cluster (FCS) - pro: well known, supported, saves space (compared to AlwaysOn Availability Group), HA for system upgrades. con: I haven't heard of successful clusters over VM, or at least good ones. Heard it's a mess and to "stay away from it". A bit harder to admin, some problems may take days to figure out (no-one with a lot of knowledge).
  3. SQL Server AlwaysOn Availability Group - pro: HA for everything! system, DB etc. con: new, added administration and hard to configure, new problems to deal with, double the space(!), "no strings attached" (to other servers).

That's pretty much it.

I would appreciate any info you may have about this, and would appreciate even more recommendations (hopefully based on test cases).

~Thanks.

SQL Server 2012 Express fails at repair install, produces error 5178

Posted: 18 Aug 2013 09:04 AM PDT

My SQL Server Express Service will not start up.

To produce this problem, I basically cloned my old hard drive (which had SQL Server Express 2012 installed) to a new hard drive (Seagate Momentus XT 750).

EDIT: I am adding info on how I cloned my hard drive, as requested by SQLRockStar. I used Seagate DiscWizard. The program was producing errors when trying to clone the HD using the simple "clone HD" command, so I "cloned" it the following way with the help of Seagate tech support:

  • Place the new hard drive (Momentus XT) in the slave slot, the old HD in the master.
  • Create a backup image of the old HD and save it on the old HD.
  • Create a Seagate DiscWizard bootable CD.
  • Physically swap both hard drive positions (the old one is now in the slave slot, the new one in the master).
  • Boot with the Seagate DiscWizard bootable CD and restore the backup image of the old HD onto the new HD.

Afterwards, I tried starting SQL Server Express 2012 on my Momentus XT and it would not start. So, I tried performing a repair installation of SQL Server, and it failed: see summary below:

Overall summary:
  Final result:                  Failed: see details below
  Exit code (Decimal):           -2061893608
  Start time:                    2013-08-12 15:53:13
  End time:                      2013-08-12 16:13:13
  Requested action:              Repair

Setup completed with required actions for features. Troubleshooting information for those features:
  Next step for SQLEngine:       Use the following information to resolve the error, and then try the setup process again.
  Next step for Replication:     Use the following information to resolve the error, and then try the setup process again.

Machine Properties:
  Machine name:                  MATT-LAPTOP
  Machine processor count:       8
  OS version:                    Windows 7
  OS service pack:               Service Pack 1
  OS region:                     United States
  OS language:                   English (United States)
  OS architecture:               x64
  Process architecture:          64 Bit
  OS clustered:                  No

Product features discovered:
  Product           Instance    Instance ID         Feature                   Language  Edition          Version       Clustered
  SQL Server 2008                                   Management Tools - Basic  1033      Express Edition  10.0.1600.22  No
  SQL Server 2012   SQLEXPRESS  MSSQL11.SQLEXPRESS  Database Engine Services  1033      Express Edition  11.0.2316.0   No
  SQL Server 2012   SQLEXPRESS  MSSQL11.SQLEXPRESS  SQL Server Replication    1033      Express Edition  11.0.2316.0   No
  SQL Server 2012                                   Management Tools - Basic  1033      Express Edition  11.0.2316.0   No
  SQL Server 2012                                   LocalDB                   1033      Express Edition  11.0.2318.0   No

Package properties:
  Description:                   Microsoft SQL Server 2012
  ProductName:                   SQL Server 2012
  Type:                          RTM
  Version:                       11
  SPLevel:                       0
  Installation location:         c:\215ca8b216eb992f2f4a\x64\setup\
  Installation edition:          Express

User Input Settings:
  ACTION:                        Repair
  AGTDOMAINGROUP:                <empty>
  AGTSVCACCOUNT:                 NT AUTHORITY\NETWORK SERVICE
  AGTSVCPASSWORD:                <empty>
  AGTSVCSTARTUPTYPE:             Disabled
  ASCONFIGDIR:                   Config
  ASSVCACCOUNT:                  <empty>
  ASSVCPASSWORD:                 <empty>
  CLTSTARTUPTYPE:                0
  CLTSVCACCOUNT:                 <empty>
  CLTSVCPASSWORD:                <empty>
  CONFIGURATIONFILE:
  CTLRSTARTUPTYPE:               0
  CTLRSVCACCOUNT:                <empty>
  CTLRSVCPASSWORD:               <empty>
  ENU:                           true
  FAILOVERCLUSTERGROUP:          <empty>
  FAILOVERCLUSTERNETWORKNAME:    <empty>
  FTSVCACCOUNT:                  <empty>
  FTSVCPASSWORD:                 <empty>
  HELP:                          false
  INDICATEPROGRESS:              false
  INSTANCENAME:                  SQLEXPRESS
  ISSVCACCOUNT:                  NT AUTHORITY\Network Service
  ISSVCPASSWORD:                 <empty>
  ISSVCSTARTUPTYPE:              Automatic
  QUIET:                         false
  QUIETSIMPLE:                   false
  SQLSVCACCOUNT:                 NT Service\MSSQL$SQLEXPRESS
  SQLSVCPASSWORD:                <empty>
  UIMODE:                        AutoAdvance
  X86:                           false

Configuration file:              C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130812_155231\ConfigurationFile.ini

Detailed results:
  Feature:                       Management Tools - Basic
  Status:                        Passed

  Feature:                       Database Engine Services
  Status:                        Failed: see logs for details
  Reason for failure:            An error occurred during the setup process of the feature.
  Next Step:                     Use the following information to resolve the error, and then try the setup process again.
  Component name:                SQL Server Database Engine Services Instance Features
  Component error code:          0x851A0018
  Error description:             Could not find the Database Engine startup handle.
  Error help link:               http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2316.0&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024

  Feature:                       SQL Server Replication
  Status:                        Failed: see logs for details
  Reason for failure:            An error occurred for a dependency of the feature causing the setup process for the feature to fail.
  Next Step:                     Use the following information to resolve the error, and then try the setup process again.
  Component name:                SQL Server Database Engine Services Instance Features
  Component error code:          0x851A0018
  Error description:             Could not find the Database Engine startup handle.
  Error help link:               http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2316.0&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4024

  Feature:                       SQL Browser
  Status:                        Passed

  Feature:                       SQL Writer
  Status:                        Passed

  Feature:                       LocalDB
  Status:                        Passed

  Feature:                       SQL Client Connectivity
  Status:                        Passed

  Feature:                       SQL Client Connectivity SDK
  Status:                        Passed

  Feature:                       Setup Support Files
  Status:                        Passed

Rules with failures:
  Global rules:
  There are no scenario-specific rules.

Rules report file:               C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130812_155231\SystemConfigurationCheck_Report.htm

The following warnings were encountered while configuring settings on your SQL Server. These resources / settings were missing or invalid so default values were used in recreating the missing resources. Please review to make sure they don't require further customization for your applications:

Service SID support has been enabled on the service.
Service SID support has been enabled on the service.

The following resources could not be configured during repair without additional user input. Review the warnings to understand your next steps:

The service failed to start for an unknown reason. For more information, see the event logs and the SQL Server error logs.

I looked at the error log and it said

Error: 5178, Severity: 16, State: 1 Cannot use file 'c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\master.mdf' because it was originally formatted with sector size 4096 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.

I read that last error message and am really confused. I'm led to believe that this is a problem with SQL Server; my HD has a 4096-byte sector size.

UPDATE:

More information: I have discovered that my old hard drive had a 512-byte physical sector size and my new HD has a 4096-byte sector size. I hear there are conversion issues between the two sector sizes, but SQL Server is the only program producing errors on my system; I don't understand it.

UPDATE:

To fix this problem, I am willing to delete the master DB file and recreate it through a reinstall, but I don't know how to delete it.

Postgresql Streaming Replication - pgpool2 - failover

Posted: 18 Aug 2013 08:41 PM PDT

In my scenario I want pgpool to forward read only queries to slaves when the master goes down - I want my app to be in the "read only mode".

How do I set up pgpool to accept read-only queries when the master fails (streaming replication)?

Currently, when the master goes down, pgpool waits for the master and doesn't forward any queries to the slaves.

Why set up static data in views vs. using tables in mysql?

Posted: 18 Aug 2013 11:35 AM PDT

I get an LDAP feed nightly. I get it as a text file and dump/create my LDAPALL table. There are roughly 75K employees times about 50 fields.

I have the following too:

LDAPIMPORTANT - view that stores all 75K but only 15 fields

LDAPSHORT - view that stores all 75k but 5 fields

LDAPAB - view that only stores 9k employees based on two groups (field lookup)

Each of these is used a lot, for different apps, and there are also a lot of views written against these views. But there are no updates to them; we do not update employee data. It is just LDAPALL, updated once a night.

In this circumstance, should I create tables from the LDAPALL table instead of views? I could set up jobs to create these tables once a night. What is the best practice here? Speak in layman's terms, because I am a PHP developer made to do all the DB admin stuff.
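If the underlying data changes only once a night and the views are read heavily, materializing a view as a table right after the LDAPALL load can pay off, since each query then avoids re-running the view's SELECT over all 75K rows. A sketch (view name from the question; the _new/_old rename dance and the _tbl name are assumptions, used so readers never see a half-built table):

```sql
-- Sketch: rebuild a table copy of one view after the nightly load.
CREATE TABLE LDAPIMPORTANT_new AS SELECT * FROM LDAPIMPORTANT;

-- Swap names so readers always see a complete table
-- (assumes LDAPIMPORTANT_tbl already exists from a previous night):
RENAME TABLE LDAPIMPORTANT_tbl TO LDAPIMPORTANT_old,
             LDAPIMPORTANT_new TO LDAPIMPORTANT_tbl;
DROP TABLE LDAPIMPORTANT_old;
```

RENAME TABLE swaps both names in one statement, so queries against LDAPIMPORTANT_tbl never catch the table mid-rebuild.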

What is the difference between NULL and \N while loading data from a csv using load data local infile

Posted: 18 Aug 2013 10:08 AM PDT

I have a table which has four fields

create table test_hc1(cola int,colb int,colc varchar(20),cold int);  

Now, I am trying to load following data

1,2,\N,4  

and the third field in the table is coming out as NULL, which is the expected result.

In another case, I am trying to load following data

1,2,NULL,4  

and it is also loading the third field as NULL.

Here is my question: what is the difference between using the two (\N and NULL)?

I am using

LOAD DATA LOCAL INFILE 'PATH_TO_DATAFILE'
INTO TABLE TABLE_NAME FIELDS TERMINATED BY x'01';
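Whatever the loader did, the result can be inspected directly; this sketch distinguishes a true NULL in colc from the four-character string 'NULL' having been stored:

```sql
-- NULL-safe inspection of what was actually loaded into colc
SELECT cola, colb,
       (colc IS NULL)  AS is_real_null,
       (colc = 'NULL') AS is_literal_string
FROM test_hc1;
```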

Any help is appreciated.

Thanks

Create spfile from pfile

Posted: 18 Aug 2013 03:08 PM PDT

I'm having a problem creating an spfile from a pfile. I think I don't have permission to write to the Oracle home. I'm using Windows 8.

SQL> startup nomount pfile='initlittle2.ora.txt';
ORACLE instance started.

Total System Global Area  644468736 bytes
Fixed Size                  1376520 bytes
Variable Size             192941816 bytes
Database Buffers          444596224 bytes
Redo Buffers                5554176 bytes
SQL> create spfile from pfile;
create spfile from pfile
*
ERROR at line 1:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file
'A:\ORACLEBASE\PRODUCT\11.2.0\OLTP2\DATABASE\INITLITTLE2.ORA'

Any workaround for my problem?
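The LRM-00109 line shows Oracle probing the default pfile name (INITLITTLE2.ORA) rather than the .txt file actually used at startup, so one hedged workaround (the spfile path below is a hypothetical example) is to name the pfile explicitly; on Windows, the shell may also need to run as Administrator so Oracle can write into ORACLE_HOME\database:

```sql
-- Point CREATE SPFILE at the pfile that was actually used:
CREATE SPFILE FROM PFILE='initlittle2.ora.txt';

-- Or spell out both paths (the spfile name here is an example):
CREATE SPFILE='A:\ORACLEBASE\PRODUCT\11.2.0\OLTP2\DATABASE\SPFILELITTLE2.ORA'
  FROM PFILE='initlittle2.ora.txt';
```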

The InnoDB log sequence number is in the future

Posted: 18 Aug 2013 07:08 PM PDT

Our site was getting intermittent database errors, and I asked my webhost to check what's happening. After some inspection they found that the database had a few issues and tried repairing it. Finally, I got the following message from them:

I have attempted every repair on the InnoDB database and we are still getting the InnoDB log sequence number is in the future. At this point to get the ibdata and the iblogfile to match up once again we will need to restore the MySQL directory (which includes databases) from the backup we have on the server. The process should not take to long but there will be some downtime associated with a restore like this. If this is not the best time to restore the MySQL directory I can schedule this for a different time. Please let me know how you would like to proceed with this.

Can someone tell me what would be the best way to address this issue? I really don't want to lose any data and want the DB to be repaired.

PS: If you need more information, please let me know and I'll get it from our web hosts.

Would really appreciate your help.

Why are there different levels of MySQL collation/charsets?

Posted: 18 Aug 2013 09:08 AM PDT

Looking at the MySQL server variables, the server and database collation are set to latin1_swedish_ci but collation_connection is utf8_general_ci. Additionally, the collation/charset is set at many levels: server, database, table and column. You also have the charset of the PHP MySQL connection to consider.

My question is fourfold:

  1. Why are there different levels of MySQL collation/charsets? Is it so you can mix up your character sets to suit your needs? Correct me if I'm wrong but utf8 seems to be the best character set for general use so why is it that most of the LAMP setups I've used have latin1 as default?

  2. Should you always ensure your PHP connection matches the charset of the database you're working on?

  3. If you can have different tables that use different character sets do you just use SET NAMES or mysql(i)_set_charset to switch?

  4. If you have a table that has multiple charsets how do you manage that since the connection can only use one charset at a time?

Many thanks.

EDIT:

Regarding the comment:

"You can use either, the former one only to set results charset and the latter one to set either PHP internal encoding for use with mysqli_real_escape_string and results encoding."

I thought you aren't meant to use real_escape_string and SET NAMES together. See: http://www.php.net/manual/en/mysqlinfo.concepts.charset.php
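As a quick illustration of the levels (a sketch; the SHOW statements are standard MySQL, while the database and table names are hypothetical): defaults cascade from server to database to table to column, and the connection charset is negotiated separately per session, which is why they can legitimately differ.

```sql
-- Inspect the per-level settings currently in effect:
SHOW VARIABLES LIKE 'character_set_%';
SHOW VARIABLES LIKE 'collation_%';

-- Defaults cascade downwards; an explicit setting at any level overrides them:
CREATE DATABASE demo DEFAULT CHARACTER SET latin1;
CREATE TABLE demo.t (
    a VARCHAR(10),                      -- inherits latin1 from the table/database
    b VARCHAR(10) CHARACTER SET utf8    -- column-level override
) DEFAULT CHARACTER SET latin1;

-- The connection charset is per-session, independent of storage charsets:
SET NAMES utf8;
```

MySQL converts between the column charset and the connection charset on the fly, which is how a single connection can read columns stored in different charsets.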

how to verify mysql backup file

Posted: 18 Aug 2013 06:08 PM PDT

Is there any tool to verify a backup taken using mysqldump without manually restoring it?

I have to take a full backup daily, so it is difficult for me to verify the backups by restoring them every day, as that takes time and temporary disk space.

Please suggest how I can verify my backup online.
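Short of a test restore (which remains the only real proof the dump loads cleanly), one lightweight sanity check is possible because mysqldump writes a "-- Dump completed" comment as the final line when it finishes without being interrupted. A sketch; the backup path is a placeholder:

```shell
# Sketch: verify a mysqldump file ends with the "-- Dump completed" marker
# that mysqldump appends when it finishes uninterrupted.
check_dump() {
    if tail -n 1 "$1" | grep -q '^-- Dump completed'; then
        echo "OK: $1"
    else
        echo "INCOMPLETE: $1"
    fi
}

# Example (hypothetical path):
# check_dump /var/backups/nightly.sql
```

This only proves the dump was written to the end, not that its contents restore without errors, so an occasional restore into a scratch instance is still worthwhile.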

Service Broker stops working

Posted: 18 Aug 2013 07:08 AM PDT

I'm new to Service Broker. Our existing Service Broker suddenly stops working and I can't figure out why.

When I try to enable Service Broker (which is already enabled) I get this in SQL error log:

An error occurred in the service broker manager
Error: 3602, State: 25.

I also tried creating new message queues. I am able to see the created queues under sys.service_queues and sys.objects, but not in sys.dm_broker_queue_monitors.

Hierarchical Structure

Posted: 18 Aug 2013 05:08 PM PDT

Every single user has, say, 3 GROUP_As, 10 GROUP_Bs per GROUP_A, and 20 GROUP_Cs per GROUP_B. And each of the 20 GROUP_Cs involves lots of inserts/deletes...

I'm not an expert, and while I've done research, it's all theoretical at this point; I don't have hands-on experience with the implementation, that's for sure. I think my options are something like 'adjacency lists' or 'nested sets'?

Any guidance into the right direction would be very much appreciated!
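Since the hierarchy is only three fixed levels with heavy insert/delete churn, a plain adjacency list (each row pointing at its parent) is usually the simpler fit; nested sets make whole-subtree reads fast but force renumbering on every insert and delete. A sketch with hypothetical table and column names:

```sql
-- Adjacency-list sketch: one table for all three group levels.
CREATE TABLE groups (
    id        BIGINT PRIMARY KEY AUTO_INCREMENT,
    user_id   BIGINT NOT NULL,
    parent_id BIGINT NULL,          -- NULL for a top-level GROUP_A
    level     TINYINT NOT NULL,     -- 1 = GROUP_A, 2 = GROUP_B, 3 = GROUP_C
    name      VARCHAR(100) NOT NULL,
    FOREIGN KEY (parent_id) REFERENCES groups (id),
    KEY idx_parent (parent_id),
    KEY idx_user (user_id)
);

-- All GROUP_Cs under one GROUP_B (42 is an example id):
SELECT * FROM groups WHERE parent_id = 42 AND level = 3;
```

With a fixed depth of three, every query is a direct parent_id lookup or at most two joins, so the usual drawback of adjacency lists (arbitrary-depth traversal) never comes up.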

Error AWR Operation failed: CATPROC not valid when creating database with dbca

Posted: 18 Aug 2013 06:08 AM PDT

I have a fresh installation of Oracle 11g Enterprise Edition on CentOS 6, both 64-bit. I have installed Oracle and so far everything looks fine; I have already added and started a listener. The problem comes when I try to create a test DB using dbca. Everything goes well until the post-creation part, which pops up an error. The error shows the same text as /opt/app/oracle/cfgtoollogs/dbca/testingDB/postDBCreation.log:

utl_recomp_begin: 09:34:38
utl_recomp_end: 09:34:42
BEGIN dbms_swrf_internal.cleanup_database(cleanup_local => FALSE); END;

*
ERROR at line 1:
ORA-13516: AWR Operation failed: CATPROC not valid
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 206
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 239
ORA-06512: at line 1

The creation of the DB cannot continue and it stops here...

I have searched everywhere on the internet but no luck so far. I have no clue what it could be. Any suggestions?
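ORA-13516 reports that the CATPROC component is not VALID. A hedged first diagnostic (standard dictionary views and the recompile script shipped with the database, run as SYSDBA) would be:

```sql
-- Which components are registered, and which are invalid?
SELECT comp_id, comp_name, status FROM dba_registry;

-- Recompile invalid objects, then re-check:
@?/rdbms/admin/utlrp.sql
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
```

If CATPROC still shows INVALID after utlrp.sql, the catalog scripts themselves likely failed during creation, which points back at the installation rather than dbca.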

Thanks in advance,

Why does that query cause lock wait timeouts?

Posted: 18 Aug 2013 02:08 PM PDT

From time to time, I find a lot of these errors in my PHP error log:

MYSQL.1213: Deadlock found when trying to get lock; try restarting transaction

The problem persists for about 2 or 3 minutes. Thanks to Stack Overflow, the reason was quite easy to find:

------------------------
LATEST DETECTED DEADLOCK
------------------------
130320 15:53:37
*** (1) TRANSACTION:
TRANSACTION 0 83395751, ACTIVE 35 sec, process no 26405, OS thread id 140507872417536 starting index read
mysql tables in use 3, locked 3
LOCK WAIT 3 lock struct(s), heap size 1216, 2 row lock(s)
MySQL thread id 1163191, query id 199629038 localhost sosci Updating
UPDATE `database`.`table` SET `invalidate`='2013-03-21 03:53:02' WHERE ((token='C7G8X3HABCDEFGH') AND (invalidate IS NULL)) AND (project=26118) LIMIT 1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395751 lock_mode X locks rec but not gap waiting
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

*** (2) TRANSACTION:
TRANSACTION 0 83395676, ACTIVE 37 sec, process no 26405, OS thread id 140507856160512 fetching rows, thread declared inside InnoDB 451
mysql tables in use 1, locked 1
4 lock struct(s), heap size 1216, 53 row lock(s), undo log entries 1
MySQL thread id 1163198, query id 199628885 localhost sosci updating
DELETE FROM `database`.`table` WHERE ((action="limit") AND (info='login') AND (creation < DATE_SUB(NOW(), INTERVAL 10 MINUTE)))
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X waiting
Record lock, heap no 117 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c31d2; asc   1 ;; 1: len 6; hex 000004f884fc; asc       ;; 2: len 7; hex 80000011040110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: SQL NULL; 5: len 1; hex 06; asc  ;; 6: SQL NULL; 7: len 15; hex 3133322e3139392e3132312e313632; asc 132.199.121.162;; 8: len 5; hex 6c6f67696e; asc login;; 9: len 1; hex 81; asc  ;; 10: len 8; hex 8000124ef49502aa; asc    N    ;; 11: SQL NULL;

*** WE ROLL BACK TRANSACTION (1)

What I do not understand is: why? The locked table is very small: only 61 rows (about 30 inserted and 30 deleted per day; the auto-increment primary key is near 800,000). No column is especially large.

I use the InnoDB engine for this table (one key refers to another table with approx. 20,000 entries) and the problem occurs from time to time. RAM should not be an issue. The web server and MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) in large tables (1-2 million entries) did not cause any trouble.

Thanks for your hints!
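One detail in the log that may explain it: the DELETE in transaction (2) filters on `action`, `info`, and `creation`, none of which appear to be covered by the `PRIMARY` index it is scanning, so InnoDB walks and locks far more rows than it deletes (53 row locks in the output above). A sketch of a possible mitigation, assuming those column names match your actual schema (verify before applying):

```sql
-- Hypothetical: an index covering the DELETE's WHERE clause, so InnoDB
-- can locate and lock only the matching rows instead of scanning the
-- table under X locks (which is what collides with the UPDATE).
ALTER TABLE `database`.`table`
  ADD INDEX idx_action_info_creation (action, info, creation);
```

With a covering index, the housekeeping DELETE and the token UPDATE are far less likely to touch the same rows in a conflicting order.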

SQL Server 2005 not collecting logins in the log file

Posted: 18 Aug 2013 01:08 PM PDT

I am currently running SQL Server 2005 Standard Edition on a Windows 2003 server machine.

I have gone to the properties of the server and confirmed that Login Auditing is set to both failed and successful logins. For some reason, though, there is nothing in the SQL Server logs for either failed or successful logins, even though I know both have occurred.

I have searched for the reason for this and have not been able to come up with anything helpful, so I am hoping that someone here can give me a little direction. This is working on my other SQL Servers, so I know where to look for the results, but there are none there.
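One thing worth ruling out: the login auditing level is read at service startup, so a change made in the server properties dialog only takes effect after the SQL Server service is restarted. You can check the value the instance is actually running with (as opposed to what the dialog shows) using `xp_loginconfig`:

```sql
-- Reports the effective login auditing level ('none', 'successful
-- logins only', 'failure', or 'all'). If this still says 'none',
-- the setting change has not taken effect yet - restart the service.
EXEC master.dbo.xp_loginconfig 'audit level';
```

If the effective value already matches the dialog, the problem lies elsewhere and this at least eliminates the most common cause.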

SQL Server 2008 R2 corrupted after changing MAXDOP

Posted: 18 Aug 2013 04:08 PM PDT

My SQL Server 2008 R2 instances stopped working after I changed the MAXDOP setting.

I have 6 instances on the same physical server, and I changed MAXDOP from 1 to 2 on 5 of them. After that, none of those 5 instances work. After restarting the server, the services do not start and I just see the error:

The request failed or the service did not respond in a timely fashion. Consult the event log or other application error logs for details

In Event Viewer I didn't find anything useful, and the error log wasn't updated when I tried to start an instance.

I also noticed a strange thing. When I open Configuration Manager and open the properties of an instance, the parameter names do not render on the Advanced tab; on the other tabs everything is fine. A screenshot is attached. Has anybody faced the same problem?

I don't have any idea how to repair this...

[screenshot: error]

Update: By the way, I tried to start SQL Server manually from the command line with the -f or -m parameter, but nothing happened; I just see an empty error window.
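For reference, once an instance is reachable again (for example after a successful minimal-configuration start with -f), the supported way to change MAXDOP is through `sp_configure` rather than any direct edit; this sketch assumes sysadmin rights on the instance:

```sql
-- MAXDOP is an advanced option, so expose it first, then set it.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;
```

The change is dynamic and does not require a service restart, so a MAXDOP change by itself should never prevent an instance from starting; that points to something else having gone wrong on this server at the same time.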

Applying user-defined fields to arbitrary entities

Posted: 18 Aug 2013 08:08 PM PDT

Currently we have an old (rather crude) system that has user-defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:

USER_DEFINED_FIELDS:

UDF_ID         int
ENTITY_TYPE    int
UDF_NAME       nvarchar(64)
UDF_DATA_TYPE  int
UDF_FORMAT     nvarchar(16)
UDF_MASK       nvarchar(24)
UDF_FLAGS      int

UDF_VALUES_NUMBER:

UDF_ID     int
ENTITY_ID  int
VALUE      int
MODIFIED   datetime

UDF_VALUES_TEXT:

UDF_ID     int
ENTITY_ID  int
VALUE      nvarchar(255)
MODIFIED   datetime

etc...

This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story.

Obviously this is pretty hideous, and leads to some spectacularly horrific queries being generated, but it's worked alright for now because we limit each entity to a maximum of 5 user-defined fields. As a quick disclaimer, I wasn't with the company when this design decision was made! ;)

Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2.

So far, I've come up with the following options:

  • Individual UDF table for each entity type, with identical structures.
    • Benefits: Queries are faster and simpler, solves the compound index problem, can search by UDF.
    • Downsides: Table versioning is more cumbersome, lots of extra tables, difficult to index across similar entity types (e.g. purchase orders and sales orders)
  • Binary blob UDF column in every entity's table.
    • Benefits: Really fast: no fetch overhead, no extra JOINs.
    • Downsides: Can't search by UDF, extra proprietary code required.
  • XML column in every entity's table.
    • Benefits: Fast like binary blobs, seems to be indexable.
    • Downsides: No idea how they work - looks complicated!

So, do any of these ideas have merit? Is there a better alternative?
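For what it's worth, the XML-column option is less mysterious than it looks. A minimal sketch in SQL Server 2008 R2 (the table and UDF names here are made up for illustration): an `xml` column can be given an XML index and queried with the xml data type's `exist()` and `value()` methods, which keeps UDFs searchable without the per-type value tables:

```sql
-- Hypothetical entity table with a single XML column holding all UDFs,
-- e.g. <udfs><udf name="Priority" value="High"/></udfs>
CREATE TABLE SalesOrder (
    OrderId int IDENTITY PRIMARY KEY,
    Udf     xml NULL
);

-- A primary XML index materializes the node table so XQuery
-- predicates don't have to shred the XML at query time.
CREATE PRIMARY XML INDEX PXML_SalesOrder_Udf
    ON SalesOrder (Udf);

-- Filter on one UDF and project another.
SELECT OrderId,
       Udf.value('(/udfs/udf[@name="DueDate"]/@value)[1]', 'nvarchar(64)') AS DueDate
FROM   SalesOrder
WHERE  Udf.exist('/udfs/udf[@name="Priority"][@value="High"]') = 1;
```

This trades some raw speed against the binary-blob option for searchability; secondary XML indexes (PATH/VALUE/PROPERTY) can be layered on if the `exist()` predicates become hot.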

SQL Azure: More Intermittent Timeouts

Posted: 18 Aug 2013 08:21 AM PDT

(also posted on MSDN forums http://social.msdn.microsoft.com/Forums/en-US/ssdsgetstarted/thread/79058c06-d3c5-4073-8571-c29d1f43f7ae)

Hi guys,

We have a set of 5 online auction systems running on Windows Azure & SQL Azure. Each system consists of a single worker role and one or more web roles. Each system uses ASP.NET MVC 3 and Entity Framework, the Repository pattern, and StructureMap.

The worker role is responsible for housekeeping and runs two groups of processes: one group runs every ten seconds, the other every second. Each process will likely run a database query or stored procedure. These are scheduled with Quartz.NET.

The web roles serve the public interface and back office. Among other basic CRUD functionality, both of these provide screens which, when open, repeatedly call controller methods that execute read-only stored procedures. The frequency of repetition is about 2-3 seconds per client. A typical use case would be 5 back office windows and 25 end user windows open, all hitting the system repeatedly.

For a long time we have been experiencing intermittent SQL timeout errors. Three of the most common ones are:

System.Data.SqlClient.SqlException: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

System.Data.SqlClient.SqlException: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)

System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

The only predictable scenario is during an auction, where a specific controller -> sproc starts to time out during the event (presumably due to load). At all other times the errors appear to be completely random and come in singles, twos, threes, etc., even during periods of user inactivity. For example, the system will go 18 hours without an error and then there could be 5-10 errors from different housekeeping methods, or perhaps from a user logging on and viewing their account.

Other info:

I have tried to run the affected queries/sprocs on SQL Azure using both local SSMS and the Azure web-based query tool; all seem to execute quickly, 1 second max. Query plans aren't showing anything too suspicious, although I am by no means a SQL query performance expert, or any other kind of expert for that matter :)

We have wrapped all affected areas in Azure SQL Transient Fault Handling Blocks – but as is discussed here http://social.msdn.microsoft.com/Forums/en-US/ssdsgetstarted/thread/7a50985d-92c2-472f-9464-a6591efec4b3, they do not catch timeouts, and according to "Valery M" this is for good reason.

We are not storing any session information in the database, although asp.net membership information is stored in the database.

We use 1 "SQL Azure server instance" which hosts all 5 databases, two for staging and three for production. All 5 systems are generally active at the same time although it is unlikely that more than one will be in a state of live load use at any given time. All web roles, worker roles and the SQL Azure server reside in the same Azure Geographical Region.

Any thoughts on where we should be looking? Would it help to give each system its own SQL Azure server? Failing a solution by ourselves, is it possible to get Microsoft to open a support ticket and take a look under the hood at what's going on with our application, and how does one go about this?

Thanks in advance.

Ilan

Import an Oracle DMP file into a fresh install of Oracle

Posted: 18 Aug 2013 04:01 AM PDT

A client sent us an Oracle database we need to test against. We don't use Oracle or have any Oracle expertise in-house.

We need to setup the database so we can connect to it and debug a problem.

I did a fresh install of Oracle 9 (the version the client is running) and the management tools.

I cannot for the life of me get it to import the data. It cannot be this complicated; I must be getting something wrong.

I've tried:

imp 'sys/password AS SYSDBA' file=c:\database.dmp full=yes log=implog.txt  

and got:

Connected to: Personal Oracle9i Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path

Warning: the objects were exported by SYSTEM, not by you

import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SYSTEM's objects into SYS
. importing ABCDE's objects into ABCDE
IMP-00003: ORACLE error 1435 encountered
ORA-01435: user does not exist
Import terminated successfully with warnings.

But nothing shows up in the manager as far as tables in any schema, and I'm at my wits' end.
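ORA-01435 in this context usually means the target schema (ABCDE in the log above) does not exist in the fresh database, so the import has nowhere to put the objects. A sketch of the usual fix; the password, and granting CONNECT/RESOURCE rather than finer-grained privileges, are placeholder choices:

```sql
-- In SQL*Plus, connected AS SYSDBA, create the missing schema first:
CREATE USER abcde IDENTIFIED BY abcde;
GRANT CONNECT, RESOURCE TO abcde;

-- Then re-run the import from the OS command line, mapping the
-- exported schema onto the one just created:
--   imp system/password file=c:\database.dmp fromuser=ABCDE touser=ABCDE log=implog.txt
```

Using `fromuser`/`touser` instead of `full=yes` also avoids re-importing SYSTEM's objects into SYS, which the log shows happening.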
