Monday, September 16, 2013

[how to] Question on populating from database

Question on populating from database

Posted: 16 Sep 2013 09:01 PM PDT

I am trying to populate a workout from a list in a database.

For example, the Bicep Curl machine has these sets and reps:

bicep curl machine  sets 1  reps 50
bicep curl machine  sets 2  reps 25
bicep curl machine  sets 3  reps 15
bicep curl machine  sets 4  reps 10
bicep curl machine  sets 1  reps 100

I want to randomly choose one of these rows and push it to my site. Right now there is a SetsRepsWeight table containing set_id, sets, reps, and weight.

Then there is a setMaster table containing machine_id and set_id, where machine_id comes from a table with all the machines available.

So if my SetsRepsWeight table looks like this:

(screenshot of the SetsRepsWeight table)

setMaster contains:

(screenshot of the setMaster table)

So bicep curl can be any of the set_ids 1-5. I want to randomly pick the row and populate it into the database, but I need a recommendation on setting this up. Right now the sets, reps, and weight are in an array. How can I achieve my goal?

Here is my code now:

$machine_atts = array(
  'bicep curl'        => array( 'sets' => 1, 'reps' => 50,  'weight' => 25 ),
  'cable chest press' => array( 'sets' => 1, 'reps' => 100, 'weight' => 40 ),
  'lat pulldown'      => array( 'sets' => 2, 'reps' => 25,  'weight' => 20 ),
  'tricep extension'  => array( 'sets' => 3, 'reps' => 25,  'weight' => 30 ),
);

$sql1 = "SELECT m1.machine_id, m2.*
         FROM userPlanDetail AS m1
         LEFT JOIN machines AS m2 ON m1.machine_id = m2.machine_id
         WHERE `user_id` = '$user_id1' AND `cardio` = 0";
$retval1 = mysql_query( $sql1, $conn );
$array = array();

$i = 0;
while ($row = mysql_fetch_array($retval1, MYSQL_ASSOC)) {
  $i++;
  $sets   = $machine_atts[$row['machine_name']]['sets'];
  $reps   = $machine_atts[$row['machine_name']]['reps'];
  $weight = $machine_atts[$row['machine_name']]['weight'];
  $array[] = '<tr><td width="50"><img src="client workout page/round_'.$i.'.jpg"></td>'
           . '<td><img src="'.$row["picture"].'" style="width:200px;height:200px;display:block;margin:20px auto;"/></td>'
           . '<td>'.$row['machine_name'].' <span class="blue">#'.$i.'</span><br>sets <span class="blue">'.$sets.'</span> reps <span class="blue">'.$reps.'</span><br>weight <span class="blue">'.$weight.'</span></td></tr>'
           . '<tr><td></td><td><a class="demo" href="'.$row["link"].'" rel="group1"><img src="client workout page/click to play video.jpg" style="width:186px;height:14px;margin:0 auto;"></a></td></tr>';
}
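One way to get the random row in SQL itself, rather than from the hardcoded PHP array, is to join setMaster to SetsRepsWeight and let the database pick one row at random. A runnable sketch (SQLite stands in for MySQL here, so RANDOM() replaces MySQL's RAND(); the schema and row values are guessed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SetsRepsWeight (set_id INTEGER PRIMARY KEY, sets INT, reps INT, weight INT);
CREATE TABLE setMaster (machine_id INT, set_id INT);
INSERT INTO SetsRepsWeight VALUES (1,1,50,25),(2,2,25,25),(3,3,15,25),(4,4,10,25),(5,1,100,25);
INSERT INTO setMaster VALUES (1,1),(1,2),(1,3),(1,4),(1,5);  -- machine 1 = bicep curl
""")

# pick one random set/reps/weight combination for a given machine
row = conn.execute("""
    SELECT s.sets, s.reps, s.weight
    FROM setMaster AS m
    JOIN SetsRepsWeight AS s ON s.set_id = m.set_id
    WHERE m.machine_id = ?
    ORDER BY RANDOM()
    LIMIT 1
""", (1,)).fetchone()
print(row)
```

In MySQL the same query would end with ORDER BY RAND() LIMIT 1. This sorts the whole candidate set, which is fine for a handful of rows per machine but worth rethinking for large tables.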

db2 database alias not listed via list database directory but still unable to create that database

Posted: 16 Sep 2013 08:42 PM PDT

I am trying to create a database by logging on, as the instance owner, to the Linux server on which the DB2 server is installed.

db2 => list database directory

 System Database Directory

 Number of entries in the directory = 1

Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/mylogin/sqllib
 Database release level               = f.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =

db2 => create database testdb2
SQL1005N  The database alias "TESTDB2" already exists in either the local
database directory or system database directory.

db2 => list database directory on /home/mylogin/sqllib

 Local Database Directory on /home/mylogin/sqllib

 Number of entries in the directory = 1

Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Database directory                   = SQL00001
 Database release level               = f.00
 Comment                              =
 Directory entry type                 = Home
 Catalog database partition number    = 0
 Database member number               = 0

db2 => quit

I also checked if there are any other instances that exist other than mine, but there aren't.

DB20000I  The QUIT command completed successfully.
-bash-4.1$ db2ilist
mylogin

While the alias testdb2 is not listed by list database directory, DB2 claims that this alias already exists in either the system database directory or the local database directory.

If this is true, how do I get the equivalent of list database directory that lists testdb2 as well?

db2 SQL0930N error on linux - not enough space to process this statement

Posted: 16 Sep 2013 08:51 PM PDT

This is another newbie question on DB2, related to space. How do I start diagnosing, and then solving, the problem so that users can connect to this database? This error appears both when connecting to an existing database and when creating a new one.

db2 => list database directory

 System Database Directory

 Number of entries in the directory = 1

Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/mylogin/sqllib
 Database release level               = f.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =

db2 => connect to sample
SQL0930N  There is not enough storage available to process the statement.
SQLSTATE=57011
db2 => !uname -a
Linux myhostname 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
db2 => create database newdb
SQL0930N  There is not enough storage available to process the statement.
SQLSTATE=57011
db2 =>

Check to see if my relation is bcnf

Posted: 16 Sep 2013 08:40 PM PDT

I have the original relation:

PET-AND-OWNER (PetName, PetType, PetBreed, PetDOB,
               OwnerLastName, OwnerFirstName, OwnerPhone, OwnerEmail)

And I have converted it to:

PET (PetName, OwnerEmail, PetType, PetBreed, PetDOB)
OWNER (OwnerEmail, OwnerLastName, OwnerFirstName, OwnerPhone)

With the referential integrity constraint: OwnerEmail in PET must exist in OwnerEmail in Owner.

Would this be correct?

How to get missing values in row order?

Posted: 16 Sep 2013 05:32 PM PDT

Many rows have been deleted from a table. How can I get the id of a missing row to use for the next INSERT?

For example

id      col1
1       1
3       3
4       4
5       5
8       8
9       9

How can I get the value of the first available id for the next INSERT? Here I want to get id = 2.

Something like

SELECT id+1 FROM table WHERE CLAUSE  // pointing to the least available value
// or x = id + 1 (after SELECT)
INSERT INTO table (id, ....) VALUES ('x', ....)
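A common way to find the first gap without looping is a self-join on id + 1: the smallest id whose successor is missing marks the start of a gap. A runnable sketch against the sample data above (SQLite used for illustration; the table name is shortened to t):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, col1 INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 1), (3, 3), (4, 4), (5, 5), (8, 8), (9, 9)])

# smallest id whose successor is missing, plus one
gap = conn.execute("""
    SELECT MIN(a.id) + 1
    FROM t AS a
    LEFT JOIN t AS b ON b.id = a.id + 1
    WHERE b.id IS NULL
""").fetchone()[0]
print(gap)  # → 2
```

Note this only finds a gap below MAX(id), and under concurrent inserts the id could be taken again before your INSERT runs, so reusing ids this way is usually best avoided.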

Probable circular dependency, but query still runs

Posted: 16 Sep 2013 04:19 PM PDT

I built this ER diagram in MySQL Workbench, and it will run and generate the database. My concern is that the relationship between employees, departments, positions, and employment_statuses is circular.

Because I know people will ask: this was something I was working on in a class, and I took a different route than the professor. I'm not asking for help with homework; I'm asking for clarification about database modelling theory.

(ER diagram screenshot)

Do regular ALTERs weaken the performance?

Posted: 16 Sep 2013 08:39 PM PDT

In a project, we need to frequently ALTER the tables to delete or add columns. I wonder whether this degrades database performance in the long term. If it has a considerable negative impact, what is the approach to keeping the database healthy and efficient?

I was thinking of the following approaches:

  1. OPTIMIZE the tables

  2. CREATE a new database with the final architecture, and INSERT the entire data

The database is MySQL 5.5 with InnoDB.

Table data transfer using SSIS and triggers on destination table

Posted: 16 Sep 2013 05:57 PM PDT

I'm a novice in DB operations. SQL Server 2008 R2 is the database I'm using.

I need to move selected rows from a staging database table to a production database table, using an SSIS package. I have a few triggers on the destination table, and I'm unsure whether these triggers will fire when I transfer the data from the source table to the destination table.

If these triggers won't fire while transferring data from the source to the destination table, is there anything that can be done to fire them during this data transfer?

Any information on the above would be greatly appreciated.

Update: Selected rows will be added to the destination table. Triggers will update data in some other table in the destination server based on newly inserted data. For example:

CREATE TRIGGER [MyTableChange]
    ON [dbo].[DestinationTable]
    AFTER INSERT, UPDATE
AS
BEGIN
    DECLARE @newid AS int
    SELECT @newid = newid FROM INSERTED

    DECLARE @total AS int
    DECLARE @lookatme AS int
    SELECT @total = total, @lookatme = lookatme
    FROM DestinationTable WHERE newid = @newid

    UPDATE anothertable SET total = total + @total WHERE lookwhere = @lookatme
END

GO

The trigger above is an example, not exactly the logic I have in the original trigger, but it shows what I'm trying to do: I'll be accessing the newly inserted row's details to update a value in another table. Please do not take it as correct in terms of syntax or style; I just tried to give the essence of the original trigger.

Why am I getting error 2950, "data type mismatch in criteria expression"? [migrated]

Posted: 16 Sep 2013 04:32 PM PDT

This is my code:

If IsNull([RECRUITMENT_NR]) Then
    Beep
End If

If Not IsNull([RECRUITMENT_NR]) Then
    OpenForm
        Form Name        frmHR_RECRUITMENT
        View             Form
        Where Condition  =="[RECRUITMENT_NR]=&[RECRUITMENT_NR]
        Data Mode
        Window Mode      Dialog

Please note we run Access 2010.

Error message: data type mismatch in criteria expression, error #2950.

Any suggestion is greatly appreciated!

Enforce a constraint problem

Posted: 16 Sep 2013 12:11 PM PDT

This schema is for a case management database. Work is created in a phase with a child first stage and progresses to an end stage, where it is closed.
Problem:

  • You can have more than one entry in STAGE where IS_START_STAGE = 1, which violates a business rule
  • You can insert a new entry into STAGE where IS_START_STAGE = 1 that does not match the corresponding entry in PHASE_FIRST_STAGE
  • The constraint should be something like: PHASE_FIRST_STAGE.STAGE_ID can only reference entries in STAGE where IS_START_STAGE = 1
  • Is there any way to enforce these business rules?

CREATE TABLE PHASE_FIRST_STAGE
(
  PHASE_ID           NUMBER(9)           NOT NULL, -- PRIMARY KEY
  STAGE_ID           NUMBER(9)           NOT NULL  -- FOREIGN KEY to STAGE table
);

ALTER TABLE PHASE_FIRST_STAGE ADD (
  CONSTRAINT PFS01 FOREIGN KEY (PHASE_ID) REFERENCES PHASE (ID),
  FOREIGN KEY (STAGE_ID) REFERENCES STAGE (ID));

COMMENT ON TABLE PHASE_FIRST_STAGE IS 'Contains the default first stages to enter when a phase is entered.';

CREATE TABLE STAGE
(
  ID               NUMBER(9)           NOT NULL, -- PRIMARY KEY
  PHASE_ID         NUMBER(9)           NOT NULL, -- FOREIGN KEY
  DISABLED         NUMBER(1)           DEFAULT 0 NOT NULL, -- CHECK IN (0,1)
  IS_START_STAGE   NUMBER(1),          -- CHECK IN (0,1)
  IS_END_STAGE     NUMBER(1)           -- CHECK IN (0,1)
);

COMMENT ON TABLE STAGE IS 'Contains all the stages a phase can have. Each stage must have only one phase.';
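For the first bullet (at most one start stage per phase), one well-known mechanism is a unique index that only covers the rows where IS_START_STAGE = 1. In Oracle this is typically a function-based unique index on CASE WHEN IS_START_STAGE = 1 THEN PHASE_ID END. A runnable sketch of the same rule, using SQLite's partial-index syntax for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE STAGE (
  ID             INTEGER PRIMARY KEY,
  PHASE_ID       INTEGER NOT NULL,
  IS_START_STAGE INTEGER,
  IS_END_STAGE   INTEGER
);
-- business rule: at most one start stage per phase
CREATE UNIQUE INDEX ONE_START_PER_PHASE
  ON STAGE (PHASE_ID) WHERE IS_START_STAGE = 1;
""")
conn.execute("INSERT INTO STAGE VALUES (1, 10, 1, 0)")      # first start stage: OK
try:
    conn.execute("INSERT INTO STAGE VALUES (2, 10, 1, 0)")  # second start stage
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # → True
```

The cross-table rule (PHASE_FIRST_STAGE.STAGE_ID must point at a start stage) cannot be expressed as a plain single-column foreign key; it usually needs either a composite foreign key onto a unique key that includes the flag column, or a trigger.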

Install MariaDB and MySQL in CentOS 6 32bit in same server

Posted: 16 Sep 2013 11:56 AM PDT

I'm trying to install MariaDB alongside MySQL on the same server, but I get these errors:

Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package MariaDB-client.i686 0:5.5.32-1 will be installed
--> Processing Dependency: MariaDB-common for package: MariaDB-client-5.5.32-1.i686
---> Package MariaDB-server.i686 0:5.5.32-1 will be obsoleting
---> Package mysql-server.i686 0:5.5.33-1.el6.remi will be obsoleted
--> Running transaction check
---> Package MariaDB-common.i686 0:5.5.32-1 will be installed
--> Processing Dependency: MariaDB-compat for package: MariaDB-common-5.5.32-1.i686
--> Running transaction check
---> Package MariaDB-compat.i686 0:5.5.32-1 will be installed
--> Processing Conflict: mysql-5.5.33-1.el6.remi.i686 conflicts MySQL
--> Finished Dependency Resolution
Error: mysql conflicts with MariaDB-server-5.5.32-1.i686
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

I read the docs here and here, but that doesn't help, since those guides are for standalone installations. How can I install both MariaDB and MySQL on the same server using yum?

Oracle: Truncate large table takes forever

Posted: 16 Sep 2013 01:29 PM PDT

Problem

  • I'm truncating a very large table (210+ million rows).
  • It's been hours since it started, and I have no idea when it will be done.
  • There are several tables like that which I want to truncate.
  • The database is Oracle 9i.

Question

  • What can I do to make the truncates faster?
  • Also, is there a way to see the progress of the truncate operation?

Why does max_connections in my.ini not match concurrent connections in Mysql instance config wizard?

Posted: 16 Sep 2013 10:58 AM PDT

MySQL 5.1.x | Windows Server 2003

Can someone please clarify why max_connections in my.ini is much larger than the value I specify manually in the Concurrent Connections dialogue?

For example, if I set concurrent connections to 800 in the dialogue window, I see max_connections=1023 in my.ini. Why? Is this normal?

Alter Table for Multiple Databases (same table name)

Posted: 16 Sep 2013 11:40 AM PDT

Is there a good way to ALTER a table (add a column) across multiple databases (think 50 or so), or is it pretty much manual for all of them?

The tables are all the same, and the column added will be the same as well.
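One common approach is to generate the script rather than type it fifty times, since the statement differs only in the database name. A small sketch in Python (the database, table, and column names here are made up for illustration; the real list could come from SHOW DATABASES or a config file):

```python
# Hypothetical database names; substitute the real 50-or-so.
databases = ["shop_01", "shop_02", "shop_03"]

# One identical ALTER per database, qualified with the database name.
statements = [
    f"ALTER TABLE `{db}`.`orders` ADD COLUMN `notes` VARCHAR(255) NULL;"
    for db in databases
]
script = "\n".join(statements)
print(script)
```

The resulting script can then be fed to the command-line client of whatever platform the databases run on (the question doesn't say which).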

Thanks!

Azure SQL VM Can't import dacpac from Azure SQL Database export

Posted: 16 Sep 2013 10:54 AM PDT

When attempting to set up an Infrastructure as a Service POC for our business, I am encountering problems importing the database exported from our Azure SQL Database implementation. I'm getting messages about attempts to DROP objects such as [PRIMARY]. I only get this in our application database, not in our error-logging database, which imported without error.

(screenshot of the import errors)

The database is created and all objects are there, but nearly every table differs in its constraint checking, of the following nature:

Export Source:

ALTER TABLE [USER_MANAGEMENT].[ACTION_PAGE_LK]
ADD CONSTRAINT [FK_ACTIONPAGELK_ICONGUID]
    FOREIGN KEY ([ICON_GUID])
    REFERENCES [USER_MANAGEMENT].[ICON_LK] ([ICON_GUID])

GO

Import Target:

ALTER TABLE [USER_MANAGEMENT].[ACTION_PAGE_LK] WITH NOCHECK
ADD CONSTRAINT [FK_ACTIONPAGELK_ICONGUID]
    FOREIGN KEY ([ICON_GUID])
    REFERENCES [USER_MANAGEMENT].[ICON_LK] ([ICON_GUID])

GO

ALTER TABLE [USER_MANAGEMENT].[ACTION_PAGE_LK] NOCHECK CONSTRAINT [FK_ACTIONPAGELK_ICONGUID]

GO

How to optimize a database for heavy I/O from updates (software and hardware)

Posted: 16 Sep 2013 02:03 PM PDT

The situation: I have a PostgreSQL 9.2 database which is quite heavily updated all the time. The system is hence I/O bound, and I'm currently considering making another upgrade. I just need some directions on where to start improving.

Here is a picture of how the situation looked the past 3 months:

(graph: disk utilization over the past 3 months)

As you can see, update operations accounts for most of the disk utilization. Here is another picture of how the situation looks in a more detailed 3 hour window:

(graph: a more detailed 3-hour window)

As you can see, the peak write rate is around 20MB/s

Software: the server is running Ubuntu 12.04 and PostgreSQL 9.2. The updates are small, typically on individual rows identified by ID, e.g. UPDATE cars SET price = some_price, updated_at = some_time_stamp WHERE id = some_id. I have removed and optimized indexes as much as I think is possible, and the server's configuration (both the Linux kernel and postgres conf) is pretty well optimized too.

Hardware: a dedicated server with 32GB of ECC RAM, 4x 600GB 15,000 rpm SAS disks in a RAID 10 array controlled by an LSI RAID controller with a BBU, and an Intel Xeon E3-1245 quad-core processor.

Questions

  • Is the performance seen in the graphs reasonable for a system of this caliber (reads/writes)?
  • Should I therefore focus on a hardware upgrade, or investigate deeper into the software (kernel tweaking, configs, queries, etc.)?
  • If doing a hardware upgrade, is the number of disks key to performance?

Update on cascade

Posted: 16 Sep 2013 11:31 AM PDT

I have two tables:

  • service (id_service, libel, status)
  • agent (id_agent, name, surname, id_service, status).

The status field present in both tables has only two values, 0 or 1 (0 = inactive and 1 = active).

I want that, when the status field of the service table is updated (i.e. set to 0), the status field of the agent table is automatically set to 0 as well.

Why SMO's Transfer tries to login into linked server when scripting?

Posted: 16 Sep 2013 12:28 PM PDT

Consider this code sample:

var sourceDatabaseConnectionString = "Data Source=MyServer;Initial Catalog=MyDatabase;User Id=MyLogin;Password=MyPassword";
var serverConnection = new ServerConnection
{
    ConnectionString = sourceDatabaseConnectionString
};

var server = new Server(serverConnection);
var builder = new SqlConnectionStringBuilder(sourceDatabaseConnectionString);
var database = server.Databases[builder.InitialCatalog];
var transfer = new Transfer(database);

// setting transfer options
transfer.CopyAllObjects = true;
transfer.CopyAllSynonyms = true;
transfer.CopyData = false;

transfer.Options.WithDependencies = true;
transfer.Options.DriAll = true;
transfer.Options.Triggers = true;
transfer.Options.Indexes = true;
transfer.Options.SchemaQualifyForeignKeysReferences = true;
transfer.Options.ExtendedProperties = true;
transfer.Options.IncludeDatabaseRoleMemberships = true;
transfer.Options.Permissions = true;
transfer.PreserveDbo = true;

// scripting database
var script = transfer.ScriptTransfer();

It is meant to produce a schema-only ("blank") script from an existing database.

Some objects in this database depend on objects located on linked servers. When this code tries to script such objects, ScriptTransfer fails with "Login failed for user 'MyLogin'".

Log analysis shows that the error happens when MyLogin tries to log into the linked server. But why is this needed? After all, I just want a bunch of CREATE ... scripts for MyDatabase; I don't want to read any data from the linked server.

My SQL Server and SMO version is 2012 SP1.

Sum of different values on the same row in a table

Posted: 16 Sep 2013 08:48 PM PDT

I have a table with numbers from 0-5 under different field names (Score1, Score2, etc., all the way to Score50) that need to be summed in the last column of the same row. Each row represents a different client with its own info.

Is it possible? If so, some help would be nice.
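With fifty separate columns there is no aggregate shortcut: the total is simply Score1 + Score2 + ... + Score50. A small sketch that builds the repetitive expression instead of typing it by hand (the table name and key column are hypothetical; only the ScoreN names come from the question):

```python
# Build "COALESCE(Score1, 0) + ... + COALESCE(Score50, 0)".
# COALESCE guards against a single NULL turning the whole sum into NULL.
cols = [f"Score{i}" for i in range(1, 51)]
expr = " + ".join(f"COALESCE({c}, 0)" for c in cols)
query = f"SELECT ClientId, {expr} AS TotalScore FROM Scores;"
print(query)
```

If this table is redesigned later, one row per (client, score) would let a plain SUM(...) GROUP BY do this instead.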

sql server log shipping: how to monitor that the replica is fresh? (e.g. that log shipping is working)

Posted: 16 Sep 2013 12:18 PM PDT

I want a monitor on my replica (destination) server to message me:

a) once a day, telling me: "Data as of is ### minutes old"

b) any time data is older than N minutes (indicating log shipping has gone down)

I typically have tlogs moving every 5 minutes, on a very active database.

Thank you!

SQL1641N error on linux while running db2start using db2 express c on linux (LUW) but the file system is not mounted using nosuid

Posted: 16 Sep 2013 11:35 AM PDT

I am trying to install and start DB2 on my Linux machine using the following script:

#!/usr/bin/env bash

MKDIR=/bin/mkdir
RM=/bin/rm
TAR=/bin/tar

DB2_SERVER_PROGRAM_PATH=/home/alok/Documents/Installers
DB2_SERVER_PROGRAM=v10.5_linuxx64_expc.tar.gz

echo "changing to $DB2_SERVER_PROGRAM_PATH"
cd ${DB2_SERVER_PROGRAM_PATH}
echo "removing expc"
${RM} -rf expc
echo "untarring ${DB2_SERVER_PROGRAM}"
${TAR} zxf ${DB2_SERVER_PROGRAM}

echo "changing to expc"
cd expc

echo "Installing db2 as root"
sudo ./db2_install

echo "Adding user group db2grp1 as root"
sudo groupadd db2grp1
echo "Adding user group dasadm1 as root"
sudo groupadd dasadm1
echo "Adding user group db2fgrp1 as root"
sudo groupadd db2fgrp1

echo "Adding user db2inst1. Please provide password for new user db2inst1."
sudo useradd -g db2grp1 -G dasadm1 -m db2inst1
sudo passwd db2inst1

echo "Adding user dasusr1. Please provide password for new user dasusr1."
sudo useradd -g dasadm1 -G db2grp1 -m dasusr1
sudo passwd dasusr1

echo "Adding user db2fenc1. Please provide password for new user db2fenc1."
sudo useradd -g db2fgrp1 -m db2fenc1
sudo passwd db2fenc1

cd /opt/ibm/db2/V10.5/instance

echo "Creating DAS as root via user dasusr1."
sudo ./dascrt -u dasusr1

echo "Creating database instance as root via user db2fenc1 and db2inst1."
sudo ./db2icrt -u db2fenc1 db2inst1

echo "Setting communication protocol to tcpip as user db2inst1."
sudo -u db2inst1 ../adm/db2set DB2COMM=tcpip
echo "Setting SVCENAME 50000 for dbm cfg as user db2inst1."
sudo -u db2inst1 ../bin/db2 update dbm cfg using SVCENAME 50000

echo "starting db2 as user db2inst1."
sudo chmod +x ../adm/db2start
sudo -u db2inst1 ../adm/db2start

netstat -an | grep 50000

Everything works fine until I get to the penultimate line:

sudo -u db2inst1 ../adm/db2start

SQL1641N  The db2start command failed because one or more DB2 database
manager program files was prevented from executing with root
privileges by file system mount settings.

Now I checked /etc/fstab and it says:

proc            /proc           proc    nodev,noexec,nosuid 0       0
/dev/sda1       /               ext4    errors=remount-ro,user_xattr 0       1
UUID=d4d794eb-1e68-4ddf-a9ec-249a22f084fb none            swap    sw             0       0
/swapfile swap swap defaults 0 0

My installation path, /opt/ibm/db2/V10.5/adm is basically on /dev/sda1 and it's not mounted using the nosuid option.

Here is the output of mount:

alok@hostname:~$ mount
/dev/sda1 on / type ext4 (rw,errors=remount-ro,user_xattr)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
shmfs on /dev/shm type tmpfs (rw,size=2048m)
gvfs-fuse-daemon on /home/alok/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=alok)

This again indicates that / is not mounted nosuid. So why am I getting error SQL1641N? Is a different problem producing this message instead?

Improve speed of index rebuild on SQL server

Posted: 16 Sep 2013 08:50 PM PDT

I'm importing a large amount of data into an empty database, and before starting I disabled all non-unique non-clustered indexes to see if I could improve the performance of the import.

Now I want to re-enable the indexes, and I'm wondering if there is anything that I can do to optimize this.

There are > 100 tables and almost 2,000 indexes to be rebuilt. The database is 200GB in size.

The key section of the script I'm running is this:

declare c_toggle_index cursor FORWARD_ONLY READ_ONLY for
    select 'alter index ' + QUOTENAME(i.name) + ' on ' + o.name + ' rebuild'
    from sys.indexes as i
    Inner Join sys.objects o
        On o.object_id = i.object_id
    Where o.is_ms_shipped = 0
      And i.index_id >= 1
      and i.type > 1
      and i.is_disabled = 1

I considered setting ONLINE=OFF for the ALTER INDEX statement, but since the indexes start out disabled I wasn't sure that setting would have any effect. I also considered SORT_IN_TEMPDB = ON, but as the tempdb files are on the same drive as the databases' .mdf files, I assumed there was no benefit to that either.

Whilst running the rebuild script I have noticed that I have a lot of CXPACKET wait types. I don't really understand why that would be or if it's a problem that I should be looking to address.

One final point that may be relevant: my entire server is currently inactive other than this import of data into the database. There is no other user activity to consider or worry about; my only concern is importing the data into the database in the shortest possible time.

How to loop through AS400 table with cursor

Posted: 16 Sep 2013 12:31 PM PDT

I have an AS400 table that contains multiple rows per person. Each row holds data about one transaction by that person, including points assigned to the transaction. I read through all rows for one person, ordered by points. The first row, with the highest points, I leave alone; on subsequent rows the point values are cut in half and I do an update. I am currently using a cursor to open the table and a loop to read through the rows. I am getting a -508 error stating:

An UPDATE or DELETE statement with a WHERE CURRENT OF CL was attempted, but the cursor is not positioned on a row or is positioned on a row, but the row is not locked because a COMMIT HOLD or ROLLBACK HOLD statement released the lock on the row. A FETCH statement must be issued to position the cursor on a row and lock the row. Recovery . . . : Issue a FETCH statement to position the cursor on a row and lock the row; then, try the request again.

Part of my code is below:

DECLARE V_LNAME CHAR ( 30 ) ;
DECLARE V_LNAMEHOLD CHAR ( 30 ) ;
DECLARE V_FNAME CHAR ( 15 ) ;
DECLARE V_FNAMEHOLD CHAR ( 15 ) ;
DECLARE V_DOB DATE ;
DECLARE V_DOBHOLD DATE ;
DECLARE V_TRANSNMBR CHAR ( 9 ) ;
DECLARE V_TRANSNMBRHOLD CHAR ( 9 ) ;
DECLARE V_POINTS NUMERIC ( 5 ) ;
DECLARE V_POINTSHOLD NUMERIC ( 5 ) ;
DECLARE V_POINTSEQ NUMERIC ( 5 ) ;
DECLARE FIRSTRECORD CHAR ( 1 ) ;
DECLARE CL CURSOR FOR
  SELECT LNAME , FNAME , DOB , TRANSNCNMBR , TOPOINTS
  FROM DB_TRANSDATA
  ORDER BY LNAME ASC , FNAME ASC , DOB ASC , TOPOINTS DESC ;
DECLARE CLHLD CURSOR FOR
  SELECT LNAME , FNAME , DOB , TRANSNCNMBR , TOPOINTS
  FROM DB_TRANSDATA
  ORDER BY LNAME ASC , FNAME ASC , DOB ASC , TOPOINTS DESC ;

OPEN CLHLD ;
FETCH CLHLD INTO V_LNAMEHOLD , V_FNAMEHOLD , V_DOBHOLD , V_TRANSNMBRHOLD ;
CLOSE CLHLD ;

OPEN CL ;
SET FIRSTRECORD = 'Y' ;
LOOP
  FETCH CL INTO V_LNAME , V_FNAME , V_DOB , V_TRANSNMBR , V_POINTS , V_POINTSEQ ;
  IF TRIM ( CHAR ( V_LNAME ) ) = TRIM ( CHAR ( V_LNAMEHOLD ) )
     AND TRIM ( CHAR ( V_FNAME ) ) = TRIM ( CHAR ( V_FNAMEHOLD ) )
     AND V_DOB = V_DOBHOLD
     AND V_TRANSNMBR = V_TRANSNMBRHOLD
     AND FIRSTRECORD = 'N' THEN
    SET V_POINTSEQ = V_POINTS * .5 ;

    UPDATE DB_TRANSDATA
      SET POINTSEQ = V_POINTSEQ
      WHERE CURRENT OF CL ;

    SET V_LNAMEHOLD = V_LNAME ;
    SET V_FNAMEHOLD = V_FNAME ;
    SET V_DOBHOLD = V_DOB ;
    SET V_TRANSNMBRHOLD = V_TRANSNMBR ;
  ELSE
    UPDATE DB_TRANSDATA
      SET POINTSEQ = V_POINTS
      WHERE CURRENT OF CL ;

    SET V_LNAMEHOLD = V_LNAME ;
    SET V_FNAMEHOLD = V_FNAME ;
    SET V_DOBHOLD = V_DOB ;
    SET V_TRANSNMBRHOLD = V_TRANSNMBR ;
    SET FIRSTRECORD = 'N' ;
  END IF ;

END LOOP ;

CLOSE CL ;

END ;
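Setting the cursor error aside, the intended per-person logic (keep the highest-points row untouched, halve the points on the rest) can be sketched in plain Python to make the control flow clear; the names and values below are made up, and the keying follows the question's ORDER BY columns:

```python
# Hypothetical sample: three transactions for the same person.
rows = [
    {"lname": "Doe", "fname": "Jo", "transnmbr": "1", "points": 100},
    {"lname": "Doe", "fname": "Jo", "transnmbr": "1", "points": 80},
    {"lname": "Doe", "fname": "Jo", "transnmbr": "1", "points": 60},
]
# Same ordering as the cursor: by person ascending, points descending.
rows.sort(key=lambda r: (r["lname"], r["fname"], -r["points"]))

prev_key = None
for r in rows:
    key = (r["lname"], r["fname"], r["transnmbr"])
    # first (highest-points) row per person keeps its points; later rows are halved
    r["pointseq"] = r["points"] if key != prev_key else r["points"] // 2
    prev_key = key

print([r["pointseq"] for r in rows])  # → [100, 40, 30]
```

In SQL this same effect can often be had set-based with ROW_NUMBER() OVER (PARTITION BY person ORDER BY points DESC), halving where the row number is greater than 1, which avoids the positioned-update problem entirely.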

Bulk insert with format file: 0 rows affected

Posted: 16 Sep 2013 12:21 PM PDT

When using the BULK INSERT command in SQL Server 2008 it returns:

(0 row(s) affected)  

I am using this command to carry out the bulk insert:

BULK INSERT Test
    FROM 'C:\DataFiles\Tests.dat'
    WITH (FORMATFILE = 'C:\DataFiles\FormatFiles\TestFormat.Fmt');
GO

Tests.dat contains:

b00d23fe-580e-42dc-abd4-e8a054395126,48dd5dd6e3a144f7a817f234dd51469c,452eb8ce-6ae2-4e7a-a389-1097882c83ab,,, ,,,,Aria,,,160,,,86400,,2004-04-03 23:23:00.000,,2012-07-06 13:26:31.633,2012-07-06 13:27:44.650,3,,,,51B7A831-4731-4E2E-ACEC-06636ADC7AD3,,0,,0,,Field Name 1,,Field Name 2,,Field Name 3,,Field Name 4,

and the format file TestFormat.fmt contains:

9.0
39
1       SQLCHAR       0       37      ","      1     Key                       ""
2       SQLCHAR       0       37      ","      2     TestType                  ""
3       SQLCHAR       0       37      ","      3     CaseKey                   ""
4       SQLCHAR       0       30      ","      4     Height                    ""
5       SQLCHAR       0       30      ","      5     Weight                    ""
6       SQLCHAR       0       128     ","      6     PacemakerType             Latin1_General_CI_AI
7       SQLCHAR       0       0       ","      7     Diary                     Latin1_General_CI_AI
8       SQLCHAR       0       0       ","      8     Indication                Latin1_General_CI_AI
9       SQLCHAR       0       0       ","      9     Medication                Latin1_General_CI_AI
10      SQLCHAR       0       37      ","      10    RecorderType              ""
11      SQLCHAR       0       100     ","      11    RecorderSerial            Latin1_General_CI_AI
12      SQLCHAR       0       0       ","      12    Comments                  Latin1_General_CI_AI
13      SQLCHAR       0       12      ","      13    Status                    ""
14      SQLCHAR       0       0       ","      14    AdditionalData            Latin1_General_CI_AI
15      SQLCHAR       0       37      ","      15    OrderKey                  ""
16      SQLCHAR       0       12      ","      16    Duration                  ""
17      SQLCHAR       0       12      ","      17    Age                       ""
18      SQLCHAR       0       24      ","      18    RecordingStartDateTime    ""
19      SQLCHAR       0       128     ","      19    Ward                      Latin1_General_CI_AI
20      SQLCHAR       0       24      ","      20    CreatedDateTime           ""
21      SQLCHAR       0       24      ","      21    UpdatedDateTime           ""
22      SQLCHAR       0       21      ","      22    UserGroupBits             ""
23      SQLCHAR       0       24      ","      23    LastArchive               ""
24      SQLCHAR       0       128     ","      24    PointOfCare               Latin1_General_CI_AI
25      SQLCHAR       0       128     ","      25    Bed                       Latin1_General_CI_AI
26      SQLCHAR       0       37      ","      26    DownloadFacilityKey       ""
27      SQLCHAR       0       37      ","      27    AnalysisFacilityKey       ""
28      SQLCHAR       0       12      ","      28    Priority                  ""
29      SQLCHAR       0       37      ","      29    FacilityKey               ""
30      SQLCHAR       0       12      ","      30    PacemakerTypeStandard     ""
31      SQLCHAR       0       128     ","      31    TestTypeName              Latin1_General_CI_AI
32      SQLCHAR       0       128     ","      32    UserDefined1Name          Latin1_General_CI_AI
33      SQLCHAR       0       128     ","      33    UserDefined1Value         Latin1_General_CI_AI
34      SQLCHAR       0       128     ","      34    UserDefined2Name          Latin1_General_CI_AI
35      SQLCHAR       0       128     ","      35    UserDefined2Value         Latin1_General_CI_AI
36      SQLCHAR       0       128     ","      36    UserDefined3Name          Latin1_General_CI_AI
37      SQLCHAR       0       128     ","      37    UserDefined3Value         Latin1_General_CI_AI
38      SQLCHAR       0       128     ","      38    UserDefined4Name          Latin1_General_CI_AI
39      SQLCHAR       0       128     "\r\n"   39    UserDefined4Value         Latin1_General_CI_AI

I cannot figure out why this isn't working. Other people have had similar problems because they had more fields in the format file than actual columns in their table, or because they were using .csv files, which apparently are not supported.

This works fine on every other table in the database I am importing, with no errors, so I can't understand why it doesn't work here.

What are Measures and Dimensions in Cubes

Posted: 16 Sep 2013 01:17 PM PDT

I'm very new to Microsoft SQL Server Business Intelligence and Analysis Services (though I've been programming with SQL Server for years). Can anyone describe measures and dimensions in cubes in simple words (if possible, with images)?

Thanks

MySQL Full Text search increase relevance for exact matches?

Posted: 16 Sep 2013 01:21 PM PDT

I have a MySQL database of ~10,000 organisation names that I want to be able to search. I would like to use a full text search because this would enable me to find "institute of doobry" by searching "doobry institute" etc.

The problem I have is that I have lots of entries like "institute of doobry canteen" and "institute of doobry alumni association".

MATCH (names) AGAINST ("doobry institute")  

will return all of these records and will not score "institute of doobry" higher than the canteen.

I'd sort of like to say: score it higher if the word count is similar.

Is there any way to achieve this?
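One possibility (just a sketch, assuming the table is called `organisations` and a two-word query; both names are placeholders) is to compute the full-text relevance and an approximate word count per row, then penalise rows whose word count differs from the query's:

```sql
-- Hypothetical table/column names. Word count is approximated as
-- spaces + 1, which assumes names are single-space separated.
SELECT names,
       MATCH (names) AGAINST ('doobry institute') AS relevance,
       LENGTH(names) - LENGTH(REPLACE(names, ' ', '')) + 1 AS word_count
FROM   organisations
WHERE  MATCH (names) AGAINST ('doobry institute')
ORDER  BY relevance / (1 + ABS(word_count - 2)) DESC;
```

The divisor is ad hoc and worth tuning. Note also that MySQL's default full-text settings ignore words shorter than `ft_min_word_len` (4) and common stopwords such as "of", which works in your favour here.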

Registry error while trying to install Oracle 11g on Win 2008 R2

Posted: 16 Sep 2013 03:57 PM PDT

I am new to Oracle and am trying to install Oracle Database 11g Enterprise Edition on Windows Server 2008 R2, but I keep getting this error...

It gives me an error when updating the registry key HKEY_LOCAL_MACHINE/software/oracle/OracleMTSRecoveryServiceProtid_0.

It says: The specified home name was not found in the registry.

Could anyone please help me resolve this?

Replication on MySQL server

Posted: 16 Sep 2013 04:21 PM PDT

I had to stop the slave server to test something. After I started the server again, there is a problem with replication on the MySQL server.

On the problematic server

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: servera
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: servera-bin.000024
          Read_Master_Log_Pos: 808459481
               Relay_Log_File: serverb-relay-bin.000071
                Relay_Log_Pos: 4
        Relay_Master_Log_File: servera-bin.000024
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 808459481
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1129
                Last_IO_Error: error connecting to master 'replica@servera:3306' - retry-time: 60  retries: 86400
               Last_SQL_Errno: 0
               Last_SQL_Error:

on the problematic server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | serverb   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

on the main server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | servera   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

According to what I've read, I need to execute the following command on the main server:

mysql> FLUSH HOSTS;   

What will happen then? If an application is connected to the server, will it be disconnected as well?
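For context, IO error 1129 ("Host is blocked because of many connection errors") means the master's host cache has blacklisted the slave's host. FLUSH HOSTS only empties that cache; it does not terminate established connections, so applications already connected stay connected. A minimal sketch on the main server (the threshold value is only illustrative):

```sql
-- On the master (servera): clear the blocked-host cache.
-- Existing client connections are not affected.
FLUSH HOSTS;

-- Optionally raise the blocking threshold so the host is not
-- blocked again after transient connection failures.
SET GLOBAL max_connect_errors = 10000;
```

After that, `START SLAVE;` on the slave should let the IO thread reconnect.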

Merge Replication identity field issues

Posted: 16 Sep 2013 03:21 PM PDT

One of our clients is using our software with merge replication of a database on a SQL Server 2008 R2 machine. There are two production environments in separate geographical locations, only one of which is live at any one time, so basically one live and one on standby. Only the live database is updated by the applications. Every couple of months they fail over between the datacentres and the standby environment becomes the live one. There is an instance of SQL Server 2008 in each datacentre, and merge replication is used to keep them in sync. This was all working fine until the beginning of the year, when we started getting replication errors on some lTID columns in various tables that have the Identity property set.

The errors were like this one:

The insert failed. It conflicted with an identity range check constraint in database 'GateMain', replicated table 'dbo.tGateCalcsLog', column 'lTID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Then, after the last failover, we noticed an issue with the lTID values in one specific table. Our application relies on the lTID values always incrementing in order, such that the highest lTID value is always the newest entry in the table. We've found that, due to how the identity ranges are managed by replication, after a failover the lTID range of the now-live database server may contain values that are lower than those already present in the table. Is there a way to manage this in merge replication so we can guarantee that the next identity value allocated to the lTID column is greater than any lTID currently in the table? Or do we need to use a different type of replication, or possibly mirroring?
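As a diagnostic sketch (the table name is taken from the error message above; the publication name is a placeholder), you can compare the identity seed the now-live server would use next with what is already in the table:

```sql
-- Run on the server that just became live.
SELECT IDENT_CURRENT('dbo.tGateCalcsLog') AS next_seed_base,
       MAX(lTID)                          AS max_existing
FROM   dbo.tGateCalcsLog;

-- If next_seed_base is below max_existing, the supported route is to
-- let replication reallocate the range on the publisher:
-- EXEC sp_adjustpublisheridentityrange @publication = N'YourPublication';
```

Manually reseeding with DBCC CHECKIDENT can collide with the identity-range check constraints that merge replication maintains, which is why the stored procedure quoted in the error message is the safer option.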

MySQL Slaves lag behind master

Posted: 16 Sep 2013 05:21 PM PDT

I have one master and four slaves. Sometimes all my slaves lag behind the master. I have implemented the heartbeat for monitoring replication lag. Now I am trying to find why the slaves are lagging behind the master.

I saw the slow queries (for today) on the master and I found that the slowest query (DML) was taking 138 seconds. But the slaves were lagging about 1400 seconds and there were also no slow queries on the slaves for DML (update, delete, insert, etc.).

Points to be taken into consideration:

  1. All tables are InnoDB.
  2. 68 GB of RAM (Master as well as slaves).
  3. Data size about 1 TB.
  4. Master and slaves have been running for a long time.

What may be the reason for the lag?
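One thing worth checking (a sketch, not a diagnosis) is whether the lag sits in the IO thread or the SQL thread. Before MySQL 5.6 the slave applies the binary log with a single SQL thread, so writes that ran concurrently on the master are serialised on the slave, and no individual statement needs to be slow for lag to build up:

```sql
-- On a lagging slave:
SHOW SLAVE STATUS\G
-- If Read_Master_Log_Pos keeps pace with the master's position
-- (compare against SHOW MASTER STATUS run on the master) while
-- Exec_Master_Log_Pos trails far behind, the single SQL thread
-- is the bottleneck rather than the network or the IO thread.
```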
