Saturday, April 6, 2013

[how to] Doubt about conceptual tables in Access

Doubt about conceptual tables in Access

Posted: 06 Apr 2013 08:32 PM PDT

I have a conceptual question: I have a table Orçamentos (Budgets) and a detail table detOrçamentos, where the first is the parent of the second, i.e. a one-to-many relationship. I would like to generate a Contratos (Contracts) table with detContratos, so that a contract is exactly a budget that has been turned into an agreement. My question: is it better to create a separate Contracts table and copy the budget into it, or can I keep the budget row and simply flag it as a contract? I ask this as a matter of good practice and future maintenance. One detail: nothing in a contract can be added or deleted, only changed, and only with user permission. What do you recommend?
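For what it's worth, a minimal sketch of the two options in generic SQL (all names are hypothetical, loosely following the Portuguese names in the question):

-- Option 1: one table; a status flag marks a budget that became a contract.
CREATE TABLE Orcamentos (
    OrcamentoID INT PRIMARY KEY,
    Cliente     VARCHAR(80),
    Situacao    CHAR(1) NOT NULL  -- 'O' = budget, 'C' = contract
);

-- Option 2: a separate contracts table that points back at its source budget.
CREATE TABLE Contratos (
    ContratoID   INT PRIMARY KEY,
    OrcamentoID  INT NOT NULL REFERENCES Orcamentos (OrcamentoID),
    DataContrato DATE
);

Option 2 tends to make the "no inserts or deletes on contracts" rule easier to enforce, since permissions can be set on a single table.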

Can't change Oracle sqlnet.ora at runtime in the client program?

Posted: 06 Apr 2013 08:08 PM PDT

I am trying to set up Kerberos authentication for my Oracle database, and I have a situation where I need to maintain a different version of sqlnet.ora for each Kerberos user connection. My application uses the OCI driver (a single Unix process makes more than one user connection to the same Oracle database), and I am not able to switch sqlnet.ora between user connections with a programmatic setenv(TNS_ADMIN) call. I am assuming that once the contents of sqlnet.ora are loaded into the process, they remain unchanged for the rest of its life.

Is my case a practical one? Has anyone else encountered it?

When it comes to Kerberos, I am not sure why the Oracle client restricts sqlnet.ora to one Kerberos principal. Maybe Oracle does not have clean support for Kerberos, as I don't see any OCI APIs for it?

Or can I just merge all the keytab files into one and maintain a single version of sqlnet.ora? How would my connection string change from providing only the tns_alias name? Should I also provide the username once I merge all the keytab files? Please help.

Object-oriented programming CRUD design pattern

Posted: 06 Apr 2013 07:54 PM PDT

At first I posted this question at http://cstheory.stackexchange.com/questions/17170/object-oriented-programming-crud-design-pattern - that's because I think it's a subjective question that cannot be answered with "yes", "no", or a single answer.

I don't know if this is the better place for the question, but here we go. I have a common question for DB designers - imagine the following SELECT:

SELECT
    P.id,
    P.legalName,
    P.type,
    L.tradeName,
    L.foundation,
    L.EIN,
    N.SSN,
    N.gender,
    N.birthDay,
    A.zipCode,
    A.address,
    A.number,
    A.district,
    A.city,
    A.state
FROM
    Person AS P
LEFT JOIN
    LegalEntity AS L ON L.person = P.id AND P.type = 'L'
LEFT JOIN
    NaturalPerson AS N ON N.person = P.id AND P.type = 'N'
LEFT JOIN
    Address AS A ON A.person = P.id AND A.legalAddress = 1

The SELECT above has a relatively high maintenance cost, for the following reasons:

  • CREATE - When a Person is inserted in the database, the next insert depends on Person.type (L => Legal or N => Natural), which determines whether a new record is created in LegalEntity or NaturalPerson with the last inserted Person id as the foreign key.
  • READ - To keep the data consistent, the JOINs carry the additional condition "AND P.type = ..." for LegalEntity and NaturalPerson, so that only the data matching the Person.type flag is returned. For example, if the Person is a LegalEntity it has an EIN (Employer Identification Number, or Federal Tax Id); otherwise it is a NaturalPerson and has an SSN (Social Security Number).
  • UPDATE - When you perform an update you need to test which specialization exists. This is because a record could originally have been created with type = N (Natural) and later need to be upgraded to corporate (L = Legal). An INSERT created the original NaturalPerson, and on the change of type it is necessary to delete the NaturalPerson record (logically or physically) and then create the new LegalEntity record - see the sketch after this list.
  • DELETE - no additional complexity.
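As a hedged illustration of the UPDATE case above (the id and the values are invented), the type switch is typically wrapped in a single transaction so the row never sits in an inconsistent state:

START TRANSACTION;

-- Hypothetical person 42 changes from Natural ('N') to Legal ('L').
DELETE FROM NaturalPerson WHERE person = 42;

INSERT INTO LegalEntity (person, tradeName, foundation, EIN)
VALUES (42, 'Acme Ltda', '2010-05-01', '12-3456789');

UPDATE Person SET type = 'L' WHERE id = 42;

COMMIT;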

Imagine that all of this complexity is genuinely necessary. Now consider that it needs to run on three different DBMSs (MySQL, Oracle and Postgres) - the quoted example was created in MySQL.

  • What are the best practices we can adopt in this scenario?
  • Should data maintenance be assisted by triggers on all three DBMSs?
  • Or should we build a CRUD layer with the appropriate handling implemented in the application?

Thank you in advance for your time.

Reinsert data in new table with new IDs

Posted: 06 Apr 2013 02:30 PM PDT

I have the following two tables:

buynsell

CREATE TABLE `buynsell` (
    `id` VARCHAR(20) NULL DEFAULT NULL,
    `msg` VARCHAR(255) NULL DEFAULT NULL,
    `date` VARCHAR(25) NULL DEFAULT NULL,
    `fid` BIGINT(20) NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`fid`),
    INDEX `fid_index` (`fid`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM
AUTO_INCREMENT=6523;

bsmsgs

CREATE TABLE `bsmsgs` (
    `id` VARCHAR(20) NULL DEFAULT NULL,
    `msg` MEDIUMTEXT NULL,
    `date` VARCHAR(25) NULL DEFAULT NULL,
    `subfid` BIGINT(20) NULL DEFAULT NULL,
    `fid` BIGINT(20) NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`fid`),
    INDEX `fid_subfid_index` (`fid`, `subfid`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM
AUTO_INCREMENT=11408;

The field bsmsgs.subfid references buynsell.fid. I was fed up with these tables, and hence created the following two new ones (the engine is still MyISAM):

buynsell

CREATE TABLE `buynsell` (
    `id` INT(10) NOT NULL AUTO_INCREMENT,
    `type` ENUM('B','S','D','H','T') NOT NULL DEFAULT 'B',
    `msg` VARCHAR(250) NOT NULL,
    `nick` VARCHAR(32) NOT NULL,
    `dated` DATETIME NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE INDEX `msg_nick` (`msg`, `nick`),
    INDEX `type` (`type`),
    FULLTEXT INDEX `msg` (`msg`)
)
COMMENT='The table storing information about users selling/buying/hiring things etc.'
COLLATE='utf8_general_ci'
ENGINE=MyISAM;

replies

CREATE TABLE `replies` (
    `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    `bns_id` INT(10) UNSIGNED NOT NULL,
    `msg` VARCHAR(250) NOT NULL,
    `nick` VARCHAR(32) NOT NULL,
    `dated` DATETIME NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE INDEX `bns_id_msg` (`bns_id`, `msg`),
    INDEX `bns_id` (`bns_id`),
    FULLTEXT INDEX `msg` (`msg`)
)
COMMENT='Replies to buynsell messages from interested users.'
COLLATE='utf8_general_ci'
ENGINE=MyISAM;

Now, my problem is that even though the AUTO_INCREMENT values are so high, there are actually fewer than 2000 entries in the buynsell table and around 8000 entries in the bsmsgs table.

I could reuse the same entries from the previous tables in my new ones, but I want an elegant design and hence would like to insert them from scratch.

I can reinsert the data from the old buynsell by skipping its AUTO_INCREMENT key fid and generating new rows with ids running from 1 to about 2000.

What I am having trouble with is mapping the subfid references in bsmsgs to the new id values in buynsell while correctly storing them in the bns_id column of the replies table.

NOTE

If this cannot be achieved in pure MySQL, I can use LuaSQL for the task.
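A hedged sketch of the remapping, with heavy assumptions: the old tables live on in a separate schema old_db (the names collide otherwise), the old id column holds the nick, and the old varchar date parses with the format shown:

-- 1) Copy the parent rows; AUTO_INCREMENT assigns the new ids in fid order.
INSERT INTO buynsell (msg, nick, dated)
SELECT o.msg, o.id, STR_TO_DATE(o.date, '%Y-%m-%d %H:%i:%s')
FROM old_db.buynsell AS o
ORDER BY o.fid;

-- 2) Carry the replies across, translating subfid -> new id through the
--    natural key msg + nick (the UNIQUE key on the new table makes this
--    deterministic, provided the old data has no duplicates).
INSERT INTO replies (bns_id, msg, nick, dated)
SELECT n.id, r.msg, r.id, STR_TO_DATE(r.date, '%Y-%m-%d %H:%i:%s')
FROM old_db.bsmsgs AS r
JOIN old_db.buynsell AS o ON o.fid = r.subfid
JOIN buynsell AS n ON n.msg = o.msg AND n.nick = o.id;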

How to move folder of MS SQL Server LocalDB instances from the default location?

Posted: 06 Apr 2013 02:07 PM PDT

I get multiple errors with LocalDB (a startup error, being unable to create a DB from SQL Server Management Studio, and multiple issues when trying to restore a backup), and none of the fixes I found help at all. It seems to me that all the errors have their root in permissions not being set up correctly.

A simple DDL table creation script now works fine from SQL Server Management Studio and creates the DB in the root of my user account's folder (as explained here). The Instances folder, however, seems problematic with regard to permissions (in contrast to the account root). It is located at D:\Users\[My name]\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances, even though when installing SQL Server Express with Advanced Services I specified the DATA folder in another location - where only the standard SQL Server Express mdf files are stored, not the LocalDB instances. For that reason, and also to keep my DB files in a location among my projects, I'd like to move the Instances folder to another place. Is this possible? I haven't even found a corresponding registry entry.

I'm using SQL Server Express 2012 and also SSMS of the same version (latest updates installed).

Any help would be appreciated.

How to enforce full integer (no floating point)?

Posted: 06 Apr 2013 05:51 PM PDT

Why is the following allowed

INSERT INTO c VALUES (123.2,'abcdefghijklmnopqrstuvwxyzabcdef', 'abcdefghijklmnop');  

when the table contains

CustomerID      NUMBER(3,0) -- That's the 123.2 entry  

In other words, a total of 3 digits with 0 digits after the decimal point?

If NUMBER is not the way to go, how would you enforce whole integers only (no fractional part)?
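For reference, Oracle rounds an inserted value to the column's declared scale, which is why 123.2 is accepted (it is stored as 123). A hedged way to reject fractional input outright is a CHECK constraint (the constraint name is mine):

ALTER TABLE c
    ADD CONSTRAINT customerid_whole_number
    CHECK (CustomerID = TRUNC(CustomerID));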

I need help with nested cursors in SQL Server

Posted: 06 Apr 2013 12:04 PM PDT

I need to use the Member, Member_Account and Class tables to calculate the monthly fee of any member. I've created a procedure and nested cursors for this purpose; please help if anyone understands my problem.

create procedure CalcFee(@Member_Id int)
as
declare
    @Class_Price decimal(18,2),
    @Total decimal(18,2),
    @Class_Id int;
begin
    set @Total = 0;  -- must be initialised; otherwise NULL + @Class_Price stays NULL

    set @Class_Id = (select Class_Id from Member_Account where Member_Id = @Member_Id);
    set @Class_Price = (select Class_Price from Class where Class_Id = @Class_Id);
    set @Total = @Total + @Class_Price;

    update Member set Monthly_Fee = @Total where Member_Id = @Member_Id;
end;
go

declare @Member_1 int, @Member_2 int;

declare firstcursor cursor for
    select Member_Id from Member;

open firstcursor;
fetch next from firstcursor into @Member_1;   -- FETCH takes no column list

while @@FETCH_STATUS = 0
begin
    -- restrict the inner cursor to the current member's account rows
    declare secondcursor cursor for
        select Member_Id from Member_Account where Member_Id = @Member_1;

    open secondcursor;
    fetch next from secondcursor into @Member_2;

    while @@FETCH_STATUS = 0
    begin
        exec CalcFee @Member_Id = @Member_2;
        fetch next from secondcursor into @Member_2;  -- was fetching firstcursor
    end;

    close secondcursor;
    deallocate secondcursor;

    fetch next from firstcursor into @Member_1;
end;

close firstcursor;
deallocate firstcursor;
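For reference, the per-member looping can usually be collapsed into one set-based statement; a hedged sketch under the same schema assumptions (Monthly_Fee as the sum of Class_Price over a member's accounts):

-- Updates every member in one pass; no cursors needed.
UPDATE M
SET M.Monthly_Fee = T.Total
FROM Member AS M
JOIN (
    SELECT MA.Member_Id, SUM(C.Class_Price) AS Total
    FROM Member_Account AS MA
    JOIN Class AS C ON C.Class_Id = MA.Class_Id
    GROUP BY MA.Member_Id
) AS T ON T.Member_Id = M.Member_Id;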

How to grant only functions while revoking tables?

Posted: 06 Apr 2013 11:04 AM PDT

I am a newbie to PostgreSQL and am configuring a new database.

I wrote several functions for indirect (read/write) access to the data, and I want to prohibit all direct access to the tables.

GRANT CONNECT ON DATABASE db1 TO role1;
GRANT USAGE ON SCHEMA schema1 TO role1;

REVOKE ALL ON table1 FROM role1;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA schema1 TO role1;

If I don't REVOKE the table privileges, direct table access is allowed. If I REVOKE the table privileges and GRANT the functions, users can call the functions but hit a table access restriction error:

psql:./test1.psql:6: ERROR:  permission denied for relation table1  

How can I let role1 access the data only through the functions, while disallowing all direct access to the tables? I think I missed something, but I can't figure out what.

P.S. I am using Postgres 9.2.x.
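For context, functions run with the caller's privileges by default; a common approach (a sketch, not necessarily the poster's eventual fix) is to mark the wrapper functions SECURITY DEFINER so they run with the owner's rights instead - the read_table1 function below is a hypothetical stand-in for the real ones:

CREATE OR REPLACE FUNCTION schema1.read_table1()
RETURNS SETOF schema1.table1
LANGUAGE sql
SECURITY DEFINER               -- executes with the owner's privileges
SET search_path = schema1      -- guard against search_path tricks
AS $$ SELECT * FROM schema1.table1; $$;

REVOKE ALL ON FUNCTION schema1.read_table1() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION schema1.read_table1() TO role1;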

Get Hierarchical Data in a Self-Referencing Table (Parents, Children) [closed]

Posted: 06 Apr 2013 10:09 AM PDT

In the following Fiddle,

sqlfiddle

I will pass an sno, and I need all the parents of that sno in one table, and all the children of that sno in another table.

Please refer to this question too, since both are related; they use the same table.

I need to display the top 10 levels in a table.
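For reference, the usual tool for this is a recursive CTE; a hedged sketch against an assumed schema tbl(sno, parent) (the fiddle itself is not reproduced here), capped at 10 levels:

DECLARE @sno INT = 1;  -- example input

WITH Ancestors AS (
    SELECT t.sno, t.parent, 1 AS lvl
    FROM tbl AS t
    WHERE t.sno = @sno
    UNION ALL
    SELECT p.sno, p.parent, a.lvl + 1
    FROM tbl AS p
    JOIN Ancestors AS a ON p.sno = a.parent  -- walk up to the parent row
    WHERE a.lvl < 10
)
SELECT sno, lvl FROM Ancestors;

The children query is the mirror image: join ON p.parent = a.sno to walk down instead of up.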

sql file system for users

Posted: 06 Apr 2013 11:35 AM PDT

I need to design a way to save the files of users' websites; each file contains 3 strings. For example:

user1:

-index.html
-about.html

user2:

-index.html
-menu.html
-help.html
-contact us.html
-about.html

How should I design the DB? All files in one table?

user1 | index.html | fileseg1 | fileseg2 | fileseg3
user1 | about.html | fileseg1 | fileseg2 | fileseg3
user2 | index.html | fileseg1 | fileseg2 | fileseg3
....

This way, if each user has approximately 10 files, 1 million users produce 10 million rows in this table.
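That one-table layout can work; a sketch of it as DDL (all names hypothetical), with a composite primary key so each lookup hits a single index - ten million narrow rows is unremarkable for an indexed table:

CREATE TABLE user_files (
    user_id  INT          NOT NULL,
    filename VARCHAR(255) NOT NULL,
    seg1     TEXT,
    seg2     TEXT,
    seg3     TEXT,
    PRIMARY KEY (user_id, filename)
);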

sql server full text search - fuzzy searching?

Posted: 06 Apr 2013 05:40 AM PDT

Is there a way to configure fuzzy searches in SQL Server full-text search? Meaning that if I search for the term POWDER, I should get matches (i.e. strings) containing any variation of it within an allowable distance. For example, the matches could be strings containing the following variations of the word:

PWDR (distance = 2)
PWDER (distance = 1)
WDER (distance = 2)

I remember reading somewhere that you can do this kind of thing, but it specifically requires you to specify the allowable edit distance. How or where do you do this?
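As far as I know, full-text search itself has no edit-distance option; the closest native tools are prefix terms and the SOUNDEX/DIFFERENCE functions. A hedged sketch (Products/ProductName are hypothetical):

-- Prefix term: matches POWDER, POWDERED, ... but not PWDR.
SELECT ProductName
FROM Products
WHERE CONTAINS(ProductName, '"POWD*"');

-- Phonetic similarity: DIFFERENCE returns 0-4, where 4 is most similar.
SELECT ProductName
FROM Products
WHERE DIFFERENCE(ProductName, 'POWDER') >= 3;

True edit-distance matching generally means a Levenshtein UDF or an external search engine.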

Moving MSDB database

Posted: 06 Apr 2013 09:53 AM PDT

I am trying to move the msdb database from its default location (C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA) to another drive (E:). After moving the database I get the error shown below. (The error appears while expanding the database node in SSMS, and now I am not able to access any database.)

Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476

------------------------------ ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)


Database 'msdb' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. (Microsoft SQL Server, Error: 945)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.2531&EvtSrc=MSSQLServer&EvtID=945&LinkId=20476


The procedure I followed to move the msdb database is as follows:

  1. For each file to be moved, run the following statements:

ALTER DATABASE msdb MODIFY FILE ( NAME = MSDBData , FILENAME = 'E:\MSDBData.mdf' )

ALTER DATABASE msdb MODIFY FILE ( NAME = MSDBLog , FILENAME = 'E:\MSDBLog.ldf' )

  2. Stop the instance of SQL Server to perform maintenance.
  3. Move the file or files to the new location.
  4. Restart the instance of SQL Server or the server.

I also confirmed the path by running the following query:

SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'msdb');

The output showed the current location of both MSDBData and MSDBLog as E:\.

This error happened while I was doing this on a testing server; I want to do it on a production server. So please help me with this error: how exactly should the msdb database be moved?

Login failed for user sa (SSIS 2008)

Posted: 06 Apr 2013 06:23 AM PDT

I have a problem with SSIS 2008.

I built a package that has a connection using SQL Server authentication (username: sa).

When I use BI it works fine, but when I build a job that uses this package, it doesn't work.

I tried several solutions, for example changing the package protection level or using an XML configuration file to save the password, but none of them worked. Can somebody help me?

Errors are:

Log in failed for user sa

or

Can not decrypt the password

Thanks.

How to deny access to see the logins in a database

Posted: 06 Apr 2013 07:17 AM PDT

I created a login in SQL Server 2008 R2, and when that user connects to a database, it can see all the logins under Security > Logins. Can I deny it access to view the other logins on the server?
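For reference, visibility of other logins' metadata is governed by server-level permissions; one hedged starting point (the login name is a placeholder) is:

USE master;
-- VIEW ANY DEFINITION exposes other principals' metadata; deny it explicitly.
DENY VIEW ANY DEFINITION TO [some_login];

By default a non-privileged login should only see its own login plus the system principals, so if it sees everything, it may have been granted extra permissions or added to a server role.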

SWITCHOFFSET() not working in SQL Server 2008 R2

Posted: 06 Apr 2013 06:24 AM PDT

We have been encountering a problem with the built-in function SWITCHOFFSET() in SQL Server 2008 R2. The error says:

The Timezone provided to built-in function switchoffset() is invalid.; 4200.

Our code works fine in SQL Server 2012, so I don't think the problem is in our syntax. Does anyone here know of a CU or SP for SQL Server 2008 R2 that fixes this issue? We actually have development machines where the code works fine on SQL Server 2008 R2, but those have updates installed; we have tried to replicate that machine's CUs and SPs, but without luck.
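For comparison, one thing worth ruling out is the format of the time-zone argument itself; a minimal valid call (the offset literal is just an example):

-- The second argument must be a datetimeoffset or a string in [+|-]hh:mm form.
SELECT SWITCHOFFSET(SYSDATETIMEOFFSET(), '+05:30');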

Thanks for the help.

Deployment Manager Vs DBA Cage Match On Pushing Releases To Prod

Posted: 06 Apr 2013 09:31 AM PDT

Our company has now hired a full-time release engineer for the MS Windows side, and we were thinking of a release management process for MS SQL Server.

If we have confidence in their ability to deploy SQL Server scripts, is it common to just have the release manager do it, or is it something that the DBAs typically do? Our enterprise is growing fast and our DBAs are somewhat overloaded (surprise, surprise, what else is new), but we can hire more; that's another matter.

How do you manage this? Do you allow a release manager access to prod to roll out changes? Do you take away rights and turn them on when they need to release? I'm thinking of giving them access to a sproc that grants prod access for an hour but logs who calls it.

Or am I totally off, and this is something a DBA should always manage?

Any thoughts would be appreciated!

Edit:

Update: Also, what happens when we encounter anomalies? For example, a dev stated that 'these tables should match this other environment' (by environment I mean a customer prod environment, not qa/stage/etc.). Usually they would do a spot check. I did a checksum and noticed issues, which ended up being just whitespace differences. In a case like this, do we push it back to the release manager/QA person to fix after doing basic troubleshooting?

Another example: we have scripts from about 20 developers, and sometimes they depend on each other. The ordering of the scripts was wrong. I can't keep up with 20 developers' work and also manage the data, but after some troubleshooting we discovered the issue and changed the order. Is this something the DBA should typically be deeply involved in, or is it fair, after basic testing and a look over, to send it back to the release manager and devs to fix?

Select query having count and variable

Posted: 06 Apr 2013 07:43 AM PDT

I have a table like the one below.

Main Table

I need to form a select query. I'm going to pass a RefID, like:

SELECT SNo
FROM Register
WHERE RefID = 1

And beside the SNo in the statement, I need another column containing the total COUNT of rows that use that SNo as their RefID. For example:

Required Output

The precise question, as @JNK specified:

Show me all the Sno values with a RefID of 1, and for each Sno show me also a count of how many times that Sno value is itself used as a RefID anywhere else in the table.
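A sketch of one way to express that, assuming the Register table shown in the screenshots:

SELECT r.SNo,
       (SELECT COUNT(*)
        FROM Register AS r2
        WHERE r2.RefID = r.SNo) AS RefCount
FROM Register AS r
WHERE r.RefID = 1;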

Defining constraints in `CREATE TABLE` statements

Posted: 06 Apr 2013 07:26 AM PDT

Recently I have been using the Database Abstraction Layer that ships with a Python web framework called web2py (click for their DAL syntax). It offers the option to define your constraints within the CREATE TABLE statement.

Whilst I was taking Stanford's "Introduction to Databases" MOOC, the SQL standard was mentioned as supporting arbitrary queries within CREATE TABLE statements as constraints (essentially replacing a major use-case for triggers).

What is best practice?

Below is a simple example of including constraints in CREATE TABLE statements rather than through ALTER TABLE and/or CREATE TRIGGER statements:

CREATE TABLE Place (
    address VARCHAR2(40),
    CONSTRAINT place_pk
        PRIMARY KEY (address)
);

CREATE TABLE Company (
    c_name VARCHAR2(40),
    CONSTRAINT company_pk
        PRIMARY KEY (c_name)
);

CREATE TABLE Employee (
    e_name VARCHAR2(40),
    tax_no NUMBER,
    salary NUMBER(19,4),
    sex CHAR,
    birthdate DATE,
    address VARCHAR2(40),
    CONSTRAINT employee_pk
        PRIMARY KEY (tax_no),
    CONSTRAINT address_fk
        FOREIGN KEY (address) REFERENCES Place(address),
    CHECK (address IS NOT NULL)
);

CREATE TABLE CompanyEmployee (
    employee_id NUMBER,
    company_id VARCHAR2(40),
    CONSTRAINT unique_employee_id
        UNIQUE (employee_id),
    CONSTRAINT employee_id_fk
        FOREIGN KEY (employee_id) REFERENCES Employee(tax_no),
    CONSTRAINT company_id_fk
        FOREIGN KEY (company_id) REFERENCES Company(c_name),
    CONSTRAINT company_employees_pk
        PRIMARY KEY (employee_id, company_id)
);

BTW: You'll note that I'm using CAPS for keywords, upper CamelCase for table names and lower under_score for attribute and trigger names. Is this good practice? - Feel free to critique my indentation and whitespace usage styles also :)

Overview of how MongoDB uses its various threads

Posted: 06 Apr 2013 12:48 PM PDT

On one instance I have MongoDB using ~85 threads. Lacking the time to investigate directly, I am curious:

  1. What are the threads used for? Do they fall into a few high-level usage categories?
  2. How can I control/influence the number of threads used? E.g. max connection params, etc.

Upgrade MySQL 5.1 to 5.5 on Ubuntu 11.10

Posted: 06 Apr 2013 07:00 PM PDT

I currently have MySQL Server 5.1 installed via apt-get on my production Ubuntu 11.10 server:

root@Ubuntu-1110-oneiric-64-minimal$ dpkg --get-selections | grep sql-server
mysql-server                    install
mysql-server-5.1                install
mysql-server-core-5.1           install

I would like to upgrade this to 5.6, but the mysql docs seem to suggest upgrading to 5.5 first, and from there to 5.6.

I've seen various lengthy guides describing how to upgrade from 5.1 to 5.5, but they all seem to describe upgrading by installing the tarball rather than using the package manager. Is there a simpler way to upgrade using the package manager, given that the current version was installed with apt-get?

Obviously I want my existing configuration and databases to be retained after the upgrade, and I will be sure to back up my databases using mysqldump before performing the upgrade.

MySQL information_schema doesn't update

Posted: 06 Apr 2013 08:01 PM PDT

I have a database, say abc, on a MySQL server. It has only one table, named test. test uses the InnoDB engine, and I've set innodb_file_per_table to true.

After I run the query delete from abc.test, I want to calculate the database size of abc. Here is the query I use:

SELECT
    table_schema "name",
    sum(IF(engine = "MyISAM", data_length + index_length - data_free,
           data_length + index_length)) "size"
FROM information_schema.TABLES
WHERE table_schema LIKE "abc";

The strange thing is that the database size doesn't decrease at all, although the data in test is gone.

I've done this kind of test many times; this strange behavior happens only sometimes.

I'm using Percona MySQL Server 5.5.29-rel29.4.

Can anybody tell me what is wrong?

Update:

Actually, I use another thread to check the database size periodically.
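One hedged observation (it may not be the whole story here): DELETE leaves the freed pages inside the .ibd file, so data_length does not shrink until the table is rebuilt:

-- Rebuilds the table; with innodb_file_per_table the .ibd file is recreated
-- at its real size (for InnoDB this maps to ALTER TABLE ... ENGINE=InnoDB).
OPTIMIZE TABLE abc.test;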

How to successfully run a batch file in a SQL Agent job?

Posted: 06 Apr 2013 05:00 PM PDT

I have a SQL Agent job which generates a specific report as a PDF file, copies the PDF to a network directory, and then deletes the PDF file from the source directory.

The SQL job consists of 2 steps:

  1. Generate the report.
  2. Copy the report to the network location.

For step 2 I made a bat file which handles the copying and removal of the PDF file.

The bat-file is as follows:

set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z

However, when I run the job, it hangs on the second step. The status just stays on "Executing".

This is the line I put in the step (the location of the bat file to execute):

cmd.exe /c "C:\Reports\rapport_uur_copy.bat"  

My job-settings are as follows:

Step 1

Type: Operating system (CmdExec)

On Success: Go to the next step

On Failure: Quit the job reporting failure

Step 2

Type: Operating system (CmdExec)

On Success: Quit the job reporting success

On Failure: Quit the job reporting failure

Some facts:

  • I have read/write permissions on the network directory
  • I run the steps as the Administrator account (the logged-in user, the default)
  • Step 1 succeeds
  • I run Windows Server 2008 R2 Standard
  • I have SQL Server 2008 R2
  • When I run the script manually from cmd (under the Administrator account), it works.

MySQL partitioned tables?

Posted: 06 Apr 2013 01:00 PM PDT

I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent of every other client_id. In other words, web queries will always contain a WHERE clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables.

After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically but in the short term the number of client_ids is relatively fixed.

I was thinking something along these lines:

CREATE TABLE foo (
    id INT NOT NULL,
    ... more column defs here...
    client_id INT NOT NULL,
    -- MySQL requires the partitioning column in every unique key (including
    -- the primary key), so the PK must be (id, client_id) rather than (id):
    PRIMARY KEY (id, client_id)
)
PARTITION BY KEY (client_id)
PARTITIONS 35;

My question: is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over simply indexing on client_id, but can I do better with some other form of partitioning (i.e. hash or range)?
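For comparison, since the set of client_ids is known and fairly fixed, LIST partitioning is the other natural candidate; a hedged sketch (the partition groupings are invented for illustration):

CREATE TABLE foo_list (
    id INT NOT NULL,
    client_id INT NOT NULL,
    PRIMARY KEY (id, client_id)
)
PARTITION BY LIST (client_id) (
    PARTITION p0 VALUES IN (1, 2, 3, 4, 5),
    PARTITION p1 VALUES IN (6, 7, 8, 9, 10)
);

With LIST (or RANGE), pruning on WHERE client_id = ... is explicit and each new client_id needs a partition added, whereas KEY/HASH spreads new values automatically.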

Using MySQL EXPLAIN to analyze the performance of a function

Posted: 06 Apr 2013 04:00 PM PDT

You can precede a SELECT statement in MySQL with EXPLAIN, but how would you explain a function? I have a function that does a lot of work and I would like to analyze the performance of the function as a whole.
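EXPLAIN works on individual statements, not on a stored function as a whole, so two hedged workarounds are to EXPLAIN the statements from the function body one by one, or to profile a call to it (my_function is a stand-in name):

-- Query profiling, available in MySQL/Percona 5.x.
SET profiling = 1;
SELECT my_function(42);
SHOW PROFILES;                 -- recent statements with durations
SHOW PROFILE FOR QUERY 1;      -- stage-by-stage breakdown of statement 1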

Limit memory used for MongoDB

Posted: 06 Apr 2013 09:00 AM PDT

Is there any way to limit RAM usage for MongoDB on Debian? I've been looking for a solution for about 8 hours, but with no results.

MySQL user defined rollback procedure

Posted: 06 Apr 2013 02:00 PM PDT

I'm attempting to write my own mini-rollback procedure. I have a table that tracks any updates or deletes on another table using a trigger. I am attempting to make it possible to restore one or more of these tracked changes through a procedure. However, I'm receiving a syntax error with the following:

-- UNDO_CHANGES_PROCEDURE - This copies the values of the work log track table
-- back into the relevant record in the work log table if an accidental edit
-- or deletion is made.
DROP PROCEDURE IF EXISTS UNDO_CHANGES_PROCEDURE;

DELIMITER $$

CREATE PROCEDURE UNDO_CHANGES_PROCEDURE(ID INT(6))
BEGIN
    DECLARE var_trig CHAR(8);

    SET var_trig = (SELECT TRIGGER_TYPE FROM WORK_LOG_TRACK
                    WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID);

    IF var_trig = 'Update' THEN
        -- an UPDATE cannot reference a second table unless it is joined in
        UPDATE WORK_LOG
        JOIN WORK_LOG_TRACK
            ON WORK_LOG.ENTRY_NUMBER = WORK_LOG_TRACK.ENTRY_NUMBER_FK
        SET WORK_LOG.EMPLOYEE_ID_FK = WORK_LOG_TRACK.EMPLOYEE_ID_FK,
            WORK_LOG.WORK_ORDER_NUMBER_FK = WORK_LOG_TRACK.WORK_ORDER_NUMBER_FK,
            WORK_LOG.ENTRY_TIME = WORK_LOG_TRACK.ENTRY_TIME,
            WORK_LOG.WORK_DONE = WORK_LOG_TRACK.WORK_DONE
        WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID;
    ELSE
        -- INSERT ... VALUES takes no WHERE clause; INSERT ... SELECT does
        INSERT INTO WORK_LOG (ENTRY_NUMBER, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK, ENTRY_TIME, WORK_DONE)
        SELECT ENTRY_NUMBER_FK, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK, ENTRY_TIME, WORK_DONE
        FROM WORK_LOG_TRACK
        WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID;
    END IF;
END;
$$

DELIMITER ;

The syntax error was in my UPDATE statement; any help or suggestions would be appreciated.

Same query taking 0.2 sec in Oracle database but 4 mins in SQL Server

Posted: 06 Apr 2013 07:00 AM PDT

SELECT A.Field1
FROM Table1 A
INNER JOIN Table1 B
    ON A.Field2 = B.Field2
    AND A.Field3 = B.Field3
    AND A.Field4 = B.Field4
    AND B.Field1 = 'string1'
    AND B.Field2 = 'string2'
    AND A.Field5 = (SELECT MAX(C.Field5)
                    FROM Table1 C
                    INNER JOIN Table2 D
                        ON (C.Field5 < B.Field5
                            AND B.Field6 = D.Field6
                            AND C.Field2 = B.Field2
                            AND C.Field3 = B.Field3
                            AND C.Field4 = B.Field4
                            AND D.Field7 = 'Y')
                        OR (C.Field5 < B.Field5
                            AND B.Field6 = D.Field6
                            AND C.Field2 = B.Field2
                            AND C.Field3 = B.Field3
                            AND C.Field4 = B.Field4
                            AND D.Field7 = 'N')
                   )

MSSQL ActionPlan

This query takes 0.2 seconds in an Oracle database but 4 minutes in a SQL Server database.

Both databases have the same indexes, the same record structure and the same column structure.

Can anyone tell me why?

Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries)

Posted: 06 Apr 2013 08:00 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers with 24 GB RAM and 350 GB drives. Our DB is about 150 GB and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to be working: when I try to submit more than 150 concurrent queries, I get the following error from the Python script, even though the pgpool log file doesn't show any errors:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?
