Sunday, June 2, 2013

[how to] Transforming natural language to SQL [closed]

Transforming natural language to SQL [closed]

Posted: 02 Jun 2013 03:33 PM PDT

So I have this example:

E  ≡ O (f.codf, COUNT(DISTINCT f.codp)) I (furnizeaza f) W (E1) G (f.codf)
E1 ≡ 2 = E2
E2 ≡ O (COUNT(s.cods)) I (sectii s) W ((s.cods, f.codf) τ furnizeaza f2[f2.cods, f2.codf])

SELECT f.codf, COUNT(DISTINCT f.codp) AS numar
FROM furnizeaza f
WHERE 2 = (SELECT COUNT(s.cods)
           FROM sectii s
           WHERE (s.cods, f.codf) IN (SELECT f2.cods, f2.codf
                                      FROM furnizeaza f2))
GROUP BY f.codf

It demonstrates a technique that helps create SQL queries from natural language. I tried finding more information on this subject, but all I found were libraries for automating the process. Can anyone direct me to a book or a tutorial that describes the exact step-by-step process, so I could do something like that manually?

Optimize multiple self-JOINs or redesign DB?

Posted: 02 Jun 2013 03:55 PM PDT

I'm looking for advice on either optimizing multiple self-joins, or a better table/DB design.

One of the tables looks as follows (relevant cols only):

CREATE TABLE IF NOT EXISTS CountryData (
    countryDataID INT PRIMARY KEY AUTO_INCREMENT,
    dataID INT NOT NULL REFERENCES DataSources (dataID),
    dataCode VARCHAR(30) NULL,
    countryID INT NOT NULL REFERENCES Countries (countryID),
    year INT NOT NULL,
    data DEC(20,4) NULL,
    INDEX countryDataYear (dataID, countryID, year)
);

The data column has values for a few hundred indicators, 90 countries, and 30 years, for about 1 million rows in total. A standard query selects N indicators for a particular year and C countries, yielding a C×N result of at most 90 rows.

With all values in a single column, self-joins seemed like the way to go. So I have experimented with various suggestions to speed those up, including indexing and creating new (temp) tables. At 9 self-joins, the query takes a little under 1 min. Beyond that, it spins forever.

The new table the self-joins run against has only about 1,000 rows and is indexed on what seem to be the relevant columns; creating it takes about 0.5 sec:

CREATE TABLE Growth
    SELECT dataID, countryID, year, data
    FROM CountryData
    WHERE dataID > 522 AND year = 2017;

CREATE INDEX growth_ix
    ON Growth (dataID, countryID);

The SELECT query then arranges up to XX indicators in the results table, with XX unfortunately <10:

SELECT Countries.countryName AS Country,
       em01.em,
       em02.em,
       em03.em,
       ...
       emXX.em
FROM
    (SELECT em1.data AS em, em1.countryID
     FROM Growth AS em1
     WHERE em1.dataID = 523) AS em01
    JOIN
    (SELECT em2.data AS em, em2.countryID
     FROM Growth AS em2
     WHERE em2.dataID = 524) AS em02
    USING (countryID)
    JOIN
    (SELECT em3.data AS em, em3.countryID
     FROM Growth AS em3
     WHERE em3.dataID = 525) AS em03
    USING (countryID)
    ...
    JOIN
    (SELECT emX.data AS em, emX.countryID
     FROM Growth AS emX
     WHERE emX.dataID = 527) AS emXX
    USING (countryID)
    JOIN Countries
    USING (countryID)

I'd actually like to retrieve a few more variables, plus potentially join other tables. Now I'm wondering whether there's a way to run this more efficiently, or whether I should take an altogether different approach, such as using wide tables with indicators in different columns to avoid self-joins.
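Since all values live in a single column, one alternative worth testing is conditional aggregation, which pivots N indicators into columns in a single pass over Growth instead of N self-joins. A sketch against the tables above (dataID values as in the question; not guaranteed faster, but it usually scales better than chained joins):

-- One scan of Growth; add one MAX(CASE ...) line per indicator.
SELECT Countries.countryName AS Country,
       MAX(CASE WHEN g.dataID = 523 THEN g.data END) AS em01,
       MAX(CASE WHEN g.dataID = 524 THEN g.data END) AS em02,
       MAX(CASE WHEN g.dataID = 525 THEN g.data END) AS em03
FROM Growth AS g
JOIN Countries USING (countryID)
GROUP BY Countries.countryName;

One semantic difference to keep in mind: the chained INNER JOINs drop any country missing even one indicator, while this form keeps such countries with NULLs in the missing columns.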

Is my table design wrong, or did I pick the wrong indexes when creating the tables?

Posted: 02 Jun 2013 05:08 PM PDT

I've built a web application as a tool to eliminate unnecessary data in a table of people; its main job is to filter the records of everyone who is eligible for election rights. At first it wasn't a problem while the main table still had few rows, but it is really bad now (6 seconds) with the table at about 200K rows, and it will get far worse, because the table will grow to about 6 million rows.

My table design is below, and I am joining with 4 region tables (province, city, district, and town). Each region table is related to the others by its own id:

CREATE TABLE `peoples` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `id_prov` smallint(2) NOT NULL,
  `id_city` smallint(2) NOT NULL,
  `id_district` smallint(2) NOT NULL,
  `id_town` smallint(4) NOT NULL,
  `tps` smallint(4) NOT NULL,
  `urut_xls` varchar(20) NOT NULL,
  `nik` varchar(20) NOT NULL,
  `name` varchar(60) NOT NULL,
  `place_of_birth` varchar(60) NOT NULL,
  `birth_date` varchar(30) NOT NULL,
  `age` tinyint(3) NOT NULL DEFAULT '0',
  `sex` varchar(20) NOT NULL,
  `marital_s` varchar(20) NOT NULL,
  `address` varchar(160) NOT NULL,
  `note` varchar(60) NOT NULL,
  `m_name` tinyint(1) NOT NULL DEFAULT '0',
  `m_birthdate` tinyint(1) NOT NULL DEFAULT '0',
  `format_birthdate` tinyint(1) NOT NULL DEFAULT '0',
  `m_sex` tinyint(1) NOT NULL DEFAULT '0',
  `m_m_status` tinyint(1) NOT NULL DEFAULT '0',
  `sex_double` tinyint(1) NOT NULL DEFAULT '0',
  `id_import` bigint(10) NOT NULL,
  `id_workspace` tinyint(4) unsigned NOT NULL DEFAULT '0',
  `stat_valid` smallint(1) NOT NULL DEFAULT '0',
  `add_manual` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `insert_by` varchar(12) NOT NULL,
  `update_by` varchar(12) DEFAULT NULL,
  `mark_as_duplicate` smallint(1) NOT NULL DEFAULT '0',
  `mark_as_trash` smallint(1) NOT NULL DEFAULT '0',
  `in_date_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `ind_import` (`id_import`),
  KEY `ind_duplicate` (`mark_as_duplicate`),
  KEY `id_workspace` (`id_workspace`),
  KEY `add_manual` (`add_manual`),
  KEY `il` (`stat_valid`,`mark_as_trash`,`in_date_time`),
  KEY `region` (`id_prov`,`id_city`,`id_district`,`id_town`,`tps`),
  KEY `name` (`name`),
  KEY `place_of_birth` (`place_of_birth`),
  KEY `ind_birth` (`birth_date`(10)),
  KEY `ind_sex` (`sex`(2))
) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

CREATE TABLE `test_prov` (
  `id` smallint(2) NOT NULL,
  `name_prov` varchar(60) NOT NULL,
  `head_manager` varchar(60) DEFAULT NULL,
  `handprint` blob,
  `ibu_kota` varchar(60) DEFAULT NULL,
  `ketua_kpu` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name_prov`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE `test_city` (
  `id` smallint(2) NOT NULL,
  `id_prov` smallint(2) NOT NULL,
  `name_city` varchar(60) NOT NULL,
  `head_manager` varchar(60) DEFAULT NULL,
  `ibu_kota` varchar(60) DEFAULT NULL,
  `ketua_kpu` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id_prov`,`id`),
  KEY `name_city` (`name_city`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE `test_district` (
  `id` smallint(2) NOT NULL,
  `id_city` smallint(2) NOT NULL,
  `id_prov` smallint(2) NOT NULL,
  `name_district` varchar(60) NOT NULL,
  `head_manager` varchar(60) DEFAULT NULL,
  `handprint` blob,
  `ppk_1` varchar(60) DEFAULT NULL,
  `ppk_2` varchar(60) DEFAULT NULL,
  `ppk_3` varchar(60) DEFAULT NULL,
  `ppk_4` varchar(60) DEFAULT NULL,
  `ppk_5` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id_prov`,`id_city`,`id`),
  KEY `name_district` (`name_district`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE `test_town` (
  `id` smallint(4) NOT NULL,
  `id_district` smallint(2) NOT NULL,
  `id_city` smallint(2) NOT NULL,
  `id_prov` smallint(2) NOT NULL,
  `name_town` varchar(60) NOT NULL,
  `handprint` blob,
  `pps_1` varchar(60) DEFAULT NULL,
  `pps_2` varchar(60) DEFAULT NULL,
  `pps_3` varchar(60) DEFAULT NULL,
  `tpscount` smallint(2) DEFAULT NULL,
  `pps_4` varchar(60) DEFAULT NULL,
  `pps_5` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id_prov`,`id_city`,`id_district`,`id`),
  KEY `name_town` (`name_town`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

and the query looks like:

SELECT `E`.`id`, `E`.`id_prov`, `E`.`id_city`, `E`.`id_district`, `E`.`id_town`,
       `B`.`name_prov`, `C`.`name_city`, `D`.`name_district`, `A`.`name_town`, `E`.`tps`,
       `E`.`urut_xls`, `E`.`nik`, `E`.`name`, `E`.`place_of_birth`, `E`.`birth_date`,
       `E`.`age`, `E`.`sex`, `E`.`marital_s`, `E`.`address`, `E`.`note`
FROM peoples E
JOIN test_prov B ON E.id_prov = B.id
JOIN test_city C ON E.id_city = C.id AND C.id_prov = B.id
JOIN test_district D ON E.id_district = D.id AND D.id_city = C.id AND D.id_prov = B.id
JOIN test_town A ON E.id_town = A.id AND A.id_district = D.id
    AND A.id_city = C.id AND A.id_prov = B.id
    AND E.stat_valid = 1 AND E.mark_as_trash = 0

mark_as_trash is a flag column containing only 1 or 0, recording whether the row has been marked as deleted; stat_valid holds the filtering result: if the value is 1, the person is eligible for election rights.

I've looked at the EXPLAIN output, but no column is used for an index lookup. I believe that's why the application is so slow at 200K rows. The query above shows only two conditions, but the application also has features to filter by name, place of birth, birth date, age ranges, and so on.

How can I make this perform better?
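Not an authoritative fix, but a first diagnostic worth running: put the row filters in a WHERE clause (equivalent for INNER JOINs) and check which key, if any, MySQL actually picks. Low-cardinality flags like stat_valid and mark_as_trash often make the optimizer skip the il index entirely. A sketch:

-- Check the "key" and "rows" columns of the output; if key is NULL for
-- peoples, the il index (stat_valid, mark_as_trash, in_date_time) is unused.
EXPLAIN
SELECT E.id, E.name
FROM peoples E
JOIN test_prov B ON E.id_prov = B.id
WHERE E.stat_valid = 1
  AND E.mark_as_trash = 0;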

Looking for SQL Training Options? [closed]

Posted: 02 Jun 2013 10:43 AM PDT

I am interested in getting an entry-level job working with SQL, and I'm wondering if anyone has recommendations for training programs. I am looking for something that would provide a certificate that employers would value. I'm in the age-old position of no job without experience, no experience without a job. I've done some online training, but I am interested in something a little more in-depth that would give me some kind of credentials, short of going through a 2- or 4-year program.

HOW to "SUM" and "JOIN" for single query of a poll results?

Posted: 02 Jun 2013 10:54 AM PDT

I have a poll database as

CREATE TABLE poll_answers (
  answer_id int(11),
  answer varchar(255),
  poll_id int(11),
  FOREIGN KEY (poll_id) REFERENCES polls(poll_id) ON DELETE CASCADE,
  PRIMARY KEY (answer_id)
) ENGINE=InnoDB;

CREATE TABLE poll_results (
  vote_id int(11),
  user_id int(11),
  answer_id int(11),
  poll_id int(11),
  FOREIGN KEY (answer_id) REFERENCES poll_answers(answer_id) ON DELETE CASCADE,
  FOREIGN KEY (poll_id) REFERENCES polls(poll_id) ON DELETE CASCADE,
  PRIMARY KEY (vote_id)
) ENGINE=InnoDB;

How can I run a query that sums the votes for each answer, to get results for the list of answers in the poll?

SELECT ... FROM poll_results ... WHERE poll_results.poll_id = 'x'

answer_id  |  answer  |  SUM of votes  |  % of votes
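A sketch of the kind of aggregate that produces that result, assuming the schema above (the poll_id value is a placeholder); the LEFT JOIN keeps answers with zero votes:

-- Votes per answer plus share of the poll's total (NULLIF avoids division by zero).
SELECT a.answer_id,
       a.answer,
       COUNT(r.vote_id) AS votes,
       100.0 * COUNT(r.vote_id)
             / NULLIF((SELECT COUNT(*)
                       FROM poll_results
                       WHERE poll_id = 1), 0) AS pct_of_votes
FROM poll_answers a
LEFT JOIN poll_results r ON r.answer_id = a.answer_id
WHERE a.poll_id = 1
GROUP BY a.answer_id, a.answer;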

Linked server available to users without permissions

Posted: 02 Jun 2013 02:08 PM PDT

I have users seeing a linked server which they are not supposed to see.

The linked server is defined so that only I should have access to it, yet everybody can see and use it.

I have created the new linked server using the following steps:

  1. Connect SSMS Object Explorer to the SQL Server instance
  2. Expand Server Objects, right-click Linked Servers and click New Linked Server...
  3. Under the General tab, choose SQL Server as the Server type and enter the server's name
  4. Under the Security tab, in the upper part, click Add, choose "sa" as the Local Login, and enter the Remote User's name and Remote Password
  5. Under the Security tab, in the lower part (under "For a login not defined in the list above, connections will:"), choose the first option: Not be made
  6. Click OK and start testing

Now the only person who is supposed to see the linked server is me ("sa"), but somehow other users can see it and use it.

Note 1: The users who can use the linked server do have permissions on the remote server, so they are not seeing data they should not see; they are just able to reach it through the linked server when they should not be able to.

Note 2: I am the only sysadmin on both instances.
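One quick check (a diagnostic sketch; the linked server name is a placeholder): list the login mappings that were actually saved. A row with a NULL local login applies to all logins, which would match the symptom.

-- Shows local-to-remote login mappings for the named linked server.
EXEC sp_helplinkedsrvlogin 'MyLinkedServer';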

unloading data from oracle dbf files [duplicate]

Posted: 02 Jun 2013 10:47 AM PDT

This question is an exact duplicate of:

I have a list of Oracle DBF files and I need to restore the information inside them. There are no control files or anything else, only the DBF files: http://s9.postimg.org/mxmx0g73j/Untitled.jpg

Is there any parsing or recovery tool that can open DBF files and unload the information in them?

Restore oracle database from DBF files [closed]

Posted: 02 Jun 2013 05:40 AM PDT

I have a list of Oracle DBF files and I need to rebuild a database from them, but I am a bit of a newbie at Oracle DB administration:

http://s9.postimg.org/mxmx0g73j/Untitled.jpg

postgres service suddenly not working after the computer crashed while working

Posted: 02 Jun 2013 09:37 AM PDT

I've been working with postgres on a Linux server for the last couple of months. Last week half of our servers crashed, mine among them. I was in the middle of uploading data to the postgres database with osm2pgsql when this happened. After we revived the server, postgres stopped working.

commands:

service postgresql start (Alternatively, /etc/rc.d/init.d/postgresql start, same thing happens)

Starting postgresql service: [ OK ]

service postgresql status

postmaster dead but pid file exists

Trying to start the postmaster using

su postgres

postmaster -D /var/lib/pgsql/data

yields nothing either.


As for the pid file, /var/run/postmaster.5432.pid exists, and when I remove it then the status changes to "postmaster is stopped", but if I try to start it again:

service postgresql start

Starting postgresql service: [FAILED]

If anyone has any idea what I could do to get it working, or where the problem might be, I would be overjoyed to have an answer. What really gets me is that it was working until last week; I even rebooted from a backup and nothing seems to have changed. I'm really at my wit's end with this problem!

Unable to create a new listener in oracle 11g

Posted: 02 Jun 2013 12:44 PM PDT

In Net Manager I get the error message "A valid logfile is required" when I click on Listeners and choose the '+' button, and then it keeps giving me that error when I click any other tab or button in Net Manager.

Format function returning minutes instead of months?

Posted: 02 Jun 2013 05:56 AM PDT

While exploring the new FORMAT function I have come across the following issue. Has anyone else faced a similar problem and found a fix, or is this a bug?

SELECT FORMAT(GETDATE(), 'mm/dd/yyyy', 'en-US');
SELECT GETDATE();

Output:

  1. 02/02/2013
  2. 2013-06-02 12:02:54.067
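This is by design rather than a bug: FORMAT uses .NET-style format strings, which are case-sensitive, so lowercase mm means minutes while uppercase MM means months. The 12:02 in the timestamp explains the 02 in the month position. A corrected call:

-- 'MM' (uppercase) is months; 'mm' (lowercase) is minutes.
SELECT FORMAT(GETDATE(), 'MM/dd/yyyy', 'en-US');  -- 06/02/2013 for the date above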

How to import a text file with '|' delimited data to PostgreSQL database?

Posted: 02 Jun 2013 06:19 PM PDT

I have a text file with |-delimited data that I want to import into a table in a PostgreSQL database. pgAdmin III only exports CSV files. I converted the file to CSV but was still unsuccessful importing the data into the PostgreSQL database. It reports an error:

Extra data after last expected column.
CONTEXT: COPY <file1>, line 1:

What am I doing wrong here?
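For what it's worth, COPY can read the pipe-delimited file directly, so no CSV conversion is needed; a sketch with a hypothetical table and path (option syntax for PostgreSQL 9.0+):

-- Server-side load; the file must be readable by the PostgreSQL server process.
COPY my_table FROM '/path/to/data.txt' WITH (FORMAT text, DELIMITER '|');

-- Client-side alternative from psql, reading a file local to the client:
-- \copy my_table FROM 'data.txt' WITH (FORMAT text, DELIMITER '|')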

SSIS - How do I execute a single OLE DB Command for multiple paths?

Posted: 02 Jun 2013 02:54 AM PDT

I have an SSIS package (2008 R2) that I'm working on, and I find myself duplicating the same OLE DB Commands repeatedly; whenever there is a change, it's a nightmare to go back through and update all the duplicated OLE DB Command tasks.

Is there a way to execute a single OLE DB command for multiple paths?


Does a SQL Server job run on the server or the local machine?

Posted: 02 Jun 2013 09:38 AM PDT

Does a SQL Server job always run on the server, or will it run in the context of the local machine, similar to a DTS package run from Enterprise Manager on the user's machine?

The job in question calls a DTS package. The package succeeds when run locally; the job fails whether run manually or scheduled.

SQL server 2012 upgrade causing reindexing to take a long time / fill up log file

Posted: 02 Jun 2013 12:55 AM PDT

I have a customer who upgraded their SQL Server from SQL Server 2005 to SQL Server 2012. According to them, no other changes were made. They have an ERP database which does a full reindex once a week (the SQL just locates every index in the DB and rebuilds it), and it completed normally in 3 hours before the upgrade.

The DB is 178 GB. After the upgrade, performance suddenly tanked. They started a reindex; it ran for several hours, and before it was halfway finished the log file had grown to nearly 500 GB. They ran out of disk space, so they dumped the log and did the reindex in pieces.

Would an upgrade to SQL Server 2012 change how indexes need to be rebuilt? Did something about the upgrade change the indexes themselves? We are going to monitor this on the next go-around and see whether it drops back to 3 hours or again takes a huge amount of time and space. Any imaginative thoughts would be appreciated as well, since I'm getting a lot of this information second-hand from my customer :)

How to see what is cached in memory in SQL server 2008?

Posted: 02 Jun 2013 11:10 AM PDT

Is there a way to find out what is cached in SQL Server 2008 R2? I found the following nice article: http://blog.sqlauthority.com/2010/06/17/sql-server-data-pages-in-buffer-pool-data-stored-in-memory-cache . However, I would like to know how much data (e.g., in percentage and KB) of each table and index is stored in the cache. Is there a simple way to obtain such data?
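A common approach is to aggregate sys.dm_os_buffer_descriptors per object; a sketch along those lines (covers in-row data; LOB allocation units would need a slightly different container_id join):

-- Pages currently in the buffer pool per table/index in the current database.
SELECT o.name AS table_name,
       i.name AS index_name,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 AS cached_kb   -- each page is 8 KB
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p ON au.container_id = p.hobt_id
JOIN sys.objects AS o ON p.object_id = o.object_id
JOIN sys.indexes AS i ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE bd.database_id = DB_ID()
  AND o.is_ms_shipped = 0
GROUP BY o.name, i.name
ORDER BY cached_pages DESC;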

Re-enable Windows Authentication in SQL Server

Posted: 02 Jun 2013 05:38 PM PDT

A former employee disabled Windows Authentication on our server. Now I'm not able to access the SQL Server even though I have Administrator access to the machine. I need to reset the sa password.

I tried logging in using single-user mode as Remus described, but I get the following error:

Login failed for user 'SERVER\Administrator'.
Reason: The account is disabled.
(Microsoft SQL Server, Error: 18470)
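For reference, once you do get a connection (e.g., single-user mode via the -m startup flag with any login that is still enabled), the repair itself is short; a sketch, with the Windows login name taken from the error message:

-- Re-enable the disabled logins, then reset sa's password.
ALTER LOGIN [SERVER\Administrator] ENABLE;
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'UseAStrongPasswordHere1!';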

Configuring PostgreSQL for read performance

Posted: 02 Jun 2013 12:52 PM PDT

Our system writes a lot of data (it is a kind of Big Data system). Write performance is good enough for our needs, but read performance is really too slow.

The primary key (constraint) structure is similar for all our tables: timestamp(Timestamp) ; index(smallint) ; key(integer)

A table can have millions of rows, even billions, and a read request is usually for a specific period (timestamp/index) and tag. It's common for a query to return around 200k rows. Currently we can read about 15k rows per second, but we need to be 10 times faster. Is this possible, and if so, how?

Note: PostgreSQL is packaged with our software, so the hardware is different from one client to another.

[Edit] Details added below. Performance was better for this test because I don't have access to the real setup right now; I will update as soon as I can access it.

[Edit2] Applied "dezso" suggestions, see configuration changes below and the specs of the server used for testing. Yes it's a VM used for testing, the VMs host is a Server 2008 R2 x64 with 24.0 GB of ram.

Server Spec (Virtual Machine VMWare)

Server 2008 R2 x64
2.00 GB of memory
Intel Xeon W3520 @ 2.67GHz (2 cores)

postgresql.conf optimisations

shared_buffers = 512MB                 (default: 32MB)
effective_cache_size = 1024MB          (default: 128MB)
checkpoint_segments = 32               (default: 3)
checkpoint_completion_target = 0.9     (default: 0.5)
default_statistics_target = 1000       (default: 100)
work_mem = 100MB                       (default: 1MB)
maintenance_work_mem = 256MB           (default: 16MB)

Table Definition

CREATE TABLE "AnalogTransition"  (    "KeyTag" integer NOT NULL,    "Timestamp" timestamp with time zone NOT NULL,    "TimestampQuality" smallint,    "TimestampIndex" smallint NOT NULL,    "Value" numeric,    "Quality" boolean,    "QualityFlags" smallint,    "UpdateTimestamp" timestamp without time zone, -- (UTC)    CONSTRAINT "PK_AnalogTransition" PRIMARY KEY ("Timestamp" , "TimestampIndex" , "KeyTag" ),    CONSTRAINT "FK_AnalogTransition_Tag" FOREIGN KEY ("KeyTag")        REFERENCES "Tag" ("Key") MATCH SIMPLE        ON UPDATE NO ACTION ON DELETE NO ACTION  )  WITH (    OIDS=FALSE,    autovacuum_enabled=true  );  

Query

The query takes about 30 seconds to execute in pgAdmin3, but we would like the same result in under 5 seconds if possible.

SELECT "AnalogTransition"."KeyTag",
       "AnalogTransition"."Timestamp" AT TIME ZONE 'UTC',
       "AnalogTransition"."TimestampQuality",
       "AnalogTransition"."TimestampIndex",
       "AnalogTransition"."Value",
       "AnalogTransition"."Quality",
       "AnalogTransition"."QualityFlags",
       "AnalogTransition"."UpdateTimestamp"
FROM "AnalogTransition"
WHERE "AnalogTransition"."Timestamp" >= '2013-05-16 00:00:00.000'
  AND "AnalogTransition"."Timestamp" <= '2013-05-17 00:00:00.00'
  AND ("AnalogTransition"."KeyTag" = 56 OR "AnalogTransition"."KeyTag" = 57
       OR "AnalogTransition"."KeyTag" = 58 OR "AnalogTransition"."KeyTag" = 59
       OR "AnalogTransition"."KeyTag" = 60)
ORDER BY "AnalogTransition"."Timestamp" DESC, "AnalogTransition"."TimestampIndex" DESC
LIMIT 500000;

Explain (Edit2: Updated)

"Limit  (cost=0.00..125668.31 rows=500000 width=33) (actual time=2.193..3241.319 rows=500000 loops=1)"  "  Buffers: shared hit=190147"  "  ->  Index Scan Backward using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..389244.53 rows=1548698 width=33) (actual time=2.187..1893.283 rows=500000 loops=1)"  "        Index Cond: (("Timestamp" >= '2013-05-16 01:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-16 15:00:00-04'::timestamp with time zone))"  "        Filter: (("KeyTag" = 56) OR ("KeyTag" = 57) OR ("KeyTag" = 58) OR ("KeyTag" = 59) OR ("KeyTag" = 60))"  "        Buffers: shared hit=190147"  "Total runtime: 3863.028 ms"  

In my latest test, it took 7 minutes to select my data! See below.

Explain (Edit3)

"Limit  (cost=0.00..313554.08 rows=250001 width=35) (actual time=0.040..410721.033 rows=250001 loops=1)"  "  ->  Index Scan using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..971400.46 rows=774511 width=35) (actual time=0.037..410088.960 rows=250001 loops=1)"  "        Index Cond: (("Timestamp" >= '2013-05-22 20:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-24 20:00:00-04'::timestamp with time zone) AND ("KeyTag" = 16))"  "Total runtime: 411044.175 ms"  

Thanks a lot for the help!
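One structural change often suggested for this access pattern (a handful of tags plus a time range) is an index led by KeyTag, so each tag's rows sit adjacent in the index instead of being interleaved by time, as they are in the primary key ("Timestamp", "TimestampIndex", "KeyTag"). A sketch, not a guaranteed fix:

-- Hypothetical secondary index matching the query's predicates and sort order.
CREATE INDEX "AnalogTransition_tag_time_idx"
    ON "AnalogTransition" ("KeyTag", "Timestamp" DESC, "TimestampIndex" DESC);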

Mysql Connection Pooling Similar To PostgreSQL's PgBouncer

Posted: 02 Jun 2013 04:32 PM PDT

I'm looking for a pooling solution for MySQL that is independent of the language. I am using PgBouncer as an example because it:

  1. Resides on the server the database is on
  2. Just change the connection string to use the pooling connector
  3. No modification to the code required

Is there something similar in MySQL, where I can use pooling by installing a 3rd-party connector and just connecting through that port?

What are the different ways to keep track of active and archived data?

Posted: 02 Jun 2013 07:23 PM PDT

I'm looking for different ways to keep track of both active and archived data so I can weigh their pros and cons.

The system: I have a computer with a database on it. The database has several tables; one contains a list of users that can use the computer, and several are for auditing (user 1 did this, user 2 did that, etc.). This database is a slave of a master database on which a Content Management System is used to, say, add a new user and see reports on which user did what.

Example: As stated above, I have a table (let's call it users) that keeps track of all the users allowed to use the computer. As time goes by, users will be added and removed. The problem is that the audit tables track a user ID, so if the user is removed I lose the user information, because the rows can no longer be joined. One idea I had was to use MySQL triggers, so that when a user is added, an insert trigger fires and copies the data to an 'archived' user table (let's call it users_archive). That way the computer can use users to determine whether a user has permission, and reports can use users_archive.

This seems like the easiest and simplest way to do it, but I can't find anything via a Google search on whether there are other ways to do something like this.
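For reference, a minimal sketch of that trigger idea, with assumed column names on both tables:

-- Copy every new user into the archive table as soon as it is inserted.
DELIMITER //
CREATE TRIGGER users_after_insert
AFTER INSERT ON users
FOR EACH ROW
BEGIN
    INSERT INTO users_archive (user_id, name)
    VALUES (NEW.id, NEW.name);
END//
DELIMITER ;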

Database design - do I need another table?

Posted: 02 Jun 2013 08:23 PM PDT

I am trying to make a database that follows a form the company uses. When a client walks in, the members of staff fill in the form and the data is recorded. The form on paper is 10 pages long. The first time a client is seen the entire form is filled in, and the client gets a clientID.

I have split the form into sections that make sense, such as accommodation and employment. I know I can link these tables together with the client's ID. That's the simple part.

Now, when a client returns, the form comes out again, but this time only certain parts are filled in, whatever the client's needs are. The records for most parts don't need updating, but a new record needs inserting. What would be the best way around this?

So at the moment I have, for example, a table called client with an id and a name, another table called accommodation with clientid and address, and another table employment with clientid and employer.

But how do I go about it when a client comes in to let us know he has a new employer? I can't update the current record, as that history is needed, but I would need to add a new record for the client.

Would this mean I would have to add a lookup table for each of my current tables?
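One common approach (a sketch with assumed names): give each section table its own surrogate key plus a recorded date, so every visit appends a row, history is kept, and "current" is simply the latest row per client, with no lookup tables needed:

CREATE TABLE employment (
    employment_id INT PRIMARY KEY AUTO_INCREMENT,
    clientid      INT NOT NULL,          -- references client(id)
    employer      VARCHAR(100) NOT NULL,
    recorded_on   DATE NOT NULL          -- date this form section was filled in
);

-- Current employer = the most recent record per client:
SELECT e.*
FROM employment e
WHERE e.recorded_on = (SELECT MAX(recorded_on)
                       FROM employment
                       WHERE clientid = e.clientid);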

event scheduler not called

Posted: 02 Jun 2013 02:17 PM PDT

I created an event which looks like this:

mysql> SHOW CREATE EVENT event1\G
*************************** 1. row ***************************
               Event: event1
            sql_mode:
           time_zone: SYSTEM
        Create Event: CREATE DEFINER=`root`@`localhost` EVENT `event1`
                      ON SCHEDULE EVERY 1 MONTH STARTS '2013-02-02 00:00:00'
                      ON COMPLETION NOT PRESERVE ENABLE
                      DO BEGIN
                        update samp set col1 = col1 + 1;
                      END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)

This event did not fire on the 1st of the month, so I ran show processlist\g to find out whether the scheduler is running; it showed the following output:

mysql> SHOW PROCESSLIST\g
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| Id  | User            | Host      | db   | Command | Time | State                       | Info |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| 136 | event_scheduler | localhost | NULL | Daemon  | 1855 | Waiting for next activation | NULL |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+

Does NULL in the db column mean that no DB is assigned to it?

Please help me solve this.

How can I improve my table design for different types of an entity?

Posted: 02 Jun 2013 06:23 PM PDT

Consider an accounting system as an example. I have an entity called Client. Clients can be of different types, with different fields applicable to each type. I am considering creating separate tables for the different types of Client, each with the fields applicable to its type, plus one master table referencing all of them and holding the fields applicable to all types.

Currently, I come up with the following design:

[schema diagram]

But I don't think my design is efficient enough (or even correct and free of errors). What would you suggest? Also, if it matters, I am planning to use MariaDB.
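For reference, the pattern described (a shared master table plus per-type detail tables) is usually called class-table inheritance; a minimal sketch with assumed names and types:

-- Fields common to all clients live in the master table.
CREATE TABLE client (
    client_id   INT PRIMARY KEY AUTO_INCREMENT,
    client_type ENUM('person','company') NOT NULL,
    name        VARCHAR(100) NOT NULL
);

-- One detail table per type, sharing the master's key.
CREATE TABLE client_person (
    client_id  INT PRIMARY KEY,
    birth_date DATE,
    FOREIGN KEY (client_id) REFERENCES client (client_id)
);

CREATE TABLE client_company (
    client_id  INT PRIMARY KEY,
    tax_number VARCHAR(30),
    FOREIGN KEY (client_id) REFERENCES client (client_id)
);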

FETCH API_CURSOR causing open transaction in tempDB

Posted: 02 Jun 2013 04:00 PM PDT

A SELECT statement run from Oracle 11gR1 against SQL Server 2005 using the Gateway leaves an open transaction in tempdb. The user is a datareader in database XYZ. I'm not sure why the open transaction is in tempdb when the statement run is a SELECT.

Has anyone seen this issue before?

Thanks in advance, Sekhar

Ubuntu 12, PostgreSQL 9.2, PostGIS 2.0

Posted: 02 Jun 2013 03:17 AM PDT

At the current moment, March 4th 2013, can PostGIS 2.0 be installed with PostgreSQL 9.2?

I checked their website, and to my understanding it is not possible...

I hope that's not the case. Can anyone point me to instructions on how to install PostGIS 2.0 with PostgreSQL 9.2 on Ubuntu?

Cannot find MySQL Server Config Wizard for MySQL 5.6 build

Posted: 02 Jun 2013 01:17 PM PDT

Not sure if this is the right Stack Exchange site for this but...

I am trying to reconfigure my MySQL instance but cannot find the config wizard. I looked here: http://dev.mysql.com/doc/refman/5.0/en/mysql-config-wizard-starting.html

As one user pointed out, the config .exe file is not included in versions above 5.5. Does anyone know how to reconfigure the MySQL server instance?

Viewing MySQL Account Resource Limits

Posted: 02 Jun 2013 02:16 AM PDT

Is there any way of viewing an account's remaining allocated resources? I set up an account that's allowed 7,200 queries an hour. At any point, could I run a query to find out how many queries it has remaining?

MySQL must be storing this information somewhere, as FLUSH USER_RESOURCES; will reset the counters. However, I tried a few variants such as SHOW USER_RESOURCES and they don't seem to display anything. I've also hunted around the information_schema and mysql tables.

Is it just not possible to retrieve that information?
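For what it's worth, the configured limits (though, as far as I know, not the live remaining counts) are visible in the grant tables; a sketch:

-- Per-hour quotas assigned to each account; 0 means "no limit".
SELECT User, Host, max_questions, max_updates, max_connections, max_user_connections
FROM mysql.user;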
