Tuesday, May 28, 2013

[how to] Repairing Broken Binary Replication in PostgreSQL 9.0

Repairing Broken Binary Replication in PostgreSQL 9.0

Posted: 28 May 2013 08:25 PM PDT

I have a binary replica that was stopped for so long that the WAL segments it needed were removed from the master, so it ended up too far behind to catch up. I'm trying to reestablish replication, and the best way I've found so far is to follow the steps on the PostgreSQL wiki:

  • Issue "select pg_start_backup('clone',true);" on master
  • rsync everything except for pg_xlog from master to slave
  • Issue "select pg_stop_backup();" on master
  • rsync pg_xlog

But the database is large (300 GB), my connection is not exactly amazing (around 800 kB/s), and the files in base/ keep changing during the copy. So I was wondering if there's a more efficient way to do this.

Two nullable columns one required to have value

Posted: 28 May 2013 07:20 PM PDT

No-Explanation Question:

Is there any way to constrain 2 nullable columns so that at least 1 of them always has a value? For example, two date columns, both nullable, but with at least 1 required to be non-null.

Problem Description:

Let's say I have table called Expense

and have 2 dates :

prevision_expense_expiration_date DATE NULLABLE
expense_payment_date              DATE NULLABLE

the logic of those 2 columns is the following:

I make a purchase of something and I know I have to pay for it at some date, like a phone bill. I will enter this as an expense with an expense_payment_date. This is the date I am supposed to pay by, not the actual date of the payment; think of it as the expiration date of the invoice.

In another situation I sell a gift card for some provider's service. The expense of buying the service from my provider materializes only if the client redeems the card. Since the gift card has an expiration date, I want to record a provision for that 'expense' without entering it as a confirmed expense while the gift card is valid; if the gift card expires, that 'expense' should never enter the accounting system.

I know I could have 2 nearly identical tables called prevision_expense and confirmed_expense, but that doesn't sound right, so instead I have 2 nullable dates in the same table, and I want a constraint (or something) so that one of them is always required.
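A minimal sketch of such a constraint (standard SQL CHECK syntax; the table and column names are taken from the question):

ALTER TABLE Expense
    ADD CONSTRAINT CK_Expense_OneDateRequired
    CHECK (prevision_expense_expiration_date IS NOT NULL
           OR expense_payment_date IS NOT NULL);

(Note: some engines, notably older MySQL, parse but do not enforce CHECK constraints.)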

There's another possible strategy:

payment_date      DATE NOT NULL
is_prevision_date BOOL NOT NULL

So, in this case, if the date is a prevision the bool value would be 1, otherwise it would be 0. No null values, all is good, except that I want the option to store BOTH values when I first have a prevision date and THEN (let's say two days later) a confirmed date for that expense; with strategy 2 I won't have that option.

Am I doing everything wrong in the database design? :D

Executing local scripts against remote server?

Posted: 28 May 2013 05:30 PM PDT

I am in a situation where I would like to execute a number of sql scripts that are on my local machine against a remote MySQL instance. I found this question which seems to point me in the right direction. After reading this I tried the following:

$ ssh -L 12341:127.0.0.1:3306 root@myServer &
$ mysql -h 127.0.0.1 -p 12341

But it's been throwing an "Access denied..." error for 'abe'@'localhost'. What I would like to do is connect as root from the destination server. So instead of running as abe@localhost I'd like to run as root@myServer. I tried:

$ mysql -h 127.0.0.1 -p 12341 -u root@myServer  

but this only returns the same error for user 'root@myserver.com'@'local'. I also tried -u root and got access denied again.

Is there any way for me to specify that I want to connect as root on the destination server in my above example?

How to quickly repair a 1TB mongo database in production

Posted: 28 May 2013 08:50 PM PDT

Help me: the production server goes down every 15 minutes, and I don't know how to fix it without locking the database.

Tue May 28 14:26:17.803 [conn7] Assertion: 10334:BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
0xdcf361 0xd90a1b 0xd90f5c 0x6eaf99 0x81119c 0xa7bbdf 0xa7f48a 0xa78c81 0xa7e03e 0xa8114e 0x9f44d4 0x9f57e2 0x6e747a 0xdbbb7e 0x7f77078bee9a 0x7f7706bceccd
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdcf361]
/usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x9b) [0xd90a1b]
/usr/bin/mongod() [0xd90f5c]
/usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x5a9) [0x6eaf99]
/usr/bin/mongod(_ZN5mongo11BtreeCursor7currentEv+0x5c) [0x81119c]
/usr/bin/mongod(_ZNK5mongo21ResponseBuildStrategy7currentEbPNS_13ResultDetailsE+0x11f) [0xa7bbdf]
/usr/bin/mongod(_ZN5mongo20OrderedBuildStrategy11handleMatchEPNS_13ResultDetailsE+0xaa) [0xa7f48a]
/usr/bin/mongod(_ZN5mongo20QueryResponseBuilder8addMatchEv+0xb1) [0xa78c81]
/usr/bin/mongod(_ZN5mongo23queryWithQueryOptimizerEiRKSsRKNS_7BSONObjERNS_5CurOpES4_S4_RKN5boost10shared_ptrINS_11ParsedQueryEEES4_RKNS_12ChunkVersionERNS7_10scoped_ptrINS_25PageFaultRetryableSectionEEER$
/usr/bin/mongod(_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x1b3e) [0xa8114e]
/usr/bin/mongod() [0x9f44d4]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x392) [0x9f57e2]
/usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x9a) [0x6e747a]
/usr/bin/mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x42e) [0xdbbb7e]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f77078bee9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f7706bceccd]
Tue May 28 14:26:17.807 [conn7] assertion 10334 BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO ns:cache.topics query:{ slug: "automobiles/autos-for-s$
Tue May 28 14:26:17.807 [conn7]  ntoskip:0 ntoreturn:-1

What's the best database for analysis/research applications, i.e. involving complex queries but few transactions?

Posted: 28 May 2013 03:42 PM PDT

I'd like to start a discussion on what people think is the best database environment (MySQL, MS SQL, PostgreSQL, ...) for data analysis and research applications, such as those encountered in empirical academic work or business analysis. Such applications are the opposite of web applications: they involve very few concurrent transactions (perhaps only 1 or 2 people querying the data at the same time) but complex queries that can (INNER) JOIN 20-50 tables, including derived tables.

It can be assumed that the queries are optimized in the sense that tables have the right primary keys.

Still, from my experience with MySQL 5.5 and 5.6 running on an Intel i7 quad-core machine with 16 GB of RAM and the data stored on an Intel SSD, the most complex queries can take from several minutes up to 20 or 30 minutes. My .ini file was optimized using Percona tools, i.e., buffers are increased to 8 or 10 GB, etc. (I post it below.)

Even though MySQL supports multi-core processors, a single query seems to be executed on only one core, which is why increasing the number of cores does not bring any benefit.

I wonder if other (preferably Windows-based) databases provide better performance in such a setting? Can I expect an improvement by moving to MS SQL, which is reputed to be a more full-fledged database of the stature of Oracle?

I post my MySQL 5.6 my.ini file for completeness: (I override some Percona recommendations as indicated by #'s)

# Generated by Percona Configuration Wizard (http://tools.percona.com/) version REL5-20120208
# Configuration name MySQL_Server generated for xxx@mail.com at 2013-05-19 23:32:27

[mysql]

# CLIENT #
port                           = 3306
socket                         = C:\ProgramData\MySQL\MySQL Server 5.6\data\mysql.sock

[mysqld]
#performance_schema=off

# GENERAL #
user                           = mysql
default_storage_engine         = InnoDB
socket                         = C:\ProgramData\MySQL\MySQL Server 5.6\data\mysql.sock
pid_file                       = C:\ProgramData\MySQL\MySQL Server 5.6\data\mysql.pid

# MyISAM #
key_buffer_size                = 32M
myisam_recover                 = FORCE,BACKUP

# SAFETY #
max_allowed_packet             = 16M
max_connect_errors             = 1000000
skip_name_resolve
sql_mode                       = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE#,ONLY_FULL_GROUP_BY
sysdate_is_now                 = 1

# DATA STORAGE #
datadir                        = C:\ProgramData\MySQL\MySQL Server 5.6\data\

# BINARY LOGGING #
log_bin                        = C:\ProgramData\MySQL\MySQL Server 5.6\data\mysql-bin
expire_logs_days               = 14
sync_binlog                    = 1

# CACHES AND LIMITS #
tmp_table_size                 = 9G #32M
max_heap_table_size            = 9G #32M
query_cache_type               = 0
query_cache_size               = 0
max_connections                = 500
thread_cache_size              = 50
open_files_limit               = 65535
table_definition_cache         = 1024
table_open_cache               = 2048

# INNODB #
innodb_log_files_in_group      = 2
innodb_log_file_size           = 256M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 9G

# LOGGING #
log_error                      = C:\ProgramData\MySQL\MySQL Server 5.6\data\mysql-error.log
log_queries_not_using_indexes  = 1
slow_query_log                 = 1
slow_query_log_file            = C:\ProgramData\MySQL\MySQL Server 5.6\data\mysql-slow.log

# CUSTOM #
optimizer_search_depth = 0 #meant to cure the "statistics state" bug in some queries
enable-named-pipe

Failed copy job deletes all users

Posted: 28 May 2013 08:25 PM PDT

So, since the progression of this was apparently somewhat difficult to follow the first time around:

I attempted a completely boring, been-done-a-thousand-times-before copy of a database using the copy database wizard with the detach/reattach method.

The copy failed. The log indicates that it was unable to execute a CREATE QUERY action for a particular view, because the datasource for the view did not exist. This is interesting in its own right, as the source most certainly exists, and the view(s) in question are fully functional in the source database. I'm not really clear, just yet, on how significant this is, as I've yet to figure out precisely why this generated an error.

This resulted in the deletion of all non-system user associations from the source database, leaving me with users dbo, information_schema, sys, and guest. Non-system roles were also deleted. Schemas were unaffected.

I have since restored the damaged database from backup. Academically, however, I would like to know the following:

  1. Why would a failed copy operation strip the user associations from the source database?
  2. Is there any sort of maintenance/rebuild/repair operation that could be performed on the source database to repair it?
  3. The loss of the users was immediately obvious, but given the rather mysterious nature of a failed copy job apparently damaging the source database, how concerned should I be about less obvious effects? In theory, I would expect restoring from backup would alleviate this concern, but do I have any cause to worry about, e.g., the master database?

This is entirely repeatable. I've made a handful of copies (manually) for the sake of experimenting with this particular issue, and in each case, the failed copy job obliterates the users and roles from the source database.

Removing the views that generated errors allows the copy to complete, and, as one would expect, produces a copy with identical data, users, etc., in addition to leaving the source database unaltered.

If it's important, I've tried rebuilding the indexes of the system databases, as well as the damaged database, to no appreciable effect.

Is there any way to connect to one server and utilize a linked server using Windows authentication and NTLM?

Posted: 28 May 2013 08:30 PM PDT

I currently have MSSQLS 2005 (Server A) that has a SSAS 2005 linked server (Server B). The powers that be do not want to enable Kerberos authentication, so I'm stuck with NTLM. The problem is that when developers connect from their local workstation to Server A and try to run queries against the linked server, Server B, they are not able to. They are only able to if they RDP into Server A and then run queries against the linked server, Server B. My understanding is that this is due to constraints of NTLM. Is there any way at all I can enable the developers to connect to Server A on their local workstation and be able to run queries against the linked server?

To clarify, the user has access to both the SQL Server instance containing the linked server, as well as the SSAS catalog that the linked server points to.

Additional configuration when deploying Data Tier Applications?

Posted: 28 May 2013 02:20 PM PDT

I'm starting a few new database projects and I'm attempting to create them as Data Tier Applications. There are two items I'm not able to find documentation for: I would like to set the db owner to sa, and to set the initial file size and growth rate. Even if those items are outside the scope of the app, I would expect there to be some way to specify them at publish time, either in SSDT or SSMS. I can find no documentation either way. Is this the case?

Is it bad practice to always create a transaction?

Posted: 28 May 2013 04:32 PM PDT

Is it bad practice to always create a transaction?

I mean, is it good practice to create a transaction for just one simple SELECT?

How much does creating a transaction cost when it is not really necessary?

Even if you are using the READ UNCOMMITTED isolation level, is it bad practice? At that level it shouldn't have problems with locking.
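For concreteness, this is the pattern being asked about (a T-SQL sketch; the table name is hypothetical):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRANSACTION;
    -- a simple read wrapped in an explicit transaction
    SELECT COUNT(*) FROM dbo.SomeTable;   -- hypothetical table
COMMIT TRANSACTION;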

Restoring database backup file gives access error

Posted: 28 May 2013 07:35 PM PDT

I am running SQL Server 2008 and trying to restore from a backup file.

Note: the backup file and folder have read/write access granted to "everyone", and I am running the command as "master".

Running

 RESTORE FILELISTONLY FROM DISK = 'D:\path'  

Gives the following error:

Msg 3201, Level 16, State 2, Line 1
Cannot open backup device 'D:\path'. Operating system error 5(Access is denied.).
Msg 3013, Level 16, State 1, Line 1
RESTORE FILELIST is terminating abnormally.

Thanks, Bruce

SQL Server: subscriber as publisher and hierarchical replication

Posted: 28 May 2013 01:53 PM PDT

In Oracle Streams, one-way replication from one DB to another is a basic building block for many replication topologies (N-way, hierarchical, combined, and so on), and changes can be captured and applied for the same table at the same time.

But I can't find anywhere in the SQL Server documentation whether a table (article) can be a source (publisher) and a destination (subscriber) simultaneously. Is it possible, for example, to set up bidirectional replication using two pairs of publisher->subscriber transactional replication?

Naturally I am more interested in a multi-level hierarchy: for example, one table is replicated from the root node through intermediate nodes to the leaves, a second one from the leaves to the root, and a third one bidirectionally, and for all tables the intermediate nodes can perform DMLs that are replicated too. With Oracle Streams this is easy to achieve, and even more sophisticated configurations are possible, but are they with SQL Server?

UPDATE: It seems it is possible with merge replication and republishing (http://msdn.microsoft.com/en-us/library/ms152553.aspx), but what about transactional replication?

How to select nodes where all children are satisfied?

Posted: 28 May 2013 02:46 PM PDT

I have a tree structure of light bulbs. I want to turn on all the light bulbs starting from the leaves of the tree. A light bulb cannot be turned on unless all its immediate children are turned on.

The relation between the nodes in the tree structure is represented by table A:

Table A:

node_id integer
child_node_id integer

Table B represents the nodes in the tree:

Table B:

id integer
state boolean

The state column of table B represents the two states: true = on and false = off.

Question:
I would like to select all the light bulbs which are turned off AND have all their immediate children turned on.

This is probably very simple, but I can't seem to get my head around it.
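A minimal sketch of one way to express this, using the table names from the question and a PostgreSQL-style boolean (note that leaf bulbs, which have no children, satisfy the condition vacuously, which is what lets the process start at the leaves):

SELECT b.id
FROM B AS b
WHERE b.state = false            -- bulb is currently off
  AND NOT EXISTS (               -- and no immediate child is off
        SELECT 1
        FROM A AS a
        JOIN B AS c ON c.id = a.child_node_id
        WHERE a.node_id = b.id
          AND c.state = false
      );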

Configuring PostgreSQL for read performance

Posted: 28 May 2013 04:31 PM PDT

Our system writes a lot of data (it is a kind of Big Data system). Write performance is good enough for our needs, but read performance is really too slow.

The primary key (constraint) structure is similar for all our tables: timestamp(Timestamp) ; index(smallint) ; key(integer)

A table can have millions of rows, even billions, and a read request is usually for a specific period (timestamp / index) and tag. It's common to have a query that returns around 200k rows. Currently we can read about 15k rows per second, but we need to be 10 times faster. Is this possible, and if so, how?

Note: PostgreSQL is packaged with our software, so the hardware is different from one client to another.

[Edit] Added details below; performance was better in this test because I don't have access to the real setup right now. I will update as soon as I can access it.

[Edit2] Applied dezso's suggestions; see the configuration changes below and the specs of the server used for testing. Yes, it's a VM used for testing; the VM host is a Server 2008 R2 x64 with 24.0 GB of RAM.

Server Spec (Virtual Machine VMWare)

Server 2008 R2 x64
2.00 GB of memory
Intel Xeon W3520 @ 2.67GHz (2 cores)

postgresql.conf optimisations

shared_buffers = 512MB                 (default: 32MB)
effective_cache_size = 1024MB          (default: 128MB)
checkpoint_segments = 32               (default: 3)
checkpoint_completion_target = 0.9     (default: 0.5)
default_statistics_target = 1000       (default: 100)
work_mem = 100MB                       (default: 1MB)
maintenance_work_mem = 256MB           (default: 16MB)

Table Definition

CREATE TABLE "AnalogTransition"
(
  "KeyTag" integer NOT NULL,
  "Timestamp" timestamp with time zone NOT NULL,
  "TimestampQuality" smallint,
  "TimestampIndex" smallint NOT NULL,
  "Value" numeric,
  "Quality" boolean,
  "QualityFlags" smallint,
  "UpdateTimestamp" timestamp without time zone, -- (UTC)
  CONSTRAINT "PK_AnalogTransition" PRIMARY KEY ("Timestamp", "TimestampIndex", "KeyTag"),
  CONSTRAINT "FK_AnalogTransition_Tag" FOREIGN KEY ("KeyTag")
      REFERENCES "Tag" ("Key") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE,
  autovacuum_enabled=true
);

Query

The query takes about 30 seconds to execute in pgAdmin3, but we would like to get the same result in under 5 seconds if possible.

SELECT
    "AnalogTransition"."KeyTag",
    "AnalogTransition"."Timestamp" AT TIME ZONE 'UTC',
    "AnalogTransition"."TimestampQuality",
    "AnalogTransition"."TimestampIndex",
    "AnalogTransition"."Value",
    "AnalogTransition"."Quality",
    "AnalogTransition"."QualityFlags",
    "AnalogTransition"."UpdateTimestamp"
FROM "AnalogTransition"
WHERE "AnalogTransition"."Timestamp" >= '2013-05-16 00:00:00.000'
  AND "AnalogTransition"."Timestamp" <= '2013-05-17 00:00:00.00'
  AND ("AnalogTransition"."KeyTag" = 56 OR "AnalogTransition"."KeyTag" = 57
       OR "AnalogTransition"."KeyTag" = 58 OR "AnalogTransition"."KeyTag" = 59
       OR "AnalogTransition"."KeyTag" = 60)
ORDER BY "AnalogTransition"."Timestamp" DESC, "AnalogTransition"."TimestampIndex" DESC
LIMIT 500000;

Explain (Edit2: Updated)

"Limit  (cost=0.00..125668.31 rows=500000 width=33) (actual time=2.193..3241.319 rows=500000 loops=1)"  "  Buffers: shared hit=190147"  "  ->  Index Scan Backward using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..389244.53 rows=1548698 width=33) (actual time=2.187..1893.283 rows=500000 loops=1)"  "        Index Cond: (("Timestamp" >= '2013-05-16 01:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-16 15:00:00-04'::timestamp with time zone))"  "        Filter: (("KeyTag" = 56) OR ("KeyTag" = 57) OR ("KeyTag" = 58) OR ("KeyTag" = 59) OR ("KeyTag" = 60))"  "        Buffers: shared hit=190147"  "Total runtime: 3863.028 ms"  

In my latest test, it took 7 minutes to select my data! See below.

Explain (Edit3)

"Limit  (cost=0.00..313554.08 rows=250001 width=35) (actual time=0.040..410721.033 rows=250001 loops=1)"  "  ->  Index Scan using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..971400.46 rows=774511 width=35) (actual time=0.037..410088.960 rows=250001 loops=1)"  "        Index Cond: (("Timestamp" >= '2013-05-22 20:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-24 20:00:00-04'::timestamp with time zone) AND ("KeyTag" = 16))"  "Total runtime: 411044.175 ms"  

Thanks a lot for the help!

What does the size of a SQL table depend on?

Posted: 28 May 2013 05:35 PM PDT

By SQL here I mean any SQL-like database such as SQL Server, MySQL, SQLite, or even MS Access. I want to know what the size of a table depends on: does it depend only on the number of rows in a table with a fixed structure, or also on the content of the cells in the table? For example:

I have a table with the following fixed structure:

create table SampleTable (
    ID int primary key,
    Message varchar(500)
)

And here is the table with 1 row:

ID    |     Message
1     |     I love .NET        --11 characters for Message

and here is also a table with 1 row:

ID    |     Message
1     |     I also love Java   --16 characters for Message

If the size depends only on the number of rows, the two tables above would have the same size; if it also depends on the cells' content, the second table would be larger. Which is it? I care about this because, in some cases, I want to set the maximum number of characters for a field as high as possible (8000 in SQL Server), so the user is free to input almost anything, but I'm afraid of making my database file unnecessarily and costly large.
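For what it's worth, with a variable-length type such as varchar the stored size is driven by the actual content, and this can be checked directly. A sketch in T-SQL (DATALENGTH returns the number of bytes actually stored):

SELECT ID, Message, DATALENGTH(Message) AS bytes_stored
FROM SampleTable;
-- 'I love .NET' stores 11 bytes and 'I also love Java' stores 16 (plus a small
-- per-value length overhead), regardless of the declared varchar(500) cap.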

Your help would be highly appreciated!

How to migrate SQL Server to MySQL

Posted: 28 May 2013 05:13 PM PDT

I'm trying to migrate a SQL Server db to MySQL, and I'm having a tough time. I've tried:

  • MySQLWorkbench -- migration tool fails because my login only shows views in SQL Server, not the tables themselves, and so I can't get past the "Schemata Selection" screen.

  • sqlcmd and bcp -- both fail because they can't export csv properly. They don't quote strings with commas in them. Plus, they don't create table schemas in MySQL.

  • SQLyog -- just fails. Creates a MySQL table from a MS SQL view ok, but doesn't import the data. Also takes a really long time on small tables, and I have to import millions of rows.

Anyone have suggestions?

Unique Non Clustered Column in Partitioned table

Posted: 28 May 2013 12:54 PM PDT

We have partitioned one of our SQL Server tables on a clustered-index datetime column. We have a non-clustered primary key on an identity column, set up as a non-aligned index. There is one more column, xyz, of uniqueidentifier datatype, which also has a unique non-clustered index.

The problem is that most of our queries use the xyz column to retrieve data, since xyz is a GUID and unique.

At this point we are not sure what the impact would be of an aligned versus a non-aligned index strategy for the xyz (uniqueidentifier) column. Please advise.

Pattern similarity in time-series [closed]

Posted: 28 May 2013 12:14 PM PDT

I have a PostgreSQL database and my data set includes temperature time-series for two stations. I would like to compare the similarity of their patterns over similar time sequences.

For station number 1:

+------------+------------------+
|temperature |dt                |
+------------+------------------+
| 4.22       |9/15/2007 12:12:12|
| 5.11       |9/15/2007 13:14:16|
| 6.16       |9/15/2007 14:16:02|
| 6.01       |9/15/2007 15:18:23|
| 7.09       |9/15/2007 16:21:01|
+------------+------------------+

For station number 2:

+------------+------------------+
|temperature |dt                |
+------------+------------------+
| 3.12       |9/15/2007 12:12:12|
| 4.15       |9/15/2007 13:14:16|
| 5.26       |9/15/2007 14:16:02|
| 6.15       |9/15/2007 15:18:23|
| 7.23       |9/15/2007 16:21:01|
+------------+------------------+

I would like to ignore the actual values and compare the time-series only with respect to their pattern. Moreover, a time-slice-by-time-slice comparison is enough for my work. Please help me use a discrete Fourier transform or slopes to compare the patterns.

Can I use a foreign key index as a shortcut to getting a row count in an INNODB table?

Posted: 28 May 2013 03:13 PM PDT

I have a table that has a large number of rows in it.

The primary key (an auto-incrementing integer) is, by default, indexed.

While waiting for a row count to be returned I ran an EXPLAIN in another window, and the results were as follows:

mysql> SELECT COUNT(1) FROM `gauge_data`;
+----------+
| COUNT(1) |
+----------+
| 25453476 |
+----------+
1 row in set (2 min 36.20 sec)

mysql> EXPLAIN SELECT COUNT(1) FROM `gauge_data`;
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
| id | select_type | table      | type  | possible_keys | key             | key_len | ref  | rows     | Extra       |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
|  1 | SIMPLE      | gauge_data | index | NULL          | gauge_data_FI_1 | 5       | NULL | 24596487 | Using index |
+----+-------------+------------+-------+---------------+-----------------+---------+------+----------+-------------+
1 row in set (0.13 sec)

Since the primary key is guaranteed to be unique, can I just take the number of rows from the EXPLAIN and use that as the row count of the table?

BTW, I believe the difference in numbers is due to the fact that more data is continually being added to this table.
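Note that the rows figure shown by EXPLAIN is an InnoDB estimate, not an exact count. The same estimate is also exposed in the catalog (a sketch; the schema name is a placeholder):

SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'my_schema'    -- hypothetical schema name
  AND TABLE_NAME = 'gauge_data';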

Database setup/design for multiple services

Posted: 28 May 2013 02:13 PM PDT

I am working on a new MySQL database that will store order/customer information for around 15 different services. Seven of the services are similar in terms of the information stored, with a few of them requiring one additional piece of information. The other 7-8 services are also similar to each other, but not as similar to the first 7. So my question is: how should I break this down into a MySQL schema?

Should each service have its own table? If so, what is the best way to link the tables? Any advice is greatly appreciated!
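One common shape for this kind of requirement is a shared parent table plus per-service detail tables. A sketch (all names hypothetical):

CREATE TABLE orders (
    order_id    INT PRIMARY KEY AUTO_INCREMENT,
    service_id  INT NOT NULL,           -- which of the ~15 services
    customer_id INT NOT NULL,
    created_at  DATETIME NOT NULL
);

CREATE TABLE order_details_service_a (
    order_id    INT PRIMARY KEY,        -- 1:1 with orders
    extra_field VARCHAR(100),           -- the service-specific attribute
    FOREIGN KEY (order_id) REFERENCES orders (order_id)
);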

Also, here is a sample of the type of info that will be stored:

[screenshot of sample data omitted]

How to connect multiple users/schemas in Oracle 11g?

Posted: 28 May 2013 12:54 PM PDT

I'm almost new to databases, so I think I'm doing something wrong.

  • I have the database created.
  • I created 3 users: 1 is an Admin, 2 are just users.
  • I created 2 roles: 1 for the Admin and another for the other users.

I'm learning that schemas in Oracle are different from SQL Server. In SQL Server I created tables assigned to a schema, for example CREATE TABLE schema1.table1, and then every user assigned to that DB was able to see them all; in Oracle, I think a schema is the sum of the tables, sequences, synonyms, etc. that a user owns. (If not, please correct me.)

I want to achieve that in Oracle, so I created the users/schemas; after that, I connected as user1 and created its tables with that user; later, I connected as user2 and created the tables related to that user.

Now, when I try to run the ALTER relating user1's tables and user2's tables while connected as the admin, it says I don't have enough privileges.

The ALTER I'm trying to run:

ALTER TABLE user1.PhoneTable
    ADD (CONSTRAINT C_001 FOREIGN KEY (Status)
             REFERENCES user2.ClientTable (Status) ON DELETE SET NULL);

I created those users because if I want to see all the tables related to the Phone (user1), I just connect as that user and that's all; I simply want to keep the DB in some order.

Admin privileges:

GRANT
    CREATE SESSION,
    UNLIMITED TABLESPACE,
    CREATE TABLE,
    DROP ANY TABLE,
    CREATE CLUSTER,
    CREATE SYNONYM,
    CREATE PUBLIC SYNONYM,
    CREATE VIEW,
    CREATE SEQUENCE,
    CREATE DATABASE LINK,
    CREATE PROCEDURE,
    CREATE TRIGGER,
    CREATE MATERIALIZED VIEW,
    CREATE ANY DIRECTORY,
    DROP ANY DIRECTORY,
    CREATE TYPE,
    CREATE LIBRARY,
    CREATE OPERATOR,
    CREATE INDEXTYPE,
    CREATE DIMENSION,
    CREATE ANY CONTEXT,
    SELECT ANY DICTIONARY,
    CREATE JOB,
    ALTER ANY TABLE
TO myAdmin;

User privileges:

GRANT
    CREATE session,
    CREATE table,
    CREATE view,
    CREATE procedure,
    CREATE synonym,
    ALTER ANY table,
    ALTER view,
    ALTER procedure,
    ALTER synonym,
    DROP table,
    DROP view,
    DROP procedure,
    DROP synonym
TO myUsers;

MySQL join written 2 ways: how does it work, and why the performance difference?

Posted: 28 May 2013 12:54 PM PDT

I have 2 different ways to write a query, which show a performance difference when executed. The 1st way is:

EXPLAIN select SOME_COLUMNS
from
( select *
  from A
  where CONDITION
) p
inner join
( select *
  from B
) st
on p.id = st.id;

and the output of this query returned:

"id"   "select_type"   "table"    "type"   "possible_keys"   "key"    "key_len"   "ref"   "rows"   "Extra"     1       PRIMARY     derived3  ALL       NULL                 NULL    NULL    NULL      25607     "   "   1        PRIMARY    derived2  ALL        NULL                NULL     NULL    NULL      21037   Using where; Using join buffer   3        DERIVED             A    ALL          NULL               NULL      NULL    NULL      23202    "   "   2        DERIVED             B    ref        IDX_A_TYPE_ID   IDX_A_ID  98      "   "        12411  Using where  

The other way is

EXPLAIN SELECT SOME_COLUMNS
FROM A p, B s
WHERE p.id = s.id
AND p.CONDITION;

The output of this looks like this:

id  select_type  table  type  possible_keys           key            key_len  ref                   rows   Extra
1   SIMPLE       p      ref   PRIMARY,IDX_A_TYPE_ID   IDX_A_TYPE_ID  98       const                 12411  Using where
1   SIMPLE       s      ref   PRIMARY                 PRIMARY        4        local_db.p.entity_id  1

Why is there such a difference in the query execution plans between the two, and why is the number of rows fetched in each step of execution higher in the 1st approach? Please explain.

How to search a MySQL database with encrypted fields

Posted: 28 May 2013 12:43 PM PDT

Suppose I need to encrypt certain table fields of a MySQL database. Additionally, I need to search some of those encrypted fields.

How would one search those fields anyway?

Decrypting each record one by one is not an option: suppose I have many thousands of records. It would take too much time and space to decrypt each record and check whether it matches the search.
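One widely used pattern for exact-match search over encrypted columns (a sketch, not from the original question; it uses MySQL's built-in AES_ENCRYPT and SHA2 functions, and the table and key names are hypothetical) is to store a keyed hash, sometimes called a blind index, next to the ciphertext and search on the hash:

CREATE TABLE customer (
    id         INT PRIMARY KEY AUTO_INCREMENT,
    email_enc  VARBINARY(512) NOT NULL,   -- ciphertext
    email_hash BINARY(32)     NOT NULL,   -- keyed hash used for equality search
    INDEX idx_email_hash (email_hash)
);

SET @aes_key = 'encryption-key';          -- hypothetical keys
SET @mac_key = 'separate-hash-key';

INSERT INTO customer (email_enc, email_hash)
VALUES (AES_ENCRYPT('alice@example.com', @aes_key),
        UNHEX(SHA2(CONCAT(@mac_key, 'alice@example.com'), 256)));

-- Equality search hits the index; no row-by-row decryption is needed.
SELECT AES_DECRYPT(email_enc, @aes_key) AS email
FROM customer
WHERE email_hash = UNHEX(SHA2(CONCAT(@mac_key, 'alice@example.com'), 256));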

UPDATE 2012-09-07

Adding further details to the database schema would be OK, since I'm about to implement a new application. Furthermore, I need to extend applications currently running in production, but even for those applications, adding further details would be OK.

UPDATE 2012-09-08

Encryption is the crux of this question.

Access restrictions, as proposed by some answers, are already in place, but they do not satisfy the formal requirement to encrypt the data.

This formal requirement is not the Payment Card Industry Data Security Standard [PCI].

How to load tables from more than one MS Access DB into a SQL Server DB

Posted: 28 May 2013 12:36 PM PDT

I'm just getting into SQL Server 2008 R2 SSIS, so forgive me if this is a basic question. I'm building an application using Integration Services.

One of the folders on my computer contains more than one MS Access DB, and I want to load all of these Access tables, with their data, into a SQL Server DB. I want to use SSIS to load all the tables. Which SSIS component will help me load all the Access tables into my SQL Server DB?

I want to create an SSIS package for importing an MS Access table into SQL Server, then running some stored procedures on this database, and finally exporting the result to Excel.

I am comfortable writing queries and working in Management Studio, but I don't know how to actually design an SSIS package. I need a helping hand to do this.

If you have any questions, please ask.

MySQL 5.5 strange bug

Posted: 28 May 2013 03:16 PM PDT

I am trying to create a database in MySQL Server 5.5. The database is created; I know this because when I run

SHOW DATABASES;   

It shows me my database in the list of all databases from the command line.

But when I go inside

C:\Program Files\MySQL\MySQL Server 5.5\data\

my database is not visible there. In fact, with

SHOW DATABASES;   

it shows me 6 databases in the list, but inside

C:\Program Files\MySQL\MySQL Server 5.5\data\

there are only two databases.

It's a really strange error I'm experiencing: the database is created, but not actually created. It feels like the command line is cheating me!
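One thing worth checking: the running server may simply be using a different data directory than the folder being browsed. It can be asked directly:

SHOW VARIABLES LIKE 'datadir';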

Thanks for the help in advance!

How to send mail from the Gmail SMTP server using SSIS

Posted: 28 May 2013 01:34 PM PDT

I'm just getting into SQL Server SSIS, so forgive me if this is a basic question. Using SSIS, I want to send mail through the Gmail SMTP server.

SMTP host    = smtp.gmail.com
Port         = 587
From_Mail_ID = xyz@gmail.com
To_Mail_ID   = abc@gmail.com

Using the above information, I want to send mail from SSIS. How do I do this?
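For context: the SSIS Send Mail Task's SMTP connection manager supports only Windows Authentication or anonymous access, not the username/password login Gmail requires, so common workarounds are a Script Task using System.Net.Mail or calling SQL Server Database Mail from an Execute SQL Task. A sketch of the latter (the profile name is hypothetical and must first be configured for smtp.gmail.com:587 with SSL):

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'GmailProfile',     -- hypothetical Database Mail profile
     @recipients   = 'abc@gmail.com',
     @subject      = 'Test from SSIS',
     @body         = 'Sent through smtp.gmail.com:587 via Database Mail.';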

If you have any questions, please ask.

How to get unique records

Posted: 28 May 2013 12:38 PM PDT

Table structure

CREATE TABLE [dbo].[Order Details2](
    [OrderID] [int] NOT NULL,
    [ProductID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL DEFAULT (0),
    [Quantity] [smallint] NOT NULL DEFAULT (1),
    [Discount] [real] NOT NULL DEFAULT (0),
 CONSTRAINT [PK_Order_Details2] PRIMARY KEY CLUSTERED
 (
    [OrderID] ASC,
    [ProductID] ASC
 ) WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Table values

+---------+-----------+-----------+----------+----------+
| OrderID | ProductID | UnitPrice | Quantity | Discount |
+---------+-----------+-----------+----------+----------+
|   10248 |        11 | 14        |       12 | 0        |
|   10248 |        42 | 9.8       |       10 | 0        |
|   10248 |        72 | 34.8      |        5 | 0        |
|   10249 |        14 | 18.6      |        9 | 0        |
|   10249 |        51 | 42.4      |       40 | 0        |
|   10250 |        41 | 7.7       |       10 | 0        |
|   10250 |        51 | 42.4      |       35 | 0.15     |
|   10250 |        65 | 16.8      |       15 | 0.15     |
|   10251 |        22 | 16.8      |        6 | 0.05     |
|   10251 |        57 | 15.6      |       15 | 0.05     |
|   10251 |        65 | 16.8      |       20 | 0        |
+---------+-----------+-----------+----------+----------+

Required output

+---------+-----------+-----------+----------+----------+
| OrderID | ProductID | UnitPrice | Quantity | Discount |
+---------+-----------+-----------+----------+----------+
|   10248 |        72 | 14        |       12 |        0 |
|   10249 |        51 | 18.6      |        9 |        0 |
|   10250 |        65 | 7.7       |       10 |        0 |
+---------+-----------+-----------+----------+----------+

My table has a composite primary key on two columns. I want to get unique records from this table: the required output contains, for each OrderID, only the row with the maximum ProductID (just one row). I need a helping hand to solve this.
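A sketch of one way to express this in T-SQL (keep, for each OrderID, only the row with the highest ProductID):

SELECT od.*
FROM [dbo].[Order Details2] AS od
WHERE od.ProductID = (SELECT MAX(od2.ProductID)
                      FROM [dbo].[Order Details2] AS od2
                      WHERE od2.OrderID = od.OrderID);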

If you have any questions, please ask.

How to get server event notification

Posted: 28 May 2013 12:37 PM PDT

I'm just getting into SQL Server, so forgive me if this is a basic question. I'm building an application using .NET 4.0, VS 2010, and SQL Server 2008.

For my application I need to detect what happens on the server: in the DB server, I need to detect which table had records inserted/updated/deleted, and which records were affected.

Is there any mechanism or technique to detect these server-side changes?
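SQL Server 2008 offers several mechanisms for this, such as DML triggers, Change Data Capture, and Query Notifications (SqlDependency in .NET). As a minimal illustration, a DML audit trigger (the watched table and the log table are hypothetical):

CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders                          -- hypothetical table to watch
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- nothing to log if the statement affected zero rows
    IF NOT EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
        RETURN;
    INSERT INTO dbo.AuditLog (TableName, Action, ChangedAt)   -- hypothetical log table
    SELECT 'Orders',
           CASE WHEN EXISTS (SELECT 1 FROM inserted)
                 AND EXISTS (SELECT 1 FROM deleted) THEN 'UPDATE'
                WHEN EXISTS (SELECT 1 FROM inserted) THEN 'INSERT'
                ELSE 'DELETE'
           END,
           SYSDATETIME();
END;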

If you have any questions, please ask.

Examples of SQL transaction procedures for sales tracking or a financial database

Posted: 28 May 2013 12:54 PM PDT

I am making a database for an accounting/sales type system, similar to a car sales database, and would like to write some transactions for the following real-world actions:

  1. Salesman creates a new product shipped onto the floor (itempk, car make, year, price).
  2. Salesman changes a price.
  3. Salesman creates a sale entry for a product sold (salespk, itemforeignkey, price sold, salesman).
  4. Salesman cancels an item for a removed product.
  5. Salesman cancels a sale for a cancelled sale.

The examples I have found online are too generic, like "this is a transaction"; I would like something resembling what I am trying to do, in order to understand it.

Does anybody have some good similar or related SQL examples I can look at to design these? Do people use transactions for sales databases? Or, if you have done this kind of SQL transaction before, could you outline how these could be written?
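A sketch of action 3 (recording a sale) as one atomic stored procedure, in T-SQL; the table and column names are hypothetical:

CREATE PROCEDURE dbo.RecordSale
    @ItemPK int, @PriceSold money, @Salesman varchar(50)
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.Sales (ItemFK, PriceSold, Salesman)
        VALUES (@ItemPK, @PriceSold, @Salesman);

        UPDATE dbo.Items                 -- mark the floor item as sold
        SET Status = 'SOLD'
        WHERE ItemPK = @ItemPK;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        RAISERROR('RecordSale failed; nothing was recorded.', 16, 1);
    END CATCH
END;

The point of the transaction is that the INSERT and the UPDATE either both happen or neither does; a cancellation (actions 4 and 5) would follow the same pattern in reverse.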

My thread on Stack Overflow so far (closed as not a real question): Need example SQL transaction procedures for sales tracking or financial database

Latest update: the user will send new inputs, changes, and cancellations from a C# application. Application data:

  • Products On Display (the parent node, which has 3 child nodes)

  • Sales (child node of Products On Display)

  • Product Custom Features (child node of Products On Display)

  • Product Price / current status (child node of Products On Display)

The C# app will package that data into XML format and then execute some SQL stored procedures, with transactions holding together the XML-to-table conversion into SQL tables designed with the same parent/child node structure, using something like what is described in the answers to this related question on Stack Overflow: http://stackoverflow.com/q/2756773/613799

I wish there were a book on designing multi-user sales databases, and the stored-procedure transactions used by the related user apps, from scratch: app -> xml -> database. Please let me know if you know of a good one, or a chapter of one.
