Sunday, July 28, 2013

[how to] Database design for user information in mySQL


Database design for user information in mySQL

Posted: 28 Jul 2013 09:00 PM PDT

I'm trying to design an app where users take a new survey everyday (that's not the main part of app but it's an important part). I've never designed a database before so what I'm trying to do might not make any sense. I'm using MySQL and I have a table for user information (user_id, name, password, etc). What I plan on doing is making a new table for each new survey and linking it with the user_id. So basically, I would have one table having a relationship with hundreds of other tables. Is that how it would work? I thought about adding a column to the user table for every survey but I read that it's bad design to add columns often.
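A more conventional design keeps one table per entity rather than one table per survey. A minimal sketch (table and column names here are illustrative, not from the question; `users` stands for the existing user-information table):

```sql
-- One row per survey, one row per user answer; no schema change when a new survey is added.
CREATE TABLE surveys (
    survey_id   INT PRIMARY KEY AUTO_INCREMENT,
    survey_date DATE NOT NULL
);

CREATE TABLE responses (
    response_id INT PRIMARY KEY AUTO_INCREMENT,
    user_id     INT NOT NULL,           -- FK to the existing users table
    survey_id   INT NOT NULL,
    answer      TEXT,
    FOREIGN KEY (user_id)   REFERENCES users(user_id),
    FOREIGN KEY (survey_id) REFERENCES surveys(survey_id)
);
```

Adding a new daily survey then means inserting a row into surveys, not creating a new table or column.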

How can I drop a stored procedure whose name is in a variable?

Posted: 28 Jul 2013 07:17 PM PDT

Given a variable that contains a stored procedure name:

DECLARE @stored_procedure_name sysname;
SET @stored_procedure_name = 'some_stored_procedure_name';

How can I drop the stored procedure?
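One way to do this in SQL Server is dynamic SQL; a sketch, with QUOTENAME guarding against injection through the variable:

```sql
DECLARE @stored_procedure_name sysname;
SET @stored_procedure_name = 'some_stored_procedure_name';

DECLARE @sql nvarchar(max) = N'DROP PROCEDURE ' + QUOTENAME(@stored_procedure_name) + N';';
EXEC sp_executesql @sql;
```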

Dual column Primary Key [on hold]

Posted: 28 Jul 2013 02:41 PM PDT

Columns: Order ID, Customer ID, Inventory ID, Number ordered

Number ordered would depend on Order ID and Inventory ID right? Because an order ID could contain several Inventory IDs. So would Order ID and Inventory ID need to be my primary Key, and if so, would they need to be next to each other? Also is this in 3nf? At first I was thinking that order ID would be my primary key and customer id and inventory id would be foreign keys. Can Inventory ID be part of the primary key and a foreign key? Thanks in advance for any help!
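A composite primary key on an order-lines table might look like this (table and referenced-table names are illustrative; a column can be part of the primary key and a foreign key at the same time):

```sql
CREATE TABLE OrderLines (
    OrderID       INT NOT NULL,
    InventoryID   INT NOT NULL,
    NumberOrdered INT NOT NULL,
    PRIMARY KEY (OrderID, InventoryID),             -- composite key: one row per product per order
    FOREIGN KEY (OrderID)     REFERENCES Orders(OrderID),
    FOREIGN KEY (InventoryID) REFERENCES Inventory(InventoryID)
);
```

Customer ID would then belong on the Orders table rather than in the key of the line-items table, since it depends only on the order.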

max_allowed_packet setting for Windows 7

Posted: 28 Jul 2013 06:33 PM PDT

We had a Windows XP machine with an older version of MySQL. Whenever we needed to import huge data sets, we simply set max_allowed_packet=1500M in the configuration file under C:\Program Files\MySQL\MySQL Server 5.1. Now we have installed the latest MySQL 5.6.12. We are trying to import the same data and find there are several my.ini files, one in Program Files and another in ProgramData. We changed both, yet we still can't import the data. Where else do we need to change it? Even after the change, we restarted the service from Administrative Tools.
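The value the server actually uses must be in the [mysqld] section of the file the Windows service was started with; a sketch (C:\ProgramData\MySQL\MySQL Server 5.6\my.ini is the usual installer default, but worth verifying):

```ini
[mysqld]
max_allowed_packet=1500M
```

After restarting the service, `SHOW VARIABLES LIKE 'max_allowed_packet';` confirms which value is in effect; the mysql client doing the import may also need its own `--max_allowed_packet` option.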

Why is my Amazon RDS instance so slow?

Posted: 28 Jul 2013 08:41 AM PDT

I currently run a Windows server with SQL Server Express. I'd like to follow best practice and separate the application from the database, so I've created an RDS instance and exported my database.

But... it's painfully slow. I'm running a test query from my local machine that selects 12,000 rows; it takes less than 1 second against my original setup but 16 seconds against RDS. Surely it can't be a question of specification; my small RDS instance actually has a better spec than my Windows server.

What could be going on here?

Running out of Transaction Log space during Alter Table

Posted: 28 Jul 2013 01:10 PM PDT

I'm running an alter table, alter column on a table with nearly 30 million rows and SQL Azure fails after approximately 18 minutes saying that The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction.

I'm guessing that it's not possible to break this down into modifying fewer rows at a time, so I'm wondering what my options are for making this change to the database. SQL Azure will not let me change the size of the transaction log (limited to 1 GB).

I'm guessing that my best bet is going to be creating a new table with the new layout, migrating the data across into that one, deleting the original table and then renaming the new table to match the name of the old table. If that's the case, how is it best to structure those commands?

Scheduled downtime for our system is not an issue currently so this operation can take as long as it needs to.
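The copy-and-rename approach described above can be sketched like this (table and column names are placeholders; the batch size would need tuning so each transaction stays within the log limit, and keyset pagination on the primary key would be faster than NOT EXISTS at 30 million rows):

```sql
-- 1. Create the new table with the altered column definition (layout illustrative)
CREATE TABLE dbo.MyTable_New (
    Id   INT NOT NULL PRIMARY KEY,
    Col1 NVARCHAR(400) NOT NULL       -- the column being altered
);

-- 2. Copy in small batches so each transaction's log usage stays bounded
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.MyTable_New (Id, Col1)
    SELECT TOP (10000) s.Id, s.Col1
    FROM dbo.MyTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable_New AS t WHERE t.Id = s.Id);

    IF @@ROWCOUNT = 0 BREAK;          -- nothing left to copy
END

-- 3. Swap the tables
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';
```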

When MySQL table gets too large

Posted: 28 Jul 2013 12:07 PM PDT

I have a really large MySQL table, currently counting more than 12 billion rows (storage engine: InnoDB). Over time the problems keep mounting. The biggest is that altering the table takes a really long time (more than 48 hours), and every minute of downtime is one minute too many. So whenever I need to add a new index or a new column, the altering problem is there.

Are 12 billion rows too much for MySQL to handle? What would you suggest to do?
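One commonly used approach for online schema changes on large InnoDB tables is Percona's pt-online-schema-change, which copies rows in chunks into a shadow table with the new definition and then swaps it in, keeping the original table writable throughout. A sketch (database, table, and column names are placeholders):

```shell
pt-online-schema-change \
  --alter "ADD COLUMN new_col INT NULL, ADD INDEX idx_new_col (new_col)" \
  D=mydb,t=mytable \
  --execute
```

This trades total elapsed time (the copy still takes a long time) for near-zero downtime during the change.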

Adding an index to a system catalog in Postgres

Posted: 28 Jul 2013 08:47 PM PDT

I'm having a situation very similar to the one described here:

I've got a SaaS situation where I'm using 1000+ schemas in a single database (each schema contains the same tables, just different data per tenant). I used schemas so that the shared app servers could share a connection to the single database for all schemas. Things are working fine.

and, while the application itself appears to be working fine, some queries involving system catalogs are very slow. Also, psql's auto-completion is totally useless and \dt is very slow.

In particular, I need to calculate the on-disk size of each schema using something like this:

SELECT sum(pg_total_relation_size(c.oid)) AS size
FROM   pg_namespace n
JOIN   pg_class     c ON c.relnamespace = n.oid
WHERE  n.nspname = 'abbiecarmer'
AND    c.relkind = 'r';

which is quite slow. Looking at the query plan, I see

Aggregate  (cost=136903.16..136903.17 rows=1 width=4) (actual time=1024.420..1024.420 rows=1 loops=1)
  ->  Hash Join  (cost=8.28..136902.86 rows=59 width=4) (actual time=143.247..1016.749 rows=60 loops=1)
        Hash Cond: (c.relnamespace = n.oid)
        ->  Seq Scan on pg_class c  (cost=0.00..133645.24 rows=866333 width=8) (actual time=0.045..943.029 rows=879788 loops=1)
              Filter: (relkind = 'r'::"char")
              Rows Removed by Filter: 2610112
        ->  Hash  (cost=8.27..8.27 rows=1 width=4) (actual time=0.032..0.032 rows=1 loops=1)
              Buckets: 1024  Batches: 1  Memory Usage: 1kB
              ->  Index Scan using pg_namespace_nspname_index on pg_namespace n  (cost=0.00..8.27 rows=1 width=4) (actual time=0.029..0.030 rows=1 loops=1)
                    Index Cond: (nspname = 'abbiecarmer'::name)
Total runtime: 1024.476 ms

Which, if I understand it right, tells that 90% of the query time is spent sequentially scanning pg_class relation.

I stopped postmaster, ran the backend in single-user mode and added the following indexes:

create index pg_class_relnamespace_index on pg_class(relnamespace);
REINDEX INDEX pg_class_relnamespace_index;

create index pg_class_reltablespace_index on pg_class(reltablespace);
REINDEX INDEX pg_class_reltablespace_index;

(I've also got thousands of tablespaces too). Now the query is ~100 times faster and the plan looks much nicer:

Aggregate  (cost=846.91..846.92 rows=1 width=4) (actual time=10.609..10.610 rows=1 loops=1)
  ->  Nested Loop  (cost=0.00..846.61 rows=60 width=4) (actual time=0.069..0.320 rows=60 loops=1)
        ->  Index Scan using pg_namespace_nspname_index on pg_namespace n  (cost=0.00..8.27 rows=1 width=4) (actual time=0.023..0.024 rows=1 loops=1)
              Index Cond: (nspname = 'abbiecarmer'::name)
        ->  Index Scan using pg_class_relnamespace_index on pg_class c  (cost=0.00..837.59 rows=75 width=8) (actual time=0.043..0.271 rows=60 loops=1)
              Index Cond: (relnamespace = n.oid)
              Filter: (relkind = 'r'::"char")
              Rows Removed by Filter: 102
Total runtime: 10.696 ms

However, in the above thread, Tom Lane, who is one of Postgres core contributors, says:

There are a lot of gotchas here, notably that the session in which you create the index won't know it's there (so in this case, a reindex on pg_class would likely be advisable afterwards). I still think you'd be nuts to try it on a production database, but ...

I'm also worried by the fact that modification of system catalogs seemed to be completely disabled in Postgres 9.0 and 9.1 (I'm using 9.2) - I suppose it was done for a reason?

So, the question is: what are the gotchas in adding an index to a system catalog in Postgres and will I be nuts if I (eventually) do that on a production system?

Replication error

Posted: 28 Jul 2013 09:03 AM PDT

We have a slave server that has stopped replication due to the following error:

Slave SQL: Query caused different errors on master and slave.  

What could be the cause of this error? And what would be a way to fix it?

Version of both master and slave is MySQL 5.5.30

130726 23:55:45 [Note] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: Shutdown complete
130726 23:58:39 [Note] Plugin 'FEDERATED' is disabled.
130726 23:58:39 [Warning] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: ignoring option '--innodb-file-per-table' due to invalid value 'ON'
130726 23:58:39 [Note] Plugin 'InnoDB' is disabled.
130726 23:58:39 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130726 23:58:39 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
130726 23:58:39 [Note] Server socket created on IP: '0.0.0.0'.
130726 23:58:39 [Note] Slave SQL thread initialized, starting replication
    in log 'mysql-bin.000234' at position 1065421256,
    relay log '.\slave-relay-bin.000917' position: 1065421402
130726 23:58:39 [Note] Slave I/O thread: connected to master 'replication@191.5.3.4:3306',
    replication started in log 'mysql-bin.000235' at position 166680598
130726 23:58:39 [Note] Event Scheduler: Loaded 0 events
130726 23:58:39 [Note] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: ready for connections.
Version: '5.5.30-log'  socket: ''  port: 3306  MySQL Community Server (GPL)
130726 23:59:04 [ERROR] Slave SQL: Query caused different errors on master and slave.
    Error on master: message (format)='Incorrect key file for table '%-.200s';
    try to repair it' error code=1034;
    Error on slave: actual message='no error', error code=0.
    Default database: 'shared'.
    Query: 'CREATE TEMPORARY TABLE tmp_grades (
                vehsysid INT(11),
                grade INT(1),
                dt TIMESTAMP,
                INDEX(vehsysid),
                INDEX(grade),
                INDEX(dt)
            ) SELECT vehsysid, Grade, MAX(dt) AS dt
                FROM shared.tbl_valuations
                GROUP BY vehsysid, grade', Error_code: 0
130726 23:59:04 [ERROR] Error running query, slave SQL thread aborted. Fix the problem,
    and restart the slave SQL thread with "SLAVE START".
    We stopped at log 'mysql-bin.000234' position 1065421256

What I also can't figure out is how a temporary table could lead to such an error (on the master):
'Incorrect key file for table '%-.200s'; try to repair it' error code=1034

The last lines from master error log:

130725 23:15:57 [Warning] Warning: Enabling keys got errno 120 on shared.tmp_grades, retrying  130726 23:15:58 [Warning] Warning: Enabling keys got errno 137 on shared.tmp_grades, retrying  

Additional info:

  • both Master and Slave run on Windows (I don't know if that's relevant.)
  • the disks at both have plenty of space.
  • replication format is MIXED
  • innodb is skipped in all instances, master and slaves. MyISAM is the default.

What's a good way to model user authorization to hierarchical data?

Posted: 28 Jul 2013 07:44 PM PDT

I'm wanting to implement something that's basically an ACL in SQL Server (2008, we'll be moving to 2012). We're working with farm production information. We will have users who need to be authorized to view production records, but only at certain vertical levels in a hierarchy.

Our data hierarchy looks like this:

System
  - Farm
    - Group
      - Animal

The idea is that some users will have access at the System level, and can see records for all Farms, Groups, and Animals within that System. Likewise, some users will have permission starting at the Farm level, and need access only linked to that Farm and all Groups (and animals) within it.

Each table contains a primary key column, and a foreign key column linking it to the parent record (along with whatever other attributes each entity requires).

What I've implemented in the past is a two-table system for linking users to the items they're allowed to see. Implemented here, it would look like this:

Table:   Authorizations          Table:   FullAuthorizations
Columns: Id (PK)                 Columns: Id (PK)
         UserId                           UserId
         ObjectId                         SystemId
         ObjectType                       FarmId
                                          GroupId
                                          AnimalId

The application inserts a record into Authorizations, with the user to authorize, the record id (System id, Farm id, etc), and the type of record (System, Farm, etc). The FullAuthorizations table is used to denormalize the farm hierarchy for easier/faster filtering of data. A trigger is used on the Authorizations table (and each of the farm, etc, tables) to update FullAuthorizations. I considered using a View here, in a previous project with additional levels of entities, and the performance was quite poor once we began getting several hundred thousand records.

The queries would look something like:

SELECT *
FROM dbo.Animals a
WHERE EXISTS (
    SELECT 1
    FROM dbo.FullAuthorizations fa
    WHERE fa.UserId = 1 AND fa.AnimalId = a.Id
)

In the other project where we're doing this, the solution is performant, but feels like a bit of a hack, and I especially don't like that we can't maintain referential integrity on Authorizations with the associated objects. I'd appreciate feedback on some other possible solutions. I've been looking at things like Nested Sets, but not sure something like that fits this particular problem.
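One alternative to the trigger-maintained FullAuthorizations table is resolving the hierarchy at query time by joining up the parent chain, so only the Authorizations table has to be maintained. A sketch against the entities described above (table and column names are assumed from the description, not from the actual schema):

```sql
-- Animals visible to user 1: direct animal grants, plus grants at
-- Group, Farm, or System level anywhere above the animal.
SELECT an.*
FROM dbo.Animals AS an
JOIN dbo.Groups  AS g ON g.Id = an.GroupId
JOIN dbo.Farms   AS f ON f.Id = g.FarmId
WHERE EXISTS (
    SELECT 1
    FROM dbo.Authorizations AS auth
    WHERE auth.UserId = 1
      AND ( (auth.ObjectType = 'Animal' AND auth.ObjectId = an.Id)
         OR (auth.ObjectType = 'Group'  AND auth.ObjectId = an.GroupId)
         OR (auth.ObjectType = 'Farm'   AND auth.ObjectId = g.FarmId)
         OR (auth.ObjectType = 'System' AND auth.ObjectId = f.SystemId) )
);
```

This trades the insert-time work (and the trigger) for query-time joins, so it is worth benchmarking against the denormalized table at realistic row counts.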

Need advice regarding RMAN's working

Posted: 28 Jul 2013 05:43 AM PDT

I have configured an RMAN script which first takes a backup of the database and then deletes the obsolete backup sets according to the current retention policy. Now, is the following scenario possible?

Scenario: the backup process starts and fails abruptly for some reason, creating an incomplete backup set. Then the deletion part of the script runs and deletes the latest full, working backup (not the one that failed, but the one the script created on the previous run), given that the retention policy is REDUNDANCY 1. Hence I lose the only available working backup of my database.

Can it happen?
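One way to reduce this exposure is to order the script so that obsolete backups are only deleted after the new backup succeeds: if a command in an RMAN RUN block fails, the block aborts and the later commands never execute. A sketch (raising the policy to REDUNDANCY 2 would additionally always keep one prior backup as a safety margin):

```sql
RUN {
  -- If this backup fails, the RUN block aborts here and nothing is deleted.
  BACKUP CHECK LOGICAL DATABASE PLUS ARCHIVELOG;
  CROSSCHECK BACKUP;
  DELETE NOPROMPT OBSOLETE;
}
```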

Oracle shared memory exception ORA-04031

Posted: 28 Jul 2013 01:43 PM PDT

I'm trying to establish an Oracle database connection. It throws the following exception:

ORA-04031: unable to allocate 4048 bytes of shared memory
    ("shared pool","unknown object","sga heap(1,0)","kglsim heap")

I have tried connecting to the database through JDBC as well as SQL Developer; it throws the exception in both cases.

How can I fix this?
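ORA-04031 means the shared pool could not satisfy an allocation. A temporary workaround is flushing the pool, and a longer-term fix is giving it more memory; sketches (the sizes are illustrative, and the second change takes effect at the next restart):

```sql
-- Temporary relief (as SYSDBA): frees unpinned objects from the shared pool.
ALTER SYSTEM FLUSH SHARED_POOL;

-- Longer-term: enlarge the shared pool, or the whole SGA if using
-- automatic shared memory management.
ALTER SYSTEM SET shared_pool_size = 512M SCOPE = SPFILE;
-- or: ALTER SYSTEM SET sga_target = 2G SCOPE = SPFILE;
```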

Memcached plugin on MariaDB?

Posted: 28 Jul 2013 02:43 PM PDT

I'd like to try new NoSQL feature in MySQL 5.6 but I am using MariaDB 10.0.2 on Debian 6 and don't fancy coming back to MySQL.

I'm wondering whether the memcached plugin has been added to MariaDB? And if not whether one can still use it as an addon?

And if not, can I use the existing Cassandra plugin of MariaDB to the same effect?

How to embed a sub-procedure call in a SELECT statement in an Oracle 11g PL/SQL stored procedure

Posted: 28 Jul 2013 03:43 PM PDT

I need to figure out a way to embed an Oracle PL/SQL sub-procedure call in a SELECT statement, within another procedure in the same package.

I am using SQLDeveloper 3.0 with an Oracle 11g database.

I have already developed a stored procedure 'acctg_detail_proc()' that generates a detailed list of accounting transactions within a specified period. What I am trying to do is create a summary report procedure 'acctg_summary_proc()' using the detailed data returned by acctg_detail_proc().

Since I am new to PL/SQL, all I have learned so far is how to retrieve the detailed data via a ref cursor, then LOOP through it, FETCHing each individual detail row. I want to figure out how acctg_summary_proc() can substitute that call to acctg_detail_proc() for a table name in a SELECT statement with a GROUP-BY clause. Here is the source code for an anonymous block where I tried to test it:

SET SERVEROUTPUT ON;
DECLARE
    start_date VARCHAR2(50) := '04/01/2012';
    end_date VARCHAR2(50) := '04/30/2012';
    c_acctg_refcur SYS_REFCURSOR;
BEGIN
  acctg_rpt_pkg.acctg_detail_proc(start_date, end_date, c_acctg_refcur);
  SELECT
    date_posted,
    debit_acct,
    credit_acct,
    SUM(dollar_amt)
  FROM c_acctg_refcur
  GROUP BY
    date_posted,
    debit_acct,
    credit_acct;
  CLOSE c_acctg_refcur;
END;

When I try to execute this code, I get the following error:

PL/SQL: ORA-00942: table or view does not exist

I realize I could use a nested SELECT statement instead of a table name, but I want to avoid duplication of source code. Is there any way to 'alias' a ref cursor so I can reference its data in a SELECT statement?

Here is some further background info: The called sub-procedure has ~600 lines of code, and selects 40 columns of data from a de-normalized VIEW. The corporate DBAs will not let me create any VIEWs that contain WHERE clauses, so that is not an option for me.

Thanks in advance, Ken L.
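A ref cursor cannot be aliased as a table, but one idiomatic alternative is exposing the detail logic as a pipelined table function, which SQL can then SELECT from and GROUP BY. A sketch with illustrative type, function, and view names (the real detail query has 40 columns; only the four summarized ones are shown):

```sql
CREATE TYPE acctg_detail_t AS OBJECT (
  date_posted DATE,
  debit_acct  VARCHAR2(30),
  credit_acct VARCHAR2(30),
  dollar_amt  NUMBER
);
/
CREATE TYPE acctg_detail_tab IS TABLE OF acctg_detail_t;
/
CREATE OR REPLACE FUNCTION acctg_detail_fn (
  p_start_date IN VARCHAR2,
  p_end_date   IN VARCHAR2
) RETURN acctg_detail_tab PIPELINED IS
BEGIN
  -- The existing detail query from acctg_detail_proc goes here.
  FOR r IN (SELECT date_posted, debit_acct, credit_acct, dollar_amt
            FROM   acctg_detail_view
            WHERE  date_posted BETWEEN TO_DATE(p_start_date, 'MM/DD/YYYY')
                                   AND TO_DATE(p_end_date,   'MM/DD/YYYY'))
  LOOP
    PIPE ROW (acctg_detail_t(r.date_posted, r.debit_acct, r.credit_acct, r.dollar_amt));
  END LOOP;
  RETURN;
END;
/
-- The summary procedure can then aggregate it like a table:
SELECT date_posted, debit_acct, credit_acct, SUM(dollar_amt)
FROM   TABLE(acctg_detail_fn('04/01/2012', '04/30/2012'))
GROUP  BY date_posted, debit_acct, credit_acct;
```

The detail query lives in one place (the function), and both the detail and summary procedures can consume it, avoiding the duplicated source code.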

Oracle database link via ODBC to SQL Server does not recover automatically

Posted: 28 Jul 2013 08:43 AM PDT

I have a test environment with an Oracle 10XE database which is linked to an SQL-Server 2005 via an ODBC data source. For reference here I named the connection MSSQL_LINK. The production environment will have an Oracle 11g and an SQL-Server 2008. The connection once set up works fine.

As a test of error conditions in production we unplugged the network cable. This broke the connection. A subsequent executed statement (e.g. select * from test_table@MSSQL_LINK;) failed with an error ORA-28500, which is a wrapper for the original ODBC error:

[Generic Connectivity Using ODBC][S1000] [9013]General error in nvITrans_BeginT - rc = -1.
Please refer to the log file for details.
ORA-02063: preceding 2 lines from MSSQL_LINK

This was somewhat expected. Unfortunately plugging the cable back into the machine does not lead to an automatic reconnect of the link. The problem persists until I drop the link and create it anew with the commands

drop database link MSSQL_LINK;
create database link MSSQL_LINK connect to "user" identified by "passwd" using 'MSSQL_LNK';

Also the same happens if I just disconnect the cable for a short time and reconnect it before I execute any statements over the link. The interim breakup is still detected by the ODBC driver but not fixed on its own.

As mentioned by Ste, creating a link only tells Oracle about the connection parameters such as host, user name, and password; it does not create a connection by itself. The link can be created without the network cable connected to the Oracle machine. If I create the link without the cable connected, then plug the cable in and execute a simple select on the linked server, the same Oracle error occurs.

Obviously in production mode this is an unacceptable situation which needs to be fixed, preferably by an automatic procedure.

Three questions do I have about this:

  1. Is there a way to test a database link other than, e.g., a select on a test table which resides on the linked server?
  2. Is dropping and re-creating the database link on my Oracle machine the best option I have when this problem occurs?
  3. What is the best automatic procedure for recovering from this situation?
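A lighter-weight reset than drop-and-recreate may be closing the link in the session, so the next statement over the link opens a fresh connection. A sketch (worth testing against this specific ODBC gateway; the link can only be closed when the session has no open transaction, so COMMIT first):

```sql
COMMIT;
-- Release the remote connection held by this session.
ALTER SESSION CLOSE DATABASE LINK MSSQL_LINK;

-- A cheap liveness probe afterwards, reusing the test table from above.
SELECT * FROM test_table@MSSQL_LINK;
```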

Repairing Broken Binary Replication in PostgreSQL 9.0

Posted: 28 Jul 2013 12:43 PM PDT

I have a binary replication setup that was stopped for so long that the WALs it needed were removed, and as a result the standby fell too far behind to catch up. I'm trying to reestablish the replication, and the best way I've found so far is following the steps on the PostgreSQL wiki:

  • Issue select pg_start_backup('clone',true); on master
  • rsync everything except for pg_xlog from master to slave
  • Issue select pg_stop_backup(); on master
  • rsync pg_xlog

But the database is too big (300 GB), my connection is not really amazing (like 800 kB/s) and the files in base keep changing. So I was wondering if there's a more optimal way to do this.
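One way to cut the transfer is to pre-seed: run a first rsync pass before calling pg_start_backup, then repeat the rsync inside the backup window so the second pass only ships the files changed in the interim. A sketch of the inner pass (paths and host are placeholders):

```shell
rsync -az --inplace --partial \
      --exclude 'pg_xlog/*' \
      /var/lib/postgresql/9.0/main/ slave:/var/lib/postgresql/9.0/main/
```

-z compresses over the slow link, --inplace updates changed blocks of existing files instead of rewriting them, and --partial keeps partially transferred files across interruptions.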

Failed copy job deletes all users

Posted: 28 Jul 2013 05:43 PM PDT

Since the progression of this was apparently somewhat difficult to follow the first time around:

I attempted a completely boring, been-done-a-thousand-times-before copy of a database using the copy database wizard with the detach/reattach method.

The copy failed. The log indicates that it was unable to execute a CREATE VIEW action for a particular view, because the datasource for the view did not exist. This is interesting in its own right, as the source most certainly exists, and the view(s) in question are fully functional in the source database. I'm not really clear, just yet, on how significant this is, as I've yet to figure out precisely why this generated an error.

This resulted in the deletion of all non-system user associations from the source database, leaving me with users dbo, information_schema, sys, and guest. Non-system roles were also deleted. Schemas were unaffected.

I have since restored the damaged database from backup. Academically, however, I would like to know the following:

  1. Why would a failed copy operation strip the user associations from the source database?
  2. Is there any sort of maintenance/rebuild/repair operation that could be performed on the source database to repair it?
  3. The loss of the users was immediately obvious, but given the rather mysterious nature of a failed copy job apparently damaging the source database, how concerned should I be about less obvious effects? In theory, I would expect restoring from backup would alleviate this concern, but do I have any cause to worry about, e.g., the master database?

This is entirely repeatable. I've made a handful of copies (manually) for the sake of experimenting with this particular issue, and in each case, the failed copy job obliterates the users and roles from the source database.

Removing the views that generated errors allows the copy to complete, and, as one would expect, produces a copy with identical data, users, etc., in addition to leaving the source database unaltered.

If it's important, I've tried rebuilding the indexes of the system databases, as well as the damaged database, to no appreciable effect.

The error generated:

1:00:25 PM,5/28/2013 1:00:25 PM,0,0x,ERROR : errorCode=-1073548784 description=Executing the query
"CREATE VIEW [Sourcing].[PermittedArrProducts]
AS
SELECT dbo.tblArrProducts.ArrProductID, dbo.tblArrProducts.ArrangementID,
       dbo.tblArrProducts.ContainerTypeID, dbo.tblArrProducts.Quantity
FROM   Sourcing.PermittedArrangements INNER JOIN
       dbo.tblArrProducts ON Sourcing.PermittedArrangements.ArrangementID = dbo.tblArrProducts.ArrangementID"
failed with the following error: "Invalid object name 'Sourcing.PermittedArrangements'.".
Possible failure reasons: Problems with the query, "ResultSet" property not set correctly,
parameters not set correctly, or connection not established correctly.

Validate each parent intermediary is also a parent to itself

Posted: 28 Jul 2013 07:43 AM PDT

I want to validate that each parent intermediary is also a parent to itself; the query should return any invalid parents, if the table has any.

First, get all parent records by joining Intermediary to itself on ParentIntermediaryID = IntermediaryID (inner join). Then, from all those parents, return the ones that don't have any row with ParentIntermediaryID = IntermediaryID.

I hope the requirement is clear. Can you give me a query for this?

Create the test table

CREATE TABLE Intermediary
(
    IntermediaryPK INT,
    IntermediaryID NVARCHAR(20),
    IntermediaryName NVARCHAR(200),
    IntermediaryTypeID NVARCHAR(1),
    ParentIntermediaryID NVARCHAR(20),
    IntermediaryTypePK TINYINT,
    ParentIntermediaryPK INT,
    GrandParentIntermediaryPK INT,
    GrandParentIntermediaryID NVARCHAR(20)
)

Insert the test data into the test table

INSERT INTO Intermediary
   (IntermediaryPK, IntermediaryID, IntermediaryName, IntermediaryTypeID, ParentIntermediaryID,
    IntermediaryTypePK, ParentIntermediaryPK, GrandParentIntermediaryPK, GrandParentIntermediaryID)
SELECT 552, '200244584261', 'A', '1', '201841943403', 1, 6459, 6459, '201841943403' UNION ALL
SELECT 553, '200983879092', 'B', '1', '200707895681', 1, 6462, 6459, '200707895681' UNION ALL
SELECT 554, '200925413387', 'C', '1', '200707895681', 1, 6462, 6462, '200707895681' UNION ALL
SELECT 555, '200472620781', 'D', '1', '200707895681', 1, 6462, 6462, '200707895681' UNION ALL
SELECT 556, '201902784325', 'E', '1', '200707895681', 1, 6462, 6462, '200707895681' UNION ALL
SELECT 557, '201874832909', 'F', '1', '200707895681', 1, 566 , 6462, '200707895681' UNION ALL
SELECT 558, '201264024229', 'G', '1', '200707895681', 1, 566 , 6462, '200707895681' UNION ALL
SELECT 559, '201725870455', 'H', '1', '201062751762', 1, 566 , 6462, '200707895681'

I have tried with this query:

SELECT  *
FROM    Intermediary AS I1
WHERE   ParentIntermediaryPK IS NOT NULL
AND     NOT EXISTS (
            SELECT  *
            FROM    Intermediary AS I2
            WHERE   I2.IntermediaryPK = I2.ParentIntermediaryPK
            AND     I1.ParentIntermediaryPK = I2.IntermediaryPK )

Azure SQL Administrator Can't Access Master Database

Posted: 28 Jul 2013 09:43 AM PDT

I created an SQL server and a database. I can log in to the database management page with my administrator account and manage the database I created, but when I try to look at the master database, it says:

Failed to create context. Details: The current user does not have VIEW DATABASE STATE permissions on this database.

What am I doing wrong? I couldn't find anything else like this on the internet.

I log in with the account that the sql server page on the azure managing portal says is the "Administrator Login".

SQL Server 2005 Replication

Posted: 28 Jul 2013 08:44 PM PDT

I am in the process of creating Replication between 2 Remote Servers, server 1 is the Distributor and Publisher and server 2 is the Subscription.

server 1: Windows 2003 Server, 192.168.10.1, connected by VPN, SQL Server 2005, domain1.local

server 2: Windows 2003 Server, 192.168.10.6, connected by VPN, SQL Server 2005, domain2.local

When I setup up Replication everything looked fine until I looked at the sync status and it said:

The Agent could not be started.
An exception occurred while executing a Transact-SQL statement or batch.
SQLServerAgent error: request to run job server1-username (blah blah blah)
from user sa refused because the job is already running from a request by user sa.
Changed database context to ... (error 22022)

I have cleared jobs in the server agent as well as restarted the service.

Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL server via SSMS, but just not set up replication.

Yes I can manage each SQL Server in SSMS and we are using merge with snapshot.

Constructing an SQL query with time intervals

Posted: 28 Jul 2013 06:43 AM PDT

I have a simple but large table in an SQL database with four fields: id (int, PK), time (unix timestamp, not null), value (double, not null), and ID_fk (integer foreign key).

Data is recorded every minute for subsets of sensors, but at a seemingly random second within the minute; each sensor has its own foreign key value.

My question is this: I need to write a query that tells me when, over all two-minute intervals in the dataset, a series of sensors have a value greater than 0.9.

I was thinking of creating a temporary table and doing it that way, simplifying the time by storing it only at minute resolution.

Any advice would be greatly received, Thanks, James
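A common pattern that avoids the temporary table is grouping by integer division of the unix timestamp. A sketch in MySQL-style SQL (the table name sensor_readings is a placeholder; columns and the 0.9 threshold are from the question):

```sql
-- One row per sensor per two-minute bucket in which that sensor exceeded 0.9.
SELECT ID_fk,
       FLOOR(`time` / 120) * 120 AS bucket_start,   -- unix seconds, 120-second buckets
       MAX(`value`)              AS max_value
FROM   sensor_readings
GROUP  BY ID_fk, FLOOR(`time` / 120)
HAVING MAX(`value`) > 0.9;
```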

Multiple parents and multiple children in product categories

Posted: 28 Jul 2013 11:43 AM PDT

I am making an ecommerce site. On this site I want to categorise the items into three different layers:

primary category      sub category       sub category
>> electronic         >> smart phone     samsung
   cameras               tablets         nokia
                         laptop          apple
                         headphone

In the above table, I want to display the sub category after the customer selects the primary one. At the same time, 'samsung' also comes under 'cameras'. So a parent can have any number of children, and one child can have many parents. In the future we may change the primary and secondary categories.

What is the best solution for this? Which model will adopt our category: tree or nested?
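Since a child can have many parents, this is a many-to-many relationship rather than a strict tree, which points to a junction table instead of a single parent_id column. A sketch (table and column names are illustrative):

```sql
CREATE TABLE category (
    category_id INT PRIMARY KEY AUTO_INCREMENT,
    name        VARCHAR(100) NOT NULL
);

-- Each row links a child category to one of its parents.
CREATE TABLE category_parent (
    child_id  INT NOT NULL,
    parent_id INT NOT NULL,
    PRIMARY KEY (child_id, parent_id),
    FOREIGN KEY (child_id)  REFERENCES category(category_id),
    FOREIGN KEY (parent_id) REFERENCES category(category_id)
);

-- Sub categories to show after the customer picks a primary category:
SELECT c.name
FROM category c
JOIN category_parent cp ON cp.child_id = c.category_id
WHERE cp.parent_id = 1;   -- id of the selected primary category
```

Re-parenting a category later is then a row update in category_parent, with no schema change.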

Mysqldump tables excluding some fields

Posted: 28 Jul 2013 06:43 PM PDT

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO and TATA. I just want to mysqldump some fields of table USER, so excluding some fields like mail, ip_login, etc.

How can I do this?
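mysqldump has no per-column filter, so a common workaround is to dump a staging table containing only the wanted columns; a sketch (the kept column names are placeholders):

```shell
# Build a staging table with only the columns to keep, dump it, then drop it.
mysql tests -e "CREATE TABLE USER_export AS SELECT user_id, name FROM USER;"
mysqldump tests USER_export > user_partial.sql
mysql tests -e "DROP TABLE USER_export;"
```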

Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambigious sysindexes error

Posted: 28 Jul 2013 10:43 AM PDT

We are using a Sybase SQL Anywhere 12 database.

In the database there are indexes which are unique but shouldn't be.

Therefore I'm looking for a quick way to list all tables with unique primary keys.

I tried

SELECT z.name FROM sysobjects z JOIN sysindexes i ON (z.id = i.id) WHERE type = 'U'

The result was an error message (in German): Statement could not be executed. Table name 'sysindexes' is ambiguous. SQLCODE=-852, ODBC 3 State="42S13", Line 1, Column 1

I found on internet the query:

select 'Table name' = object_name(id),
       'column_name' = index_col(object_name(id), indid, 1),
       'index_description' = convert(varchar(210),
         case when (status & 16)<>0 then 'clustered' else 'nonclustered' end
       + case when (status & 1)<>0 then ', '+'ignore duplicate keys' else '' end
       + case when (status & 2)<>0 then ', '+'unique' else '' end
       + case when (status & 4)<>0 then ', '+'ignore duplicate rows' else '' end
       + case when (status & 64)<>0 then ', '+'statistics'
              else case when (status & 32)<>0 then ', '+'hypothetical' else '' end end
       + case when (status & 2048)<>0 then ', '+'primary key' else '' end
       + case when (status & 4096)<>0 then ', '+'unique key' else '' end
       + case when (status & 8388608)<>0 then ', '+'auto create' else '' end
       + case when (status & 16777216)<>0 then ', '+'stats no recompute' else '' end),
       'index_name' = name
from sysindexes
where (status & 64) = 0
order by id

It looked like what I wanted, but it failed with the same ambiguous-sysindexes error.

What does "table name 'sysindexes' is ambiguous" mean in this context? Can it cause errors in the future?

As a workaround I used Sybase Central (which, by the way, always opens on the first monitor rather than the one where it was closed: ugly behaviour), and found that an "Indices" item showed what I was searching for.

But I would still like to know what a programmatic solution looks like.
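Not part of the original post, but a hedged sketch of one way to resolve the ambiguity: SQL Anywhere exposes more than one sysindexes view under different owners, so qualifying the owner (and aliasing every column reference) may disambiguate. Whether SYS is the right owner here is an assumption; check which sysindexes views exist in your database:

```sql
-- Hedged sketch: qualify the catalog owner so 'sysindexes' is no longer
-- ambiguous (the SYS owner is an assumption, not from the original post):
SELECT z.name
FROM SYS.sysobjects z
JOIN SYS.sysindexes i ON z.id = i.id
WHERE z.type = 'U';
```

If both a SYS and a dbo version of the view exist, the unqualified name matches more than one object, which is exactly what SQLCODE -852 reports.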

Why do we need to rebuild and reorganize Indexes in SQL Server

Posted: 28 Jul 2013 07:41 PM PDT

After searching the internet I couldn't find answers to:

  1. Why do we need to rebuild and reorganize indexes in SQL Server?

  2. What happens internally when we rebuild and reorganize?

An article on a site says :

An index should be rebuilt when its fragmentation is greater than 40%, and reorganized when fragmentation is between 10% and 40%. Rebuilding an index uses more CPU and locks database resources. The SQL Server Developer and Enterprise editions have an ONLINE option that can be turned on when an index is rebuilt; ONLINE keeps the index available during the rebuild.

I couldn't understand this: it tells me WHEN to do this, but I would like to know WHY we need to rebuild and reorganize indexes.
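Not from the original post, but a hedged sketch of the commands the quoted article is describing, for concreteness; the index and table names (IX_Example, dbo.ExampleTable) are hypothetical:

```sql
-- Measure fragmentation in the current database (SQL Server 2005+):
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Reorganize: a lighter, always-online operation that defragments
-- the leaf level in place:
ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;

-- Rebuild: drops and recreates the index (ONLINE = ON needs
-- an edition that supports online operations):
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (ONLINE = ON);
```

Broadly, both operations exist because page splits scatter an index's pages out of logical order and leave them partly empty, which hurts scans; reorganize and rebuild restore physical order and fill factor by different means.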

Mongo connection failing with 'Transport endpoint is not connected'

Posted: 28 Jul 2013 04:43 PM PDT

I have a 2 server installation.

A web server with Apache and a DB server with MongoDB.

I am load testing it, and at ~300 RPS I am getting this error:

PHP Fatal error: Uncaught exception 'MongoConnectionException' with message 'Transport endpoint is not connected'.

The only thing I notice is that right before the failure there are a lot of open connections on Mongo:

insert  query  update  delete  getmore  command  flushes  mapped  vsize  res  faults  locked %  idx miss %  qr|qw  ar|aw  netIn  netOut  conn      time
     0    659       0       0        0        1        0    208m  1.28g  40m       0         0           0    0|0    0|0    62k    217k   486  03:57:20

Almost 500 connections here... but never more than that!

Mongo is 2.0.3. PHP is 5.3.x (the latest in the Debian packages).

Help!

Copy Database option not found in SQL Server Management Studio 2008 R2

Posted: 28 Jul 2013 08:58 AM PDT

I am unable to see the Copy Database option in SQL Server Management Studio 2008 R2.

Can anyone tell me what's going on, please? (Screenshot: the Copy Database option does not appear in the menu.)

First time running a replication on SQL Server 2008. "The process could not connect to subscriber 'SERVER2'"

Posted: 28 Jul 2013 05:53 PM PDT

I've set up replication between two remote servers, using peer-to-peer transactional replication. The subscription is all set up, but the actual replication doesn't take place.

When viewing the synchronization status of the subscription, I get an error saying "The process could not connect to Subscriber 'SERVER2\DBINSTANCE'."

Clicking Start, this is the error message I get:


TITLE: View Synchronization Status

The agent could not be started.

For help, click: go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1&EvtSrc=Microsoft.SqlServer.Management.UI.ReplUtilitiesErrorSR&EvtID=CantStartAgent&LinkId=20476


ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)


SQLServerAgent Error: Request to run job SERVER\DBINSTANCE-DBNAME-DBPUBLICATION-SERVER2\DBNAME-3 (from User sa) refused because the job is already running from a request by Start Sequence 0. Changed database context to 'DBNAME'. (Microsoft SQL Server, Error: 22022)

For help, click: go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=22022&LinkId=20476


Viewing the agent history to get more detail on the error, I see the following:


Error messages:

The process could not execute 'sp_replcmds' on 'SERVER\DBNAME'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help: help/MSSQL_REPL20011

Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission. (Source: MSSQLServer, Error number: 15517) Get help: help/15517

The process could not execute 'sp_replcmds' on 'SERVER\DBNAME'. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037) Get help: help/MSSQL_REPL22037


I have looked up each and every one of the errors online and went through all kinds of explanations and changes, and nothing worked. Many thanks to anyone who can shed some light on the issue.
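Not part of the original post, but a hedged sketch related to the quoted error 15517 ("principal dbo does not exist"): that error commonly points to an invalid database owner, for example after a restore onto another server. DBNAME is taken from the question; using sa as the new owner is an assumption:

```sql
-- Check whether the database owner maps to a valid login
-- (NULL here would suggest an orphaned owner SID):
SELECT name, SUSER_SNAME(owner_sid) AS owner_login
FROM sys.databases
WHERE name = 'DBNAME';

-- Reassign ownership to a known-valid login ('sa' is an assumption):
ALTER AUTHORIZATION ON DATABASE::DBNAME TO sa;
```

This is only one of several possible causes; whether it applies here depends on how the publication database was created.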

How to get the current date without the time part

Posted: 28 Jul 2013 06:40 PM PDT

In SQL Server 2005, how do I get the current date without the time part? I have been using GETDATE(), but I would like the time to be 00:00:00.0.
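Not from the original post, but a sketch of the idiom commonly used on SQL Server 2005, where the DATE type (introduced in 2008) is not available:

```sql
-- DATEDIFF(day, 0, GETDATE()) counts whole days since the 1900-01-01
-- epoch; adding that count back to the epoch yields today at midnight:
SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0) AS today_midnight;
```

On SQL Server 2008 and later, CAST(GETDATE() AS DATE) is the simpler alternative.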
