Wednesday, May 29, 2013

Software for generating ODL from UML?

Posted: 29 May 2013 09:06 PM PDT

Is there any software that allows you to generate ODL from UML, much like the tools used to generate SQL from an ERD?

(using the Oracle DBMS)

Oracle - Mysterious indexes

Posted: 29 May 2013 08:24 PM PDT

I have a table in Oracle 10g Release 2 (10.2) that I'm trying to import roughly 11 million rows into. The table contains an SDO_GEOMETRY column. It previously had NOT NULL constraints on all 3 columns, a primary key constraint on a NUMBER(38,0) column, and a spatial index. I have removed all of these to speed up the import. I removed the primary key constraint using ALTER TABLE ... DROP CONSTRAINT with the additional DROP INDEX clause.

However, selecting on USER_INDEXES reveals there are currently two indexes on the table with very strange names: SYS_IL0000471920C00009$$ and SYS_IL0000471920C00010$$. Their INDEX_TYPEs are both LOB, and they both have UNIQUE for their UNIQUENESS column. COMPRESSION is DISABLED. They are both VALID, non-TEMPORARY, and GENERATED (though I'm not sure what GENERATED means).

Where might these indexes have come from? Could dropping them cause problems?
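
For reference, names of the form SYS_IL...$$ with INDEX_TYPE = LOB are characteristic of the system-generated indexes Oracle maintains for LOB segments. A minimal query to tie them back to their owning columns, assuming the table is named MY_TABLE:

SELECT table_name, column_name, segment_name, index_name
FROM   user_lobs
WHERE  table_name = 'MY_TABLE';

If the two indexes appear in that output, they belong to LOB storage rather than to anything created explicitly.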

Change built-in default privileges in PostgreSQL?

Posted: 29 May 2013 07:25 PM PDT

Introduction.

When I create a database,

postgres=# CREATE DATABASE test2 OWNER test2;  

it is created with an empty privileges column:

   Name    |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges
-----------+----------+----------+-------------+-------------+-------------------
 test2     | test2    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |

Now, it is important to note that this is not the same situation as having no privileges granted whatsoever. In the latter case, the column contains just {} (which, by the way, also displays as empty in postgres=# \l, AFAIR).

  • An empty field means that PostgreSQL uses built-in default privileges.
  • {} means that there are literally no privileges to the database.

Research.

In our case, these built-in defaults happen to be:

=Tc/test2       +
test2=CTc/test2

How do I know that? I issued:

postgres=# GRANT CONNECT ON DATABASE test2 TO test1;  

... and that resulted in:

   Name    |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges
-----------+----------+----------+-------------+-------------+-------------------
 test2     | test2    | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/test2        +
           |          |          |             |             | test2=CTc/test2  +
           |          |          |             |             | test1=c/test2

... that is, the implicit defaults plus test1=c/test2 granted explicitly.

Problem & question.

While I have no problem with =CTc granted to the owner in these defaults, I'd rather not have =Tc granted to PUBLIC.

Sure, I can just REVOKE ALL [...] FROM PUBLIC after creation, but is there a way to automate it?
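
For reference, the manual workaround spelled out, assuming the same database as in the example above:

CREATE DATABASE test2 OWNER test2;
-- Strips the implicit =Tc/test2 entry granted to PUBLIC; the owner's
-- test2=CTc/test2 entry comes from ownership and is unaffected.
REVOKE ALL ON DATABASE test2 FROM PUBLIC;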

Afterthought.

Really, why aren't the defaults the way I want them in the first place? Is it really a popular practice to run one PostgreSQL server per app, rather than just one database per app on a shared server, even for a small one?

For example, the default installation of phpPgAdmin "crashes" when it stumbles upon a database it has no CONNECT privilege on; I had to modify the source a bit. And despite numerous bug reports, it has behaved this way for at least six months (I can't remember the dates of the reports, but one of them stated that six months had passed).

database structure - complicated requirements

Posted: 29 May 2013 06:28 PM PDT

I have a project to build a website, but it's complicated and I'm having trouble figuring out the best way to design the database to handle these particular requirements.

The site is for local builders and farmers (and anyone else who uses heavy equipment) to rent their machinery amongst themselves. Users should be able to sign up and list an item of equipment, which is then searchable and bookable by other users of the site.

So a builder might sign-up and upload a listing for his concrete mixer. Then another user can search for concrete mixers to hire between 2 dates and place a booking for the mixer through the site.

So far so good.

The problem is that the builder should be able to set a default per-day rate, but also to say that throughout the month of July, or on the last two weekends in August, the mixer's daily rate is different. So basically every day could end up having a different rate, and I'm having trouble figuring out the most efficient way to structure the database, and how to calculate the total cost of renting for several days if there is potentially a different rate every day.

At the moment I'm imagining having to loop through a 365-element array, but that can't be right. I'm a bit new to this, so I'm probably just confused. One possible layout is sketched below.
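
A minimal sketch of one common approach, with all table and column names assumed: store a default rate on the equipment row and keep date-range overrides in a child table, so only the exceptions need rows.

CREATE TABLE equipment (
    id                 INT PRIMARY KEY,
    owner_id           INT NOT NULL,
    name               VARCHAR(100) NOT NULL,
    default_daily_rate DECIMAL(10,2) NOT NULL
);

-- An override wins over the default for any day inside its range.
CREATE TABLE rate_override (
    id           INT PRIMARY KEY,
    equipment_id INT NOT NULL,
    start_date   DATE NOT NULL,
    end_date     DATE NOT NULL,
    daily_rate   DECIMAL(10,2) NOT NULL,
    FOREIGN KEY (equipment_id) REFERENCES equipment(id)
);

Pricing a multi-day booking then means taking, for each day in the requested range, the override rate if one matches and the default otherwise; with a calendar (numbers) table that becomes a single join and a SUM rather than a 365-iteration loop in application code.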

Is there any manual for the visual explain in mysql? The output plan is unreadable

Posted: 29 May 2013 02:41 PM PDT

[image: MySQL Workbench visual EXPLAIN diagram]

Just like in this example: I cannot even tell the join ordering from it. And what do "ALL" and "ref" in the table mean? Is there any manual for the visual explain?

Using something better than UNION for combining multiple queries

Posted: 29 May 2013 01:44 PM PDT

I have one big query which is based on 12 smaller queries that I have put together using UNION.

A relatively simple example of my query:

(SELECT
     'type1' AS 'type',
     fieldOne AS someField,
     fieldTwo,
     fieldThree
 FROM tableOne
      JOIN users ON
           users.id = tableOne.id
 WHERE
      fieldFour = 'someValue'
)
UNION
(SELECT
     'type2' AS 'type',
     fieldOne AS someField,
     fieldTwo,
     NULL AS fieldThree
 FROM tableTwo
      JOIN users ON
           users.id = tableTwo.id
 WHERE
      fieldFour = 'someValue'
)
UNION ...

The query gets big since each of the 12 smaller queries needs to return the same number of fields, so I set many of the fields to NULL. This results in many repeated lines throughout the queries. I must keep the data in different tables in order to maintain a good structure.

I suspect this can be written in a much more efficient way, both smaller and faster to execute, but I can't find one.
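
One hedged observation: if each branch's constant type tag guarantees that the branches can never produce identical rows, UNION ALL avoids the duplicate-elimination pass that plain UNION performs, which is often the cheapest win here. A sketch using the same assumed names as above:

SELECT 'type1' AS type, fieldOne AS someField, fieldTwo, fieldThree
FROM tableOne
JOIN users ON users.id = tableOne.id
WHERE fieldFour = 'someValue'
UNION ALL
SELECT 'type2', fieldOne, fieldTwo, NULL
FROM tableTwo
JOIN users ON users.id = tableTwo.id
WHERE fieldFour = 'someValue';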

When creating remote BLOB store is "RBSFilestreamFile" always the name of the file to be added to the FILEGROUP?

Posted: 29 May 2013 04:28 PM PDT

When creating a remote BLOB store in SQL Server (2008 R2), is "RBSFilestreamFile" always the name of the file when adding it to the FILEGROUP, as in this query? (This is the name I've seen used in every example I've found online, but I need to know for sure.)

ADD FILE (name = RBSFilestreamFile, filename = 'c:\Blobstore')
    TO FILEGROUP RBSFilestreamProvider

I'm asking because I'm working on an application for restoring SharePoint content databases and need to know if I can hardcode this string into the application.

Is it possible to have multiple RBS BLOB stores used by a single database (SharePoint 2010)?

Posted: 29 May 2013 03:07 PM PDT

I am working on backup/restore software for SharePoint 2010/13 and need to know if it is possible for a user to configure multiple BLOB stores for a single content database.

i.e., in the steps to create an RBS BLOB store, when you run:

...
ALTER DATABASE [ContentDbName]
ADD FILEGROUP RBSFILESTREAMPROVIDER CONTAINS FILESTREAM

and

ADD FILE (name = RBSFilestreamFile, filename = 'c:\Blobstore')
    TO FILEGROUP RBSFilestreamProvider

Is it possible for there to be multiple FILEGROUPs, or multiple FILEs per FILEGROUP, associated with and used by a single content database?

Execute output from query

Posted: 29 May 2013 01:30 PM PDT

I've got this query that builds my DROP LOGIN statements. As part two, I want to execute this output. Ideally I want to do this in a SQL Agent job with two steps (generate and execute). Can this be done? A single-step sketch follows the query.

SELECT 'DROP LOGIN [' + name + ']'
FROM [master].[dbo].[syslogins]
WHERE isntgroup = 0
  AND isntuser = 0
  AND sysadmin = 0
  AND name != 'sa'
  AND name NOT LIKE '##%'
ORDER BY name
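
For what it's worth, a minimal sketch of generating and executing in a single job step via dynamic SQL, using the same filter as above (inspect the generated statements before running anything like this):

DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

-- Concatenate one DROP LOGIN statement per qualifying login.
SELECT @sql = @sql + N'DROP LOGIN [' + name + N'];' + CHAR(13)
FROM [master].[dbo].[syslogins]
WHERE isntgroup = 0
  AND isntuser = 0
  AND sysadmin = 0
  AND name != 'sa'
  AND name NOT LIKE '##%';

PRINT @sql;              -- review first
EXEC sp_executesql @sql; -- then execute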

Are there major drawbacks/risk of using oracle blob to store files

Posted: 29 May 2013 01:50 PM PDT

Recently I was asked to do production support on an existing software.

In summary, this system is a web service that allows you to upload a file and provides you with an identifier so that you can ask for your file later.

My concern is that the files are stored as BLOB in an Oracle database.

For the moment the system is stable, but in the near future our business wants to upload 100,000 to 200,000 PDFs per year (10 to 50 MB each).

When it comes to the database:

  • Should I just care about the disk space?

  • Are there other aspects I should worry about?

  • Or should I really try to switch to file system storage?

SQL Server 2005 xp_sendmail error

Posted: 29 May 2013 03:25 PM PDT

I have come across a SQL Server 2005 server that uses SQL Mail to send emails. This setup works fine as long as the MAPI profile points to an Exchange 2003 server, but breaks when the profile points to an Exchange 2010 server. The error: xp_sendmail: failed with mail error 0x80004005

Would updating the MAPI client help fix this, or is it an Exchange 2010 incompatibility?

Does SQL CASE statement evaluate all conditions or exit on first TRUE condition?

Posted: 29 May 2013 01:19 PM PDT

Does the SQL CASE expression evaluate all the WHEN conditions, or does it exit once it finds a WHEN that evaluates to true? If it does go through the entire set of conditions, does that mean the last condition evaluating to true overwrites what the first one did? For example:

SELECT
  CASE
    WHEN 1+1 = 2 THEN 'YES'
    WHEN 1+1 = 3 THEN 'NO'
    WHEN 1+1 = 2 THEN 'NO'
  END

The result is "YES", even though the last WHEN condition would make it evaluate to "NO", so it seems that it exits once it finds the first TRUE condition. Can someone please confirm whether this is the case?

Error installing SQL Server 2008 - all components installed successfully except database engine

Posted: 29 May 2013 01:39 PM PDT

Below is the error I copied; please, someone help me. I have tried at least 10 times in the last 3 days. It is SQL Server 2008 R2 and I wanted to install it on Windows 7.

Microsoft SQL Server 2008 Setup

The following error has occurred:

Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.

For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22&EvtType=0xE53883A0%25400xBE03358B%25401306%254024

Efficient way to fetch records since a given criteria

Posted: 29 May 2013 07:00 PM PDT

I'm trying to implement logic where the user can ask for n records after a given id. E.g.

SELECT TOP (100) col1, col2, ... colN
FROM Table1
WHERE ID > @id
ORDER BY ID

Performance is the biggest issue here, especially when this pattern gets into nested loops for complex joins. I've looked at the new OFFSET-FETCH feature in SQL Server 2012, but it requires you to specify a row count to skip rather than a WHERE clause, and I wouldn't know the offset without counting rows beforehand.
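
For comparison, a sketch of the SQL Server 2012 OFFSET-FETCH form referred to above; @rows would have to be computed by counting first, which is exactly the problem:

SELECT col1, col2
FROM Table1
ORDER BY ID
OFFSET @rows ROWS FETCH NEXT 100 ROWS ONLY;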

Is there an alternate way to do this efficiently in SQL Server (2008 R2 and above)?

Update: complete SQL as generated by EF Code First

-- Region Parameters
DECLARE @p__linq__0 BigInt = 60375518904121
DECLARE @p__linq__1 BigInt = 60375518904121
-- EndRegion
SELECT
 [Project3].[Id] AS [Id],
 [Project3].[C2] AS [C1],
 [Project3].[C1] AS [C2],
 [Project3].[C3] AS [C3]
FROM ( SELECT
     [Limit1].[Id] AS [Id],
     [Limit1].[C1] AS [C1],
     [Limit1].[C2] AS [C2],
     [Limit1].[TmId] AS [TmId],
     CASE WHEN ([Extent4].[TmId] IS NULL) THEN CAST(NULL AS int) ELSE 1 END AS [C3]
     FROM (SELECT TOP (100) [Project2].[Id] AS [Id], [Project2].[TmId] AS [TmId], [Project2].[C1] AS [C1], [Project2].[C2] AS [C2]
         FROM ( SELECT
             [Extent1].[Id] AS [Id],
             [Extent2].[TmId] AS [TmId],
             CASE WHEN ([Extent2].[TmId] IS NULL) THEN CAST(NULL AS int) ELSE 1 END AS [C1],
             1 AS [C2]
             FROM  [dbo].[TextMessages] AS [Extent1]
             LEFT OUTER JOIN [dbo].[Tms] AS [Extent2] ON [Extent1].[Id] = [Extent2].[TmId]
             WHERE ( EXISTS (SELECT
                 1 AS [C1]
                 FROM [dbo].[Tmr] AS [Extent3]
                 WHERE ([Extent1].[Id] = [Extent3].[TmId]) AND ([Extent3].[IsDel] = 1) AND ([Extent3].[UserId] = @p__linq__0)
             )) OR ([Extent1].[CreatedBy_Id] = @p__linq__1)
         ) AS [Project2]
         ORDER BY [Project2].[Id] DESC ) AS [Limit1]
     LEFT OUTER JOIN [dbo].[Tmr] AS [Extent4] ON [Limit1].[Id] = [Extent4].[TmId]
) AS [Project3]
ORDER BY [Project3].[Id] DESC, [Project3].[TmId] ASC, [Project3].[C3] ASC

Using Workbench to export. Views no longer work

Posted: 29 May 2013 05:07 PM PDT

I'm using Workbench 5.2 on my production DB machine (Ubuntu Linux) and have been trying to export. On my first try I had the entire schema selected, including all tables and views, in the right-hand window.

When I imported this data into my test machine, all the views came in as tables. That's obviously not what I want, because they don't return any data now; each view is now a table.

But this is where I'm confused. I can see how to export just the tables. But how do I get the views exported and into the test machine?

How to debug a DB memory leak causing MySQL to go beyond its own limits?

Posted: 29 May 2013 03:47 PM PDT

We are having a problem with the database server of one of our applications, possibly caused by some code that is creating a problem in the way MySQL manages its memory.

Until the second week of April, our DB server had a stable memory consumption of about 5 GB (with a maximum of 7 GB). But then it started to increase without limit, even surpassing its theoretical maximum possible allocation.

This is our yearly munin graph showing the increase in the last 2 months:

This is another view from the last seven days after a restart in mysql:

This is the report created by mysqltuner.pl:

-------- Performance Metrics -------------------------------------------------
[--] Up for: 4d 1h 56m 28s (152M q [431.585 qps], 383K conn, TX: 593B, RX: 29B)
[--] Reads / Writes: 90% / 10%
[--] Total buffers: 5.3G global + 10.2M per thread (200 max threads)
[OK] Maximum possible memory usage: 7.3G (46% of installed RAM)
[OK] Slow queries: 0% (2K/152M)
[OK] Highest usage of available connections: 13% (26/200)
[OK] Key buffer size / total MyISAM indexes: 16.0M/300.0K
[OK] Key buffer hit rate: 100.0% (61M cached / 9 reads)
[OK] Query cache efficiency: 70.8% (103M cached / 146M selects)
[!!] Query cache prunes per day: 501819
[OK] Sorts requiring temporary tables: 0% (926 temp sorts / 3M sorts)
[!!] Joins performed without indexes: 39128
[OK] Temporary tables created on disk: 16% (821K on disk / 5M total)
[OK] Thread cache hit rate: 99% (26 created / 383K connections)
[!!] Table cache hit rate: 10% (845 open / 7K opened)
[OK] Open file limit used: 3% (148/4K)
[OK] Table locks acquired immediately: 99% (65M immediate / 65M locks)
[!!] InnoDB data size / buffer pool: 5.5G/5.0G

We are in unknown territory here. Any help will be appreciated!

Edit: Adding my.cnf

# The MySQL database server configuration file.

[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
character_set_server = utf8
collation_server = utf8_general_ci

user            = mysql
socket          = /var/run/mysqld/mysqld.sock
pid-file        = /var/run/mysqld/mysqld.pid
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
skip-external-locking
bind-address            = 0.0.0.0

# Fine Tuning
max_connections         = 200
key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
join_buffer_size        = 2M
sort_buffer_size        = 2M
read_buffer_size        = 2M
read_rnd_buffer_size    = 4M
thread_cache_size       = 128
thread_concurrency      = 24
table_cache             = 2K
table_open_cache        = 2K
table_definition_cache  = 4K

# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP

# innodb
innodb_buffer_pool_size         = 5G
innodb_flush_log_at_trx_commit  = 1
innodb_support_xa               = 1
innodb_additional_mem_pool_size = 32M
innodb_log_buffer_size          = 8M
innodb_flush_method             = O_DIRECT

# Query Cache Configuration
query_cache_limit               = 32M
query_cache_size                = 256M
query_cache_min_res_unit        = 256

# Logging and Replication
log_error                       = /var/log/mysql/error.log
log-slow-queries                = /var/log/mysql/slow.log
long_query_time                 = 1

# REPLICATION CONFIGURATION
log_bin                 = /var/log/mysql/mysql-bin.log
log-bin                 = mysql-bin
expire_logs_days        = 15
sync_binlog             = 1
server-id               = 1

ssl-ca   =/etc/ssl/private/repl/cacert.pem
ssl-cert =/etc/ssl/private/repl/master-cert.pem
ssl-key  =/etc/ssl/private/repl/master-key.pem

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

[isamchk]
key_buffer              = 16M

Updating a column after a specific time has passed in MySQL

Posted: 29 May 2013 05:13 PM PDT

  1. I have a table with a column named state (int(2)) and a column modify_time (timestamp, ON UPDATE CURRENT_TIMESTAMP).

  2. I have an update trigger which changes the value of the state column to 0 based on some condition.

  3. I want to set the value of the state column to 1 once 24 hours have passed since modify_time, if it is still 0.

  4. I tried the method below as a test:

CREATE EVENT myevent1
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 MINUTE
DO
  UPDATE test.mytabletable SET state = 0;

but it is not doing anything.

Is there an alternative method?
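
One hedged suggestion, assuming the same table name as above: no event fires at all unless the event scheduler is enabled, and an AT event is one-shot. A recurring event applying the 24-hour rule might look like this:

SET GLOBAL event_scheduler = ON;  -- events never fire while this is OFF

CREATE EVENT reset_stale_state
ON SCHEDULE EVERY 5 MINUTE
DO
  UPDATE test.mytabletable
  SET state = 1
  WHERE state = 0
    AND modify_time < NOW() - INTERVAL 24 HOUR;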

sp_executesql adds statements to executed dynamic script?

Posted: 29 May 2013 07:30 PM PDT

The Question:

As far as I can tell, sp_executesql adds statements to the beginning of the submitted dynamic SQL script. But a SQL Profiler trace does not capture the extra statements, and neither does DBCC OUTPUTBUFFER. So:

  1. Is there any way to see the extra statements added to submitted dynamic SQL batches by sp_executesql?
  2. Can anyone confirm definitively that my conclusions about the extra statements are correct/incorrect?

Background

I have a database where some objects (views, synonyms, SPs) are rewritten based on data in a Script table. If the database is moved to another server, a stored procedure loops through the rows of the Script table, replaces certain key values in the supplied SQL script with those defined for the new server context, and runs the script.

Everything was working fine until I made a few tweaks to add support for scripting permissions through this same mechanism. The database integrates with a vendor's product, and in each environment the vendor's database can have a different user that must be given permission to a particular view in my database for reporting purposes. So, I have to query for that user (from the vendor's database) then use that name to create the user in my database if it doesn't exist and finally grant SELECT permission. This required more lengthy scripting and doing dynamic-sql inside of dynamic-sql, so I wanted to pass in my outer script's @Debug parameter so I could see the extra script that was being generated and confirm its correctness before trying to execute it.

Other than changing what object types could be scripted and making the DROP script optional, the only material change I made to accommodate the @Debug parameter was to change this:

EXEC (@CreateSQL);  

to this:

EXEC sp_executesql @CreateSQL, N'@Debug bit', @Debug;  

Then I ran into a problem: the one stored procedure in my Script table could no longer be created, though the DROP just before it still worked okay. The result I got was this:

Msg 156, Level 15, State 1, Line 1

Incorrect syntax near the keyword 'PROCEDURE'.

This was very confusing, but after wrangling with it for quite some time, I finally figured out the problem: sp_executesql binds parameters to dynamic SQL by silently adding a DECLARE statement to the top before executing. CREATE PROCEDURE must be the only statement in the batch, but there is now an extra statement before the CREATE PROCEDURE line, so it throws an error. It does say Line 1 (which further misled me), but this is obviously adjusted by the engine so people don't get confused about the line numbers of their own script when dealing with errors.

The solution to the problem was to detect which type of object was being worked with and NOT pass in the @Debug parameter, so that scripts which must contain a single statement work okay. A quick change did the job:

IF @ScriptType IN ('Procedure', 'View', 'Function') BEGIN
    EXEC sp_executesql @CreateSQL;
END
ELSE BEGIN
    EXEC sp_executesql @CreateSQL, N'@Debug bit', @Debug;
END;

I could also have nested my dynamic SQL one level deeper, to create the procedure inside dynamic sql (again, inside the script in the table) but that was a less optimal solution in my case.

I suspect that using OUTPUT variables with sp_executesql would also add one or more statements to the end of the script to enable the engine to capture them, most likely in a SELECT statement that is silently swallowed up.

Combining data from two databases with same structure into one database

Posted: 29 May 2013 02:43 PM PDT

I have five SQL Server databases with the same schema in five different geographical locations. These locations send periodic backups to the central server, which I restore into the five respective databases.

The requirement now is that data from these five databases MUST be combined into one database for consolidation.

Any suggestion for the solution is most welcome.
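
A minimal sketch of the usual pattern, with all database, table, and column names assumed: tag each row with its source site while copying, so rows from the five databases stay distinguishable in the consolidated table.

INSERT INTO Consolidated.dbo.Orders (SourceSite, OrderId, OrderDate)
SELECT 'Site1', OrderId, OrderDate
FROM   Site1Db.dbo.Orders;
-- Repeat per restored database (or drive it with dynamic SQL); a composite
-- key of (SourceSite, OrderId) avoids collisions between sites that reuse
-- identity values.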

How do I turn off ALTER DATABASE script output in VS 2012?

Posted: 29 May 2013 02:34 PM PDT

I am using SQL Server 2005 in conjunction with Visual Studio 2012 and a SSDT database project.

When I publish my VS project and generate the script to run against my database, VS includes the following in the script. I really don't want the script to go changing database properties, particularly the PAGE_VERIFY setting. I notice that the script doesn't set the properties back the way it found them. How do I turn this off? I have spent time in the Tools->Options dialog and I just don't see the setting.

IF EXISTS (SELECT 1
       FROM   [master].[dbo].[sysdatabases]
       WHERE  [name] = N'$(DatabaseName)')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET ANSI_NULLS ON,
            ANSI_PADDING ON,
            ANSI_WARNINGS ON,
            ARITHABORT ON,
            CONCAT_NULL_YIELDS_NULL ON,
            QUOTED_IDENTIFIER ON,
            ANSI_NULL_DEFAULT ON,
            CURSOR_DEFAULT LOCAL
        WITH ROLLBACK IMMEDIATE;
END
GO

IF EXISTS (SELECT 1
       FROM   [master].[dbo].[sysdatabases]
       WHERE  [name] = N'$(DatabaseName)')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET PAGE_VERIFY NONE
        WITH ROLLBACK IMMEDIATE;
END
GO

Why is MySQL order by performance poor within a single partition of a partitioned table?

Posted: 29 May 2013 03:40 PM PDT

I have to store some sequence-numbered data in MySQL: about 300,000 data items per day over about a 10-year span. Let's say the table structure is just a sequence number (a BIGINT) and data (a VARCHAR). One very common query I'll be making is for all data for a single day, ordered by sequence number.

Storing the data in one table per day works really well. Querying all data from a table takes 0.8 seconds (which is acceptable), and adding ORDER BY on the sequence number takes it up to 1.0 seconds (still acceptable), but this results in a massive number of tables that I'd rather have fewer of.

Storing the data in one table per month (and adding an indexed day-number field), the time to get a day's data goes up to 1.6 seconds (not great), but adding partitioning by day number brings it right back down to 0.8.

However, when I add the ORDER BY on the sequence number to the partitioned-table query, the time goes up to 2.5 seconds! Does this make sense? I would have thought that since my WHERE clause (day number = X) instantly limits the scan to a single partition, it should perform about the same as one of my original day tables (which it does, until I add the ORDER BY clause; then everything goes to hell).

It's like it is doing the ORDER BY before checking the partitions or something. Anyone got any ideas? Either "yes, that'll happen, and here's why", or "here's what you need to do to speed it up". One guess is sketched below.

Thanks.
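
A hedged guess at a fix, with all names assumed: if the index used for the ORDER BY does not lead with the partition/filter column, MySQL may have to filesort the day's rows; making (day_num, seq) the leading index lets a single-partition scan come back already in sequence order.

CREATE TABLE day_data (
    day_num TINYINT NOT NULL,
    seq     BIGINT  NOT NULL,
    data    VARCHAR(255),
    PRIMARY KEY (day_num, seq)  -- partition column must be in every unique key
)
PARTITION BY RANGE (day_num) (
    PARTITION p01  VALUES LESS THAN (2),
    PARTITION p02  VALUES LESS THAN (3),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Should prune to one partition and read in index order, with no filesort:
SELECT data FROM day_data WHERE day_num = 1 ORDER BY seq;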

Upgrade SQL Server 2005 Enterprise to SQL Server 2008 R2 Standard

Posted: 29 May 2013 02:13 PM PDT

I understand that this is not a supported path, but does anyone have any insight into how to accomplish this? I can't seem to find any documents on the subject. I need my QA (2005) to match my production (2008 R2). I just joined this company and this is one of my first tasks...

MySQL - How to optimize retrieval time in a table

Posted: 29 May 2013 07:14 PM PDT

I have a query like the one below, against a single table with 200 million records. I am using BTREE indexes on the table.

mysql> SELECT COUNT(DISTINCT id) FROM [tablename] WHERE [columname] >= 3;
+--------------------+
| COUNT(DISTINCT id) |
+--------------------+
|            8242063 |
+--------------------+
1 row in set (3 min 23.53 sec)

I am not satisfied with this timing. How can I reduce the query time to under 30 seconds? Any suggestions would be very helpful; one idea is sketched below.

Thank you!
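
One hedged idea, keeping the placeholder names from the question: a composite index on the filter column plus id can let the whole count be answered from an index range scan instead of from the table rows.

ALTER TABLE tablename ADD INDEX idx_col_id (columname, id);

-- With that index in place, the same query can be served from the index alone:
SELECT COUNT(DISTINCT id) FROM tablename WHERE columname >= 3;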

SQL Server 2005 Replication

Posted: 29 May 2013 05:13 PM PDT

I am in the process of setting up replication between 2 remote servers, where server 1 is the Distributor and Publisher and server 2 is the Subscriber.

server 1
windows 2003 server
192.168.10.1 connected by vpn
SQL Server 2005
domain1.local

server 2
windows 2003 server
192.168.10.6 connected by vpn
SQL Server 2005
domain2.local

When I set up replication, everything looked fine until I looked at the sync status, which said:

The Agent could not be started.
An exception occurred while executing a Transact-SQL statement or batch.
SQLServerAgent error: request to run job server1-username (blah blah blah) from user sa refused because the job is already running from a request by user sa. Changed database context to... (error 22022).

I have cleared the jobs in SQL Server Agent as well as restarted the service.

Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SQL Server Management Studio, just not set up replication.

Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.

A database trigger is an alternative for?

Posted: 29 May 2013 07:04 PM PDT

I had the following quiz question and could not answer it myself because of a little confusion.

Please answer it and tell me why.

A database trigger is an alternative for?

A. Stored procedure
B. Primary key for implementing referential integrity
C. Foreign key for implementing referential integrity
D. All of the above

How to do something like UPDATE DELAYED in MySQL

Posted: 29 May 2013 08:14 PM PDT

I have an averages table that should keep track of an average value over time. I don't want a row for each value, just a single row that continuously updates the average. What I've come up with is this:

set @value = 4;
set @name = "myAverageValue";
UPDATE `timing` SET
  `max` = greatest(`max`, @value),
  `average` = `average` + ((@value - `average`) / (`count` + 1)),
  `count` = `count` + 1
WHERE `name` = @name

Many clients may be doing this at the same time, and I don't want any locking issues. I don't care what order the updates run in, since in the end it will all come out the same. I just want a query that sends the UPDATE to the database, which processes it eventually, similar to an INSERT DELAYED. Does UPDATE LOW_PRIORITY do this, or does it cause the client to wait until the table is available?
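
For reference, the placement of the modifier being asked about; LOW_PRIORITY only changes when the statement acquires its table lock on table-locking engines, so whether the client still blocks is exactly the open question:

UPDATE LOW_PRIORITY `timing` SET
  `max` = greatest(`max`, @value),
  `average` = `average` + ((@value - `average`) / (`count` + 1)),
  `count` = `count` + 1
WHERE `name` = @name;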

Replication master binlog rotation when network is unavailable

Posted: 29 May 2013 04:13 PM PDT

I recently experienced an issue where the binlog file on the master rotated while network connectivity between the slave and master was unavailable.

After solving the network issue, the slave was not able to follow the master from the binlog file and position it had been using.

That was solved by purging the binlogs up to the last binlog on the master, and pointing the slave to that last binlog and the position it had previously been following.

I am wondering, is this behaviour normal?

Is there a repository for pgadmin plugins?

Posted: 29 May 2013 06:14 PM PDT

I've been using PostgreSQL with pgAdmin III for a while now, and it's been bugging me that there is a Plugins menu option that is empty. I've Googled a bit and found a plugin here and there, but I wanted to know whether there is a repository I'm missing out on.

If you don't know of a repo but are aware of additional plugins, please let me know about them as well.

Get and Put performance testing on Google BigTables (and other integrated DBs)

Posted: 29 May 2013 02:43 PM PDT

What are some effective ways to perform programmatic performance testing on database operations, especially in environments where the databases themselves do not offer dedicated tools?

For example, in Google App Engine, entire page loads are evaluated as one operation, which may include several distinct database operations. This problem is also likely present in SQLite and other integrated DBs. As it is difficult to completely abstract the (equivalent of) selects and inserts that need to be tested, are there any recommended database tools for performing more thorough diagnostics on these sorts of queries?

In Google App Engine, what is the most effective many-to-many join model?

Posted: 29 May 2013 02:24 PM PDT

The BigTable design rejects many of the philosophies of standard relational models, explicitly preferring denormalization to a big host of tiny tables.

One of the larger areas where this is a problem is the modelling of many-to-many joins.

One way to model these joins is to violate first normal form and put all the interesting data in a db.ListProperty(). While this remains searchable from a query, I have not yet explored the performance implications of searching a list versus pulling in another table.

Since joins are not possible, tables can instead be linked through RelationshipProperties. Therefore, with enough effort, the standard intersection table (a table with a joint primary key referencing both parent tables) can be created. Has anyone explored the performance hits of the various implementations?

-Edit-

While the List of Keys suggested in the documentation is indeed one way to do it, I'm interested in the performance and anomaly rates of that and other implementations. Is there utility in creating mutual lists of keys? Is the effort involved in the repeated gets worth the price? Is there a better way to do it?
