Monday, June 17, 2013

[how to] Advice on NoSQL (Google App Engine NDB) Schema?

Advice on NoSQL (Google App Engine NDB) Schema?

Posted: 17 Jun 2013 08:52 PM PDT

Some background: I am very much a novice at anything database related, my knowledge caps at creating a very basic blog with MySQL. I am currently switching from PHP/MySQL to Python and as of now I am working in Google App Engine using their NDB Datastore, but eventually will move to a hosted service running MongoDB.

I am trying to set up a simple discography-cataloging data structure with Artist, Album, and Track. So far this is what I have been trying, and it doesn't work at all.

class Track(ndb.Model):
    name = ndb.StringProperty(required=True)
    album = ndb.StringProperty(required=True)
    artists = ndb.JsonProperty(required=True)
    # ...other keys...

class Album(ndb.Model):
    name = ndb.StringProperty(required=True)
    artists = ndb.JsonProperty(required=True)
    tracks = ndb.KeyProperty(kind=Track, repeated=True)
    # ...other keys...

class Artist(ndb.Model):
    name = ndb.StringProperty(required=True)
    albums = ndb.KeyProperty(kind=Album, repeated=True)
    # ...other keys...

Now, what I originally tried with Track (but got an error from) was:

class Track(ndb.Model):
    name = ndb.StringProperty(required=True)
    album = ndb.KeyProperty(kind=Album)
    artists = ndb.KeyProperty(kind=Artist, repeated=True)
    # ...other keys...

However, I get an error like NameError: name 'Album' is not defined. Going with album = StringProperty() and artists = JsonProperty() makes it extremely hard to search the database.

So, my question: looking at what I have now, I got the idea of reversing the relationships, and I am wondering whether that is a good idea or whether I should find another way.

class Artist(ndb.Model):
    name = ndb.StringProperty(required=True)
    # ...other keys...

class Album(ndb.Model):
    name = ndb.StringProperty(required=True)
    artists = ndb.KeyProperty(kind=Artist, repeated=True)
    # ...other keys...

class Track(ndb.Model):
    name = ndb.StringProperty(required=True)
    album = ndb.KeyProperty(kind=Album)
    artists = ndb.KeyProperty(kind=Artist, repeated=True)
    # ...other keys...

Here Artist doesn't know which Albums it has; each Album just holds the Artist.key values for its artists. Likewise, Album doesn't know which Tracks it has; each Track holds the Album.key for the album it is on.

So when searching I can do...

def get_album(album, artist):
    artist_key = ndb.Key(Artist, artist.lower())
    with_artist = Album.query(Album.artists == artist_key)
    return with_artist.filter(Album.name_lower == album.lower())

def get_tracks(album, artist):
    album_key = ndb.Key(Album, ':'.join([artist.lower(), album.lower()]))
    return Track.query(Track.album == album_key)

Where Album.name_lower is a ComputedProperty(lambda self: self.name.lower()), which Artist and Track also have.
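One note on the NameError mentioned above: NDB's KeyProperty also accepts the kind as a string (for example ndb.KeyProperty(kind='Album')), so a model can reference a class that is defined later in the file. A minimal pure-Python sketch of that late-binding idea, with a hypothetical registry standing in for what NDB keeps internally:

```python
# Pure-Python sketch of the string-kind idea: the kind is stored as a
# string at class-definition time and resolved to a class only later,
# once every model exists. Property, Model, and REGISTRY are made-up
# stand-ins for what NDB keeps internally.

class Property:
    def __init__(self, kind=None):
        self.kind = kind  # stored as a string; nothing resolved yet

REGISTRY = {}

class Model:
    def __init_subclass__(cls):
        REGISTRY[cls.__name__] = cls  # register each model by name

class Track(Model):
    album = Property(kind='Album')  # 'Album' not defined yet: no NameError

class Album(Model):
    pass

# Resolution happens later, when the registry is complete:
resolved = REGISTRY[Track.album.kind]
```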

Need some good tutorials on RMAN, Data Guard, RAC, and performance tuning

Posted: 17 Jun 2013 08:29 PM PDT

As I am very new to Oracle databases, I need some good tutorials, configuration files, and books for ASM, OMF (Oracle Managed Files), Data Guard, RAC, troubleshooting, and performance tuning. Thanks.

To select from all tables inside the schema in PostgreSQL

Posted: 17 Jun 2013 07:19 PM PDT

Is it possible to select from all tables inside the schema? I got all the table names from

select table_name from information_schema.tables

but I am unable to use it to do my query.
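Since SQL has no single "select from every table" statement, the usual approach is a two-step one: run the catalog query first, then generate one SELECT per returned name. A sketch of that pattern, demonstrated with SQLite's catalog from the Python standard library (the table names here are made up; in PostgreSQL the first statement would be the information_schema.tables query above):

```python
import sqlite3

# Sketch of the two-step pattern: query the catalog for table names,
# then generate and run one SELECT per name. Demonstrated with
# SQLite's catalog (stdlib); in PostgreSQL the first statement would
# be an information_schema.tables query. Table names are made up.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE a (x INTEGER)')
con.execute('INSERT INTO a VALUES (1)')
con.execute('CREATE TABLE b (y INTEGER)')
con.execute('INSERT INTO b VALUES (2)')

names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
rows = {n: con.execute(f'SELECT * FROM "{n}"').fetchall() for n in names}
```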

Error while opening Oracle 11g Database

Posted: 17 Jun 2013 05:41 PM PDT

I have a problem with an Oracle 11g database.

I shut down my computer normally this morning, but in the evening when I tried to connect to my database I found a problem. I cannot connect as a normal user, so I connected as SYSDBA to shut down the database and remount it (this works normally). Now when I try to open the database I get this error:

    ERROR in line 1:
    ORA-00600: Internal error code, arguments: [kcratr_nab_less_than_odr],
    [1], [90], [13770], [13771], [], [], [], [], [], [], []

Can anyone help me, please?

What are the basic job functions required of a remote DBA for Oracle and SQL Server?

Posted: 17 Jun 2013 05:38 PM PDT

When looking for a remote DBA to manage and support your database infrastructure what are the basic job functions that should be expected?

On the flip side, what would be considered outside the core job functions for a remote DBA?

I am looking for general high level functions that would be performed in a mixed server database environment (mostly sql server and oracle) containing custom and COTS (3rd party) applications/databases.

I am also looking for appropriate Service Level Agreements for performing those various job functions. Of course, this also depends on the type of system being managed; for example, our core operations database runs 24x7 and is an HA system, so response time is key.

Full disclosure: I am on a team creating a Statement of Work for remote DBA services, and I am trying to keep technical managers from simply googling "DBA job functions" and throwing those into the SOW.

Hazards of Upgrading SQL Server Express 2008 to R2

Posted: 17 Jun 2013 06:44 PM PDT

A client of mine reached the 4GB database limit on a SQL Server 2008 Express. As a test I installed the R2 version on another machine and restored a backup of the database.

I'd like to upgrade the actual server itself, though I'm worried about the possible pitfalls and consequences, above all compromising the system's stability.

This is because additional critical applications (including an ERP I don't fully understand) are running on this machine, and re-installing or setting them up again would require calling another company.

I've given the client 3 options:

  1. Get new hardware in order to host the R2 server there (independent of current server).
  2. Use current setup that was a test.
  3. Take the risk and install the R2 update on the actual server, accepting the 5% (or however much it is) chance of something in the system going unstable.

I've also thought about cloning the whole computer using Clonezilla (or Ghost, etc.), but as I'm supporting the site remotely this would require me to travel ~500 km.

Also, this is a serious problem as they cannot add new records to this database.

Error while restoring a Database from an SQL dump

Posted: 17 Jun 2013 07:22 PM PDT

I am extremely new to MySQL and am running it on Windows. I am trying to restore a database from a dump file, but I get the following error:

    $ mysql -u root -p -h localhost -D database -o < dump.sql
    ERROR: ASCII '\0' appeared in the statement, but this is not allowed
    unless option --binary-mode is enabled and mysql is run in
    non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is
    expected. Query: 'SQLite format 3'.

I tried mysql -u root -p -h localhost -D database --binary-mode -o < dump.sql, but this gave me ERROR at line 1: Unknown command '\☻'. It is a 500 MB dump file, and when I view its contents in gVim, all I can see are expressions and data which are not comprehensible. Also, when I try to copy contents from the file to post here, all I can copy is: SQLite format 3. This seems strange.
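For what it's worth, the Query: 'SQLite format 3' in the first error is a strong hint: that string is the 16-byte header of a SQLite database file, so the "dump" may be a binary SQLite database rather than a mysqldump text file, which the mysql client cannot replay. A small sketch of checking the magic bytes (the file here is created on the fly for illustration; a real pre-flight check would read dump.sql the same way):

```python
import os
import sqlite3
import tempfile

# Sketch: create a throwaway SQLite file and read its 16-byte header,
# the same header the error message quoted. A real pre-flight check
# would read the dump file the same way before piping it to mysql.
fd, path = tempfile.mkstemp()
os.close(fd)
con = sqlite3.connect(path)
con.execute('CREATE TABLE t (x INTEGER)')
con.commit()
con.close()

with open(path, 'rb') as f:
    is_sqlite = f.read(16).startswith(b'SQLite format 3')
os.remove(path)
```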

SSIS Rollback Package

Posted: 17 Jun 2013 04:44 PM PDT

I have several data flow tasks in my package that depend on other data flow tasks. For testing reasons I want to run the entire package and then roll it back after it completes. How do I do it?

Does Database Detach or Offline Clear the Cache Quickest?

Posted: 17 Jun 2013 06:03 PM PDT

A buddy of mine told me today that instead of bouncing SQL Server, I could simply detach and then re-attach a database, and this action would clear the given database's pages and plans from cache. I disagreed and provide my evidence below. If you disagree with me or have a better rebuttal, then by all means supply it.

I am using AdventureWorks2012 on this version of SQL Server:

  SELECT @@VERSION;  Microsoft SQL Server 2012 - 11.0.2100.60 (X64)  Developer Edition (64-bit) on Windows NT 6.1  (Build 7601: Service Pack 1)  

Having loaded the database, first run Jonathan K's AW fattening script, found here:

AW Get Fat

    ---------------------------
    -- Step 1: Bpool Stuff?
    ---------------------------
    USE [AdventureWorks2012];
    GO

    SELECT
          OBJECT_NAME(p.object_id) AS [ObjectName]
        , p.object_id
        , p.index_id
        , COUNT(*) / 128 AS [buffer size(MB)]
        , COUNT(*) AS [buffer_count]
    FROM
        sys.allocation_units AS a
        INNER JOIN sys.dm_os_buffer_descriptors AS b
            ON a.allocation_unit_id = b.allocation_unit_id
        INNER JOIN sys.partitions AS p
            ON a.container_id = p.hobt_id
    WHERE
        b.database_id = DB_ID()
        AND p.object_id > 100
    GROUP BY
          p.object_id
        , p.index_id
    ORDER BY
        buffer_count DESC;

The result is shown here: [screenshot]

Detach and re-attach the database and then re-run the query.

    ---------------------------
    -- Step 2: Detach/Attach
    ---------------------------
    -- Detach
    USE [master];
    GO
    EXEC master.dbo.sp_detach_db @dbname = N'AdventureWorks2012';
    GO

    -- Attach
    USE [master];
    GO

    CREATE DATABASE [AdventureWorks2012] ON
        ( FILENAME = N'C:\sql server\files\AdventureWorks2012_Data.mdf' ),
        ( FILENAME = N'C:\sql server\files\AdventureWorks2012_Log.ldf' )
    FOR ATTACH;
    GO

What is in the bpool now?

    ---------------------------
    -- Step 3: Bpool Stuff?
    ---------------------------
    USE [AdventureWorks2012];
    GO

    SELECT
          OBJECT_NAME(p.object_id) AS [ObjectName]
        , p.object_id
        , p.index_id
        , COUNT(*) / 128 AS [buffer size(MB)]
        , COUNT(*) AS [buffer_count]
    FROM
        sys.allocation_units AS a
        INNER JOIN sys.dm_os_buffer_descriptors AS b
            ON a.allocation_unit_id = b.allocation_unit_id
        INNER JOIN sys.partitions AS p
            ON a.container_id = p.hobt_id
    WHERE
        b.database_id = DB_ID()
        AND p.object_id > 100
    GROUP BY
          p.object_id
        , p.index_id
    ORDER BY
        buffer_count DESC;

And the result: [screenshot]

Are all the reads logical at this point?

    --------------------------------
    -- Step 4: Logical Reads Only?
    --------------------------------
    USE [AdventureWorks2012];
    GO

    SET STATISTICS IO ON;
    SELECT * FROM DatabaseLog;
    GO
    SET STATISTICS IO OFF;

    /*
    (1597 row(s) affected)
    Table 'DatabaseLog'. Scan count 1, logical reads 782, physical reads 0,
    read-ahead reads 768, lob logical reads 94, lob physical reads 4,
    lob read-ahead reads 24.
    */

And we can see that the buffer pool was not totally blown away by the detach/attach. Seems like my buddy was wrong. Does anyone disagree or have a better argument?

Another option is to offline and then online the database. Let us try that.

    --------------------------------
    -- Step 5: Offline/Online?
    --------------------------------
    ALTER DATABASE [AdventureWorks2012] SET OFFLINE;
    GO
    ALTER DATABASE [AdventureWorks2012] SET ONLINE;
    GO

    ---------------------------
    -- Step 6: Bpool Stuff?
    ---------------------------
    USE [AdventureWorks2012];
    GO

    SELECT
          OBJECT_NAME(p.object_id) AS [ObjectName]
        , p.object_id
        , p.index_id
        , COUNT(*) / 128 AS [buffer size(MB)]
        , COUNT(*) AS [buffer_count]
    FROM
        sys.allocation_units AS a
        INNER JOIN sys.dm_os_buffer_descriptors AS b
            ON a.allocation_unit_id = b.allocation_unit_id
        INNER JOIN sys.partitions AS p
            ON a.container_id = p.hobt_id
    WHERE
        b.database_id = DB_ID()
        AND p.object_id > 100
    GROUP BY
          p.object_id
        , p.index_id
    ORDER BY
        buffer_count DESC;

It appears that the offline/online operation worked a lot better.

[screenshot]

MySQL, optimize and replication

Posted: 17 Jun 2013 05:54 PM PDT

I have two MySQL (MariaDB actually) servers, a master and a slave with standard replication. Two times already in less than a month I've had my slave become unusable.

This is what I experienced:

  • Notice a drop in query per second on the slave with my monitoring tool
  • Connect to the server with the mysql client, run SHOW SLAVE STATUS \G
  • Notice everything looks normal (Waiting for master to send event, seconds behind master = 0, etc)
  • Try and stop the replication with STOP SLAVE
  • STOP SLAVE hangs
  • Impossible to connect to the server with a client
  • Execute SHOW SLAVE STATUS \G on another already connected client, and it hangs too.

After that, I saw the MySQL process using CPU, and I saw a lot of disk writes with iotop, so I let it do its stuff for a while.

After 15 minutes I kill -9'd the MySQL process because I got frustrated. I then restarted MySQL, and I could connect and run queries again. Good.

I execute SHOW SLAVE STATUS \G which now works, and it looks normal: seconds behind master = 0, waiting for events. Then I go on the master and execute CREATE DATABASE foobar to see if the replication works, and it doesn't.

Again, at that point I see a lot of disk writes, so I assume the replication is catching up. Two hours later, I finally decide to check SHOW PROCESSLIST, and all I see being executed is an OPTIMIZE TABLE table1 ..., and then I remember that I had that exact same problem last time.

I kill the process running that query, then I re-execute SHOW PROCESSLIST, and sure enough it's catching up with the master. SHOW SLAVE STATUS \G reported 15k seconds behind master, which was consistent.

Anyway, after that wall of text, my questions:

  • Why does an OPTIMIZE query block replication from catching up?
  • Can I prevent those OPTIMIZE queries? I didn't set up any cron job to do that, and as far as I can see, nothing in /etc/cron.{d,daily,hourly,weekly} does this.

Thanks.

MyISAM key buffer

Posted: 17 Jun 2013 02:09 PM PDT

Good evening,

I want to know how I can use information about MyISAM to calculate:

  • the size of the key buffer,
  • the size of the key buffer used,
  • the write ratio,
  • the read ratio,
  • and the MyISAM key cache hit rate.

Thank you.

Optimizing disk space for MyISAM tables: what is the benefit of ALTER and ANALYZE if you already do OPTIMIZE?

Posted: 17 Jun 2013 12:45 PM PDT

Per this post, in order to optimize the storage of a MyISAM table, you need to:

    OPTIMIZE TABLE ;
    ALTER TABLE ENGINE=MyISAM ;
    ANALYZE TABLE ;

It's obvious what OPTIMIZE does for disk space, but what do ALTER and ANALYZE do when used after OPTIMIZE? I am asking specifically about disk storage; for example, I don't care that ANALYZE improves join performance, etc.

Configure SQL Server Database for High Availability & Warehousing

Posted: 17 Jun 2013 11:09 AM PDT

I have 3 DB servers (DB1, DB2 & DB3 - all on Win 2008 R2 with SQL Standard 2008 R2 in the same datacenter). Let's assume AdventureWorks as an example database.

My goals:

  • Set up DB mirroring with automatic failover, i.e. DB2 becomes the mirror server
  • Set up hourly transaction log backups for point-in-time recovery
  • Set up a reporting/warehouse environment, i.e. DB3 becomes the warehouse/reporting server

Setting up DB mirroring and hourly transaction log backups is easy, but my question is: what are my options for replicating the principal database to the DB3 server for reporting/warehousing needs? Can I use log shipping, considering I already take transaction log backups?

The AdventureWorks database on DB3 needs to be in read-only mode.

EDIT: The reporting database can be up to a day behind.

Select query using a map [closed]

Posted: 17 Jun 2013 01:06 PM PDT

CREATE TABLE grid_rows (
    [row_id] [int] NOT NULL,
    [column_id] [smallint] NOT NULL,
    [column_val] [decimal](18, 6) NULL
)
-- (row_id, column_id) is a composite primary key

For a given Map<column_id, column_value> what would be the select query to get the row_id?

Table values:

1 k1 v1
1 k2 v2
1 k3 v3
2 k1 v1
2 k5 v5
2 k6 v6

How do I get the row_id (i.e., 1) that matches all the key-value pairs of {k1:v1, k2:v2, k3:v3}?
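One common shape for this kind of query is the "relational division" pattern: filter to the map's (column_id, column_val) pairs, group by row_id, and keep only the groups that matched every pair. A sketch using SQLite from the Python standard library (the SQL itself is portable; values are stored as text here for brevity):

```python
import sqlite3

# Sketch of the "relational division" pattern: OR together one
# predicate per (column_id, column_val) pair, group by row_id, and
# keep only row_ids that matched every pair. This works because
# (row_id, column_id) is the primary key, so each pair can match at
# most once per row_id.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE grid_rows (row_id INT, column_id TEXT, column_val TEXT)')
con.executemany('INSERT INTO grid_rows VALUES (?, ?, ?)', [
    (1, 'k1', 'v1'), (1, 'k2', 'v2'), (1, 'k3', 'v3'),
    (2, 'k1', 'v1'), (2, 'k5', 'v5'), (2, 'k6', 'v6'),
])
wanted = {'k1': 'v1', 'k2': 'v2', 'k3': 'v3'}
pairs = ' OR '.join(['(column_id = ? AND column_val = ?)'] * len(wanted))
params = [x for kv in wanted.items() for x in kv]
hits = [r[0] for r in con.execute(
    f'SELECT row_id FROM grid_rows WHERE {pairs} '
    f'GROUP BY row_id HAVING COUNT(*) = ?', params + [len(wanted)])]
```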

How to set password for PostgreSQL database with a script?

Posted: 17 Jun 2013 11:30 AM PDT

In a bash script I'm writing, I need to create a database and set a password that is generated automatically by my script.

The problem is I can't find a way to set up the password without user interaction.

I need something like this:

$ createdb mydb -p $(cat password_file)  

Any ideas?
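A sketch of one possible approach, assuming psql and createdb are on the PATH and the script connects as a superuser: in PostgreSQL a password belongs to a role rather than to the database, so the script would create the role with a generated password first and then create the database owned by that role. The names myuser and mydb below are made up:

```python
import secrets

# Sketch, assuming psql/createdb are on PATH and run as a superuser.
# In PostgreSQL the password belongs to a role, not to the database,
# so the role is created with the generated password first and the
# database is then created with that role as owner.
password = secrets.token_urlsafe(16)  # generated non-interactively
cmds = [
    ['psql', '-c', f"CREATE ROLE myuser LOGIN PASSWORD '{password}';"],
    ['createdb', '-O', 'myuser', 'mydb'],
]
# subprocess.run(cmd, check=True) would execute each step; the sketch
# only builds the commands so it has no side effects.
for cmd in cmds:
    print(cmd[0])  # the password itself is not echoed
```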

How to create a table in MySQL with automatic numeration of columns?

Posted: 17 Jun 2013 04:44 PM PDT

Example: my data is from the table BIKE.

Bike_No   Repair_Date     Repair_Cost
--------  -----------     ------------
ABC1234   2013-01-05      50.00
BMX5678   2013-02-04      75.00
ABC1234   2013-01-25      20.00
BON3333   2013-03-06      80.00
DEB1111   2013-08-04      40.00
ABC1234   2013-09-06      50.00

I want to see the repair cost for each bike over a date range (example: from 01/01/2013 to 31/03/2013), and the result table must display the repair cost with Bike_No, with the affected months as columns. How do we generate the affected-month columns based on the queried date range?
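MySQL has no native PIVOT, so the usual workaround is one SUM(CASE ...) column per month, with the column list built dynamically from the queried date range. A sketch of the idea using SQLite from the Python standard library (table and column names follow the question, lowercased; the month list would normally be derived from the range):

```python
import sqlite3

# Sketch: each month in the queried range becomes a SUM(CASE ...)
# column, built dynamically. Demonstrated with SQLite (stdlib); the
# same conditional-aggregation idea applies in MySQL.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE bike (bike_no TEXT, repair_date TEXT, repair_cost REAL)')
con.executemany('INSERT INTO bike VALUES (?, ?, ?)', [
    ('ABC1234', '2013-01-05', 50.0),
    ('ABC1234', '2013-01-25', 20.0),
    ('BMX5678', '2013-02-04', 75.0),
    ('BON3333', '2013-03-06', 80.0),
])
months = ['2013-01', '2013-02', '2013-03']  # derived from the date range
cols = ', '.join(
    f"SUM(CASE WHEN repair_date LIKE '{m}%' THEN repair_cost ELSE 0 END) AS \"{m}\""
    for m in months)
rows = con.execute(
    f'SELECT bike_no, {cols} FROM bike GROUP BY bike_no ORDER BY bike_no'
).fetchall()
```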

Simplify subqueries with join

Posted: 17 Jun 2013 11:30 AM PDT

I'm trying to create a report in SQL Server Reporting Services that shows the availability of multiple sites. The sites are monitored by a tool that uses a SQL Server 2012 database to store the data.
I've created a query that shows me the results and I'm wondering if I can rewrite this query in a single select without the subqueries.

My current query looks like this:

    SELECT
          t1.CHECKID
        , t1.CHECKNAME
        , t2.Startdate
        , t2.Enddate
        , t2.StatusID
        , t2.Month
        , t2.Year
    FROM
        (SELECT CHECKID, CHECKNAME FROM CHECKS) AS t1
    LEFT JOIN
        (SELECT
              STATUSCHANGES.CHECKID
            , STATUSCHANGES.DATETIME AS Startdate
            , MIN(STATUSCHANGES_1.DATETIME) AS Enddate
            , STATUSCHANGES.STATID AS StatusID
            , CHECKS.CHECKNAME
            , STATUS.STATUSNAME
            , DATEPART(MONTH, STATUSCHANGES.DATETIME) AS Month
            , DATEPART(YEAR, STATUSCHANGES.DATETIME) AS Year
        FROM
            STATUSCHANGES
            INNER JOIN STATUSCHANGES AS STATUSCHANGES_1
                ON STATUSCHANGES.CHECKID = STATUSCHANGES_1.CHECKID
                AND STATUSCHANGES.DATETIME < STATUSCHANGES_1.DATETIME
            INNER JOIN CHECKS ON STATUSCHANGES_1.CHECKID = CHECKS.CHECKID
            INNER JOIN STATUS ON STATUSCHANGES.STATID = STATUS.STATUSID
        GROUP BY
            STATUSCHANGES.CHECKID,
            STATUSCHANGES.DATETIME,
            STATUSCHANGES.STATID,
            CHECKS.CHECKNAME,
            STATUS.STATUSNAME
        HAVING
            (STATUSCHANGES.STATID IN (3))
            AND DATEPART(Q, STATUSCHANGES.DATETIME) = 2
        ) AS t2
        ON t1.CHECKID = t2.CHECKID
    ORDER BY t2.Startdate

Is it possible to integrate the first subquery (t1) into the FROM clause of subquery t2?

How important are MySQL's InnoDB logs?

Posted: 17 Jun 2013 11:07 AM PDT

I'm basically concerned about the following two settings, which are at their default values:

innodb_log_buffer_size  1048576
innodb_log_file_size    5242880

I haven't changed them for two reasons: one, I don't know what they are, and two, changing them is risky, according to articles I've read.

My application's workload is write-intensive and I use only InnoDB tables. Currently I've set innodb_buffer_pool_size to 5 GB; my database size is around 7 GB and increases at a rate of 400 MB per day. Also note, this 400 MB of inserts is done within a period of only about 30 minutes, and some might refer to this as a "bulk insert".

As far as I understand, the log files keep a record of all the insert/update operations. Why do I need this?

What are these log files and why should I increase their size?

And lastly, if I do not want to keep a log, how should I fine-tune my database?

Given two known statements, how can I replicate deadlock?

Posted: 17 Jun 2013 10:43 AM PDT

I have two statements (an update against every row in one table (call it table A) and a delete on another table that looks up rows in table A) that I know are causing occasional deadlocks. It seems that there is an X lock and a U lock on the same primary key index of table A.

I have been trying, and failing, to replicate the deadlock in SQL Server Management Studio. Should I be able to?

Separately, the delete statement is very inefficient, and I think I can fix the issue by creating a covering index so that the primary key index mentioned above is no longer included in the actual execution plan of the delete statement. Given that ultimately the same rows are required by both statements, will this guarantee no deadlocks, or simply reduce the chance of one by giving SQL Server a different path to the data?

SQL Server giving intermittant connection failures

Posted: 17 Jun 2013 02:22 PM PDT

I have a SQL Server 2000 running on Windows 2003 Server that is giving me intermittent connection failures.

The user connections are getting closed, but only established connections that have been sitting idle. That is, when users connect to an application running against this server, the programs start up fine; the problem usually shows up after the connection has been sitting for a bit. I'm wondering if there is some odd setting that is killing older, inactive connections.

What could be causing this? What can I check to further troubleshoot it?

EDIT:

Is it possible that my problems are caused by running SQL Server 2000 Personal Edition, where the old server that died and was replaced ran Standard? I have no way to verify the old version, but is this possible?

Custom sp_who/sp_whoUsers

Posted: 17 Jun 2013 03:32 PM PDT

I need to allow a client in a dev DW SQL 2008 R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row as if the user ran the sp themselves with their current permissions.

USE [master]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[usp_who] with execute as owner
AS
BEGIN
    SET NOCOUNT ON;
    exec master.dbo.sp_who;
END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the below instead of calling sp_who, and it only returns one row.

select * from sysprocesses  

It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions for this seemingly unique problem?

Need to suppress rowcount headers when using \G

Posted: 17 Jun 2013 11:31 AM PDT

Is there a command to suppress the row-count headers and asterisks when using \G to execute a SQL statement? I am executing mysql with the -s and --skip-column-names options, but these don't suppress the row counts.

How to search whole MySQL database for a particular string

Posted: 17 Jun 2013 01:32 PM PDT

Is it possible to search all of a database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table this string is stored. Is that possible?

I am using MySQL.

I am a Linux admin and not familiar with databases; it would be really helpful if you could explain the query as well.
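There is no built-in "search everything" statement; the usual approach is to let the catalog drive the query: list the columns of each table, build one LIKE predicate per column, and repeat for every table. A sketch of the per-table step with SQLite from the Python standard library (in MySQL the column list would come from information_schema.columns instead; the table and data here are made up):

```python
import sqlite3

# Sketch: the catalog drives the search. For each table, list its
# columns, build one LIKE predicate per column, and scan the table.
# In MySQL you would read information_schema.columns and loop over
# all 35 tables the same way.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t1 (a TEXT, b TEXT)')
con.execute("INSERT INTO t1 VALUES ('hello', 'x')")

cols = [r[1] for r in con.execute('PRAGMA table_info(t1)')]
where = ' OR '.join(f'"{c}" LIKE ?' for c in cols)
hits = con.execute(f'SELECT * FROM t1 WHERE {where}',
                   ['%hello%'] * len(cols)).fetchall()
```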

multivalued weak key in ER database modeling

Posted: 17 Jun 2013 12:31 PM PDT

I was wondering about this, since I didn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where both the price and the stock quantity differ per format, so I came up with this:

[ER diagram]

Is this correct from a design perspective? Does it imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

[ER diagram]

Thanks in advance.

EDIT: I'll add some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie the company carries, I need to store format, price, and stock quantity. I will also need to store customer information: a unique id, name, surname, address, the movies he/she has already bought, and a credit card number. Finally, I will have a basket that temporarily holds items (let's suppose items other than movies exist too) that the customer wants to buy.

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 17 Jun 2013 04:32 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past, in Access, I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access DB and link to this same DSN. I get as far as seeing the tables, but after making my selection, I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. The environment is Windows XP, Access 2007, Advantage DB 8.1.

Database Mail to Distribution List

Posted: 17 Jun 2013 11:13 AM PDT

I have a SQL Server 2005 instance on which I have configured Database Mail to set up monitoring alerts and notifications. I also created an email distribution list containing a couple of addresses. The thing is, when I send email from the database to the distribution list, it never reaches the recipients' inboxes, but if I send to the addresses individually, it does.

I tried sending an email from Outlook to the distribution list and it worked fine. I do not understand what is happening. Can you please help me?
