Tuesday, September 10, 2013

Sqlcmd command strips $ out of query variable

Posted: 10 Sep 2013 09:02 PM PDT

What I am trying to do

Restore a database from backup using sqlcmd. Sqlcmd is being called by PowerShell.

What is the issue

The server folder which contains the database backups has a $ inside the name, i.e. Hostname$InstanceName. When I try to restore any database, I get an error and notice it is trying to restore from \\networkshare\Hostname\DatabaseName\Database.bak

You'll notice that everything after the $ got stripped out. I believe this is due to sqlcmd's support of variables (-v parameter). I have tried to use the -x parameter (disable variable substitution) with no success.

This process works fine with folders that do not have $ inside of their names.

Does anyone know what I can try from here, or can you spot what I'm doing wrong?

Thanks

Sample code

    $Source_Server = "Hostname`$InstanceName"
    $FQFile_DatabaseBackup = $Network_Share + "\" + $Source_Server + "\" + $Source_Database + "\Database.bak"
    $Query_RestoreDatabase = "RESTORE DATABASE [$Source_Database] FROM DISK = '$FQFile_DatabaseBackup' WITH RECOVERY, REPLACE"
    Invoke-Expression -Command "sqlcmd -x -S `"$Destination_Server`" -d `"master`" -E -Q `"$Query_RestoreDatabase`""
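In case it helps narrow things down: the -x flag likely isn't at fault, because Invoke-Expression makes PowerShell parse the command string a second time, and on that pass the literal $InstanceName inside the path is expanded as an (undefined) PowerShell variable, i.e. to an empty string. A minimal sketch of a workaround, passing the arguments straight to sqlcmd so nothing gets re-parsed:

    # Sketch: call sqlcmd directly with an argument array (splatting),
    # so the restore query and the $ in the path pass through verbatim.
    $SqlcmdArgs = @(
        '-x',
        '-S', $Destination_Server,
        '-d', 'master',
        '-E',
        '-Q', $Query_RestoreDatabase
    )
    & sqlcmd @SqlcmdArgs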

Delete rows in parent table based on criteria in child table - Oracle

Posted: 10 Sep 2013 08:04 PM PDT

I have a table B with a foreign key to table A. I want to DELETE some rows in table B, and I also want to DELETE their parent rows in table A. However, the delete criteria are based on table B. The rows in table A cannot be deleted first because the references in table B restrict the deletion, but I also have to get the keys of A from the rows being deleted in B.

Here is a SQLFiddle with a sample table structure: http://sqlfiddle.com/#!4/f156c/4/0.

My first inclination was to attempt to save the keys by SELECTing them from B into a variable, and then use that to DELETE from A.

    DECLARE
      A_ID_TO_DELETE DBMS_SQL.NUMBER_TABLE;
    BEGIN
      SELECT A_ID BULK COLLECT INTO A_ID_TO_DELETE
      FROM (SELECT A_ID
            FROM B
            WHERE LENGTH(B_DATA) > 4
           );

      DELETE FROM B
      WHERE LENGTH(B_DATA) > 4;

      DELETE FROM A
      WHERE A_ID IN A_ID_TO_DELETE;
    END;
    /

But that just gives a PLS-00382: expression is of wrong type error. The error itself comes from the DELETE on A; I know this because if I comment it out, the block runs.

How can I either get around the expression is of wrong type error, or what's another way to approach this?

Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod

(Yes, I'm well aware of how old that is. Client's choice of DB, not ours.)
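A sketch of one possible approach (an assumption on my part, not a confirmed fix): in 10g an associative array such as DBMS_SQL.NUMBER_TABLE cannot appear in a SQL IN clause, which is what raises PLS-00382. Collecting the keys with DELETE ... RETURNING and then deleting from A with FORALL keeps the collection inside PL/SQL contexts where it is legal:

    DECLARE
      A_ID_TO_DELETE DBMS_SQL.NUMBER_TABLE;
    BEGIN
      -- Delete the child rows and capture their parent keys in one step.
      DELETE FROM B
      WHERE LENGTH(B_DATA) > 4
      RETURNING A_ID BULK COLLECT INTO A_ID_TO_DELETE;

      -- FORALL binds each element individually, so the associative array
      -- never appears inside a SQL expression. Duplicate A_IDs are
      -- harmless: the second delete simply affects zero rows.
      FORALL i IN 1 .. A_ID_TO_DELETE.COUNT
        DELETE FROM A WHERE A_ID = A_ID_TO_DELETE(i);
    END;
    /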

Recursive update for Tree structure in pgsql

Posted: 10 Sep 2013 08:01 PM PDT

I have the following structure of table.

  
    id  chNum  parentid
    --  -----  --------
    1   1      NULL
    2   1.1    1
    3   1.1.1  2
    4   2      NULL   (add the new row after this one)
    5   3      NULL
    6   4      NULL

After adding a new row in between existing rows, the chNum of this row and all the following rows needs to change. The result should be as follows:

  
    id  chNum  parentid
    --  -----  --------
    1   1      NULL
    2   1.1    1
    3   1.1.1  2
    4   2      NULL
    7   3      NULL   (new row)
    5   4      NULL
    6   5      NULL

Is it possible to do this with a recursive UPDATE query in PL/pgSQL, and is that a faster way to update the data than doing it from the application side?
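A minimal sketch of the renumbering step, assuming the table is named tree and that top-level chNum values are plain integers stored as text (both assumptions, since the real DDL isn't shown). A single set-based UPDATE is generally faster than row-by-row updates from the application:

    -- Make room for a new top-level row at position 3: shift every
    -- top-level chapter number >= 3 up by one.
    UPDATE tree
    SET    chnum = (chnum::int + 1)::text
    WHERE  parentid IS NULL
    AND    chnum ~ '^[0-9]+$'      -- top-level rows only
    AND    chnum::int >= 3;

Renumbering the nested levels ('1.1', '1.1.1', ...) would additionally require rebuilding each prefix from the parent's new chNum, for example with a WITH RECURSIVE query or a PL/pgSQL loop over the levels.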

Get Results from DMV Query into SQL Server Table

Posted: 10 Sep 2013 07:08 PM PDT

Today I began to troubleshoot an issue I was asked to resolve, and for the first time I found myself querying the SSAS database. I obtained a very useful set of results (who was connecting to which applications/cubes, and for how long), but for the life of me I cannot figure out how to get the results from that query into a database table so that I can track the data. Any help would be much appreciated, and thank you in advance for any assistance you are able to provide.
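One common pattern, sketched here under assumptions (the linked-server name SSAS_LINK and the destination table dbo.SessionAudit are both made up): create a linked server against the SSAS instance and pull the DMV rows across with OPENQUERY.

    -- Assumes a linked server SSAS_LINK pointing at the Analysis Services
    -- instance, and a pre-created relational table to receive the rows.
    INSERT INTO dbo.SessionAudit (session_spid, session_user, session_db)
    SELECT SESSION_SPID, SESSION_USER_NAME, SESSION_CURRENT_DATABASE
    FROM OPENQUERY(SSAS_LINK,
        'SELECT SESSION_SPID, SESSION_USER_NAME, SESSION_CURRENT_DATABASE
         FROM $SYSTEM.DISCOVER_SESSIONS');

Scheduling that INSERT in a SQL Server Agent job turns the point-in-time DMV snapshot into a history you can report on.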

Color output in PSQL?

Posted: 10 Sep 2013 04:16 PM PDT

I'm looking for a way to get psql to give me colored output. I love using the command line, but the drab grey is, well, quite a drag...

I did find this page, http://matlads.blogspot.com/2012/09/tweaking-psql-in-color.html, but it's kind of hacky and I would prefer something cleaner.

Anyone have suggestions?
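One low-effort option, as a sketch (tastes vary): psql's prompt supports ANSI escape sequences wrapped in %[ and %], so a couple of lines in ~/.psqlrc already add some color, e.g. a green user@database prompt:

    \set PROMPT1 '%[%033[1;32m%]%n@%/%[%033[0m%]%R%# '
    \set PROMPT2 '%R%# '

Pairing that with a pager that passes color codes through (e.g. PAGER set to 'less -R') keeps things readable when output scrolls.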

if block fails to create temp table in a procedure

Posted: 10 Sep 2013 04:16 PM PDT

I'm attempting to do this in a procedure:

    DECLARE @a bit = 1;
    BEGIN
        SELECT * INTO #aTemp FROM OPENROWSET( ... );

        IF @a = 0
        BEGIN
            SELECT ... INTO #bTemp FROM #aTemp;
        END
        ELSE
        BEGIN
            SELECT ... INTO #bTemp FROM #aTemp;
        END
    END

I get the error:

    Msg 2714, Level 16, State 1, Line 10
    There is already an object named '#bTemp' in the database.

Why is this happening and is there a work around?

Update

I've attempted to add a DROP statement as suggested here, but it still does not work:

    DECLARE @a bit = 1;
    BEGIN
        SELECT * INTO #aTemp FROM OPENROWSET( ... );

        IF @a = 0
        BEGIN
            IF OBJECT_ID('[tempdb]..#bTemp') IS NOT NULL
            BEGIN
                DROP TABLE #bTemp;
            END

            SELECT ... INTO #bTemp FROM #aTemp;
        END
        ELSE
        BEGIN
            IF OBJECT_ID('[tempdb]..#bTemp') IS NOT NULL
            BEGIN
                DROP TABLE #bTemp;
            END

            SELECT ... INTO #bTemp FROM #aTemp;
        END
    END
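The restriction is a parse-time one, not a runtime one: SQL Server compiles the whole batch and sees two SELECT ... INTO #bTemp definitions, so Msg 2714 is raised before the IF is ever evaluated, which is also why the DROP doesn't help. A sketch of the usual workaround (the column lists here are invented, since the real SELECT lists are elided): define #bTemp exactly once with CREATE TABLE, then populate it with INSERT ... SELECT in each branch.

    DECLARE @a bit = 1;

    SELECT * INTO #aTemp FROM OPENROWSET( ... );

    -- One definition of #bTemp, seen by the parser exactly once.
    CREATE TABLE #bTemp (col1 int, col2 varchar(50));

    IF @a = 0
        INSERT INTO #bTemp (col1, col2) SELECT c1, c2 FROM #aTemp;
    ELSE
        INSERT INTO #bTemp (col1, col2) SELECT c3, c4 FROM #aTemp;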

How to normalize data en-masse?

Posted: 10 Sep 2013 08:49 PM PDT

Goal:

Extract address fields from a users table into a separate addresses table linked by foreign key.

I have created the structure below and have millions of rows to deal with. I need to do this once, but relatively quickly (i.e., I can't loop over every row in my Rails code).

The query below does work to extract the address fields into new rows in the addresses table. But I can't figure out how to link the new rows back to the users table that the data came out of.

Structure:

Before:

    User
      login
      address_line_one
      address_line_two
      address_city
      address_state
      address_zip

After:

    User
      login
      address_id

    Address
      id
      line_one
      line_two
      city
      state
      postal_code
      country

What I have so far:

    INSERT INTO addresses (line_one, line_two, city, state, postal_code, country)
    SELECT address_line_one,
           address_line_two,
           address_city,
           address_state,
           address_zip AS zip,
           'US'
    FROM users
    WHERE address_line_one IS NOT NULL ||
          address_line_two IS NOT NULL ||
          address_city     IS NOT NULL ||
          address_state    IS NOT NULL ||
          address_zip      IS NOT NULL

PROBLEM

I don't know how to get the id of each new address row back into the corresponding users.address_id column.
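One set-based way out, sketched under assumptions (MySQL-style SQL to match the || usage above, and an integer users.id primary key): carry the user's key into addresses through a temporary column, stamp users.address_id with a join, then drop the scaffolding.

    -- Temporary link column (dropped again at the end).
    ALTER TABLE addresses ADD COLUMN tmp_user_id INT NULL;

    INSERT INTO addresses (line_one, line_two, city, state, postal_code, country, tmp_user_id)
    SELECT address_line_one, address_line_two, address_city,
           address_state, address_zip, 'US', u.id
    FROM users u
    WHERE address_line_one IS NOT NULL ||
          address_line_two IS NOT NULL ||
          address_city     IS NOT NULL ||
          address_state    IS NOT NULL ||
          address_zip      IS NOT NULL;

    -- Point each user at the address row that came from it.
    UPDATE users u
    JOIN addresses a ON a.tmp_user_id = u.id
    SET u.address_id = a.id;

    ALTER TABLE addresses DROP COLUMN tmp_user_id;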

Wrong Last Modified Date of .ndf file on SQL Server

Posted: 10 Sep 2013 12:51 PM PDT

I have a SQL Server database with many tables. To improve performance, I created different filegroups, with files on different disks, just for indexes. Today I created a new index in one of these filegroups. However, the file's last-modified date is still a month in the past. I have checked the file names, and I am worried that SQL Server is not updating the last-modified date. The new index works fine, but the file still does not show a more recent modified date.

OK, I have just learned something new: none of the files in SQL Server's data directory show a recent modified date, even though my database is continuously updated. All the dates are a month old. The server's system clock is correct.
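For what it's worth, this is normal: SQL Server holds its data files open, and Windows typically only refreshes the last-modified timestamp when the file handle is closed (e.g. when the database goes offline or the service restarts). To confirm writes really are happening, a sketch using the I/O stats DMV (MyDatabase is a placeholder name):

    -- Cumulative write counts per file since the last service start.
    SELECT DB_NAME(database_id) AS database_name,
           file_id,
           num_of_writes,
           num_of_bytes_written
    FROM sys.dm_io_virtual_file_stats(DB_ID(N'MyDatabase'), NULL);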

To EAV or not to EAV?

Posted: 10 Sep 2013 12:57 PM PDT

I am working on an application which records data from different forms. New forms can (and will) be constructed dynamically by an admin. I am wondering what the correct way to save this data in a database would be. My first guess would be to use EAV, but considering all the bad publicity it gets I am a bit hesitant.

Let's say I have two forms:

    # form 1
    description = TextField()
    score = IntegerField()

    # form 2
    feedback = TextField()
    NOR_score = IntegerField()

There is no way of predicting the number of possible attributes. I am playing around with two ideas, but maybe you'll be able to suggest a much better approach.

Idea I:

    unique_id | entity | attribute   | value
    1         | form 1 | description | test123
    2         | form 1 | score       | 90
    3         | form 2 | feedback    | blabla
    4         | form 2 | NOR_score   | 5

Idea II:

    unique_id | entity | value
    1         | form 1 | {"description": "test123", "score": "90"}
    2         | form 2 | {"feedback": "blabla", "NOR_score": "5"}

Thanks in advance for any suggestions.

Cannot shrink transaction log

Posted: 10 Sep 2013 08:36 PM PDT

Bit of a headscratcher here.

I have a db that has less than 1GB of data, but a 40GB log file. Transaction logs are backed up daily, and there is not a lot of activity on this database; approximately once a week it records new payroll information and then regurgitates this data for reporting purposes. The database is set to Auto Shrink.

running sp_spaceused @updateusage = true yields the following information:

    database_name   database_size  unallocated space
    PayrollImports  39412.06 MB    105.00 MB

    reserved   data       index_size  unused
    321728 KB  278640 KB  42816 KB    272 KB

running DBCC shrinkfile (N'PayrollImports_log', 1 , notruncate) yields the following:

    DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
    19    2       4991088      3456         4991088    3456

...the discrepancy between the UsedPages and the EstimatedPages is mystifying, but I continue with DBCC shrinkfile (N'PayrollImports_log', 1 , truncateonly) and get:

    DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
    19    2       4991088      3456         4991088    3456

Nothing has changed at this point. The log file is still 40GB. So I think, maybe I have some open transaction. Running dbcc opentran should verify:

    No active open transactions.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Crap. Well maybe my indexes are fragmented. I will defrag them with sp_msForEachTable 'DBCC indexdefrag([PayrollImports], ''?'')' and try to shrink again:

    DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
    19    2       4991088      3456         4991088    3456

Still nothing has changed. Ok, how about I reindex with sp_msForEachTable 'DBCC dbreindex([?])'?

    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    (the same line repeated, once per table)

...and now we get:

    DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
    19    2       4991088      3456         4991088    3456

No change. All right, how about sp_msForEachTable 'ALTER INDEX ALL ON [PayrollImports].[?] REBUILD WITH (FILLFACTOR = 10)'?

Immediately, this fails with:

Cannot find the object "(One of my tables)" because it does not exist or you do not have permissions.  

Huh? It's there, all right. I do a select top 10 * from (My table) and it comes up empty. Well, that's not right at all; this is a lookup table that should have over 200 rows. Is this a data corruption issue, maybe? I pull the data from my development environment and re-insert it.

But I am out of ideas. I cannot shrink this thing. What else can I try? Why are my UsedPages so much higher than my EstimatedPages? What is going on here?
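A diagnostic worth running before any further shrink attempts (a sketch, not part of the original post): sys.databases reports why SQL Server is refusing to release log space, which usually explains a 40GB log far better than index fragmentation does.

    -- log_reuse_wait_desc names the blocker: LOG_BACKUP,
    -- ACTIVE_TRANSACTION, REPLICATION, etc.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'PayrollImports';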

Adding a replication article manually & failing to generate new replication sprocs

Posted: 10 Sep 2013 09:02 AM PDT

I've got a large number of articles I need to add to a new replication transactional publication in a dynamic fashion, but appear to be running into an error every time I switch replication on.

Specifically, the error I'm getting is that the publication is telling me that the replication stored procedures aren't there, despite me telling it to specifically create them.

I don't believe this is a security thing as the dev server I'm working on is completely open and basically everyone/everything has sysadmin access.

From my investigations, it appears that when I'm running sp_addarticle from a dynamic SQL string within a cursor, even though I've explicitly set the @schema_option to create the replication sprocs, it's failing to do so.

For reference, I'm doing things in the following order:

  1. Create the publication in an inactive state
  2. Configuring it accordingly (adding all the necessary options/security settings, including the publication snapshot)
  3. Inside a loop, adding each article to the publication sequentially with the following settings.

    @type = N'logbased'
    @schema_option = 0x000000000803509F
    @identityrangemanagementoption = N''manual''
    @destination_table = N''' + @Table + '''
    @destination_owner = N''' + @Schema + '''
    @vertical_partition = N''false''
    @ins_cmd = N''CALL sp_MSins_' + @Table + '''
    @del_cmd = N''CALL sp_MSdel_' + @Table + '''
    @upd_cmd = N''SCALL sp_MSupd_' + @Table + '''
    @status = 0
  4. Inside a second loop

    Change the publication status to active
    Add the subscriber (sp_addsubscription)
    Add the push agent (sp_addpushsubscriptionagent)
  5. Inside a third loop, change the article status to 'parameters' (MSDN says this is the default, so I'm assuming this is correct)

    SELECT @sql = 'exec sp_changearticle @publication = N''' + @Publication + ''',
        @article = N''' + @Schema + '-' + @Table + ''', @property = ''status'', @value = ''parameters'''
    EXEC (@SQL)

I've had a look through all the options, but I'm honestly starting to go a bit code-blind, so can anyone help me identify why this isn't working?

Thanks.
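One avenue worth ruling out (an assumption, not a confirmed diagnosis): the custom INS/UPD/DEL procedures are normally delivered to the subscriber when the snapshot is applied, so adding articles with @status = 0 and activating the publication later can leave a window where they don't yet exist. SQL Server ships a helper that scripts them out so they can be created manually (MyPublication is a placeholder):

    -- Run at the publisher; execute the generated script at the subscriber.
    EXEC sp_scriptpublicationcustomprocs @publication = N'MyPublication';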

Migrating a tablespace to a different database on the same ASM

Posted: 10 Sep 2013 07:57 AM PDT

How might one go about copying or moving tablespaces between databases that use the same ASM instance?

Might it be possible to cut some corners in RMAN or have ASM remap files belonging to a transportable tablespace?
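For reference, the documented route is RMAN's TRANSPORT TABLESPACE, which builds a transportable set from backups without touching the live datafiles; a sketch with placeholder names and staging paths:

    RMAN> TRANSPORT TABLESPACE my_ts
            TABLESPACE DESTINATION '/stage/tts'
            AUXILIARY DESTINATION '/stage/aux';

The resulting set is then plugged into the target database with Data Pump. Whether the datafile copies can stay inside ASM end to end (e.g. moved with DBMS_FILE_TRANSFER rather than staged on a filesystem) depends on the version, so treat this as a starting point rather than a confirmed shortcut.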

Oracle Materialized View Refresh Group - Montly Interval

Posted: 10 Sep 2013 08:09 AM PDT

I have a few Materialized Views that I need to update weekly and monthly.

So for the weekly I'm using:

    execute DBMS_REFRESH.MAKE(
      name      => 'LPMVRefresh_Weekly_1',
      list      => 'mySchema.MV1_mv,mySchema.MV2_mv',
      next_date => TO_DATE('2013-09-08:00:30:00','YYYY-MM-DD:HH24:MI:SS'),
      interval  => 'sysdate+7');

How can I make them refresh monthly, say on the 1st of each month at 1 AM? Is there something like interval => 'sysdate+monthly'?
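A sketch of one way to express this: the interval parameter is a string that Oracle re-evaluates after each refresh, so any date expression works. ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 1) is midnight on the first of the next month, and + 1/24 pushes it to 1 AM (the group name and MV list below are placeholders):

    execute DBMS_REFRESH.MAKE(
      name      => 'LPMVRefresh_Monthly_1',
      list      => 'mySchema.MV3_mv',
      next_date => TO_DATE('2013-10-01:01:00:00','YYYY-MM-DD:HH24:MI:SS'),
      interval  => 'ADD_MONTHS(TRUNC(SYSDATE,''MM''),1) + 1/24');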

In what order does postgresql handle queries which have conflicting locks?

Posted: 10 Sep 2013 10:53 AM PDT

Let's say that I have a long-running SELECT query which holds an AccessShareLock on a table. While this query is being processed, a new command (REINDEXDB) which needs an AccessExclusiveLock on the table is issued. I suppose this command must wait for the first query to finish, because they hold conflicting locks. And while my REINDEXDB is waiting for the SELECT query to finish, a second SELECT query is made. Now my question is:

Which will be serviced first, the REINDEXDB or the second SELECT? The second SELECT was issued later, but it doesn't hold a conflicting lock with the ongoing SELECT query. So will they be processed in parallel? Or, because the REINDEXDB request was made earlier, will the second SELECT wait for the REINDEXDB to finish?
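As far as I know, PostgreSQL grants conflicting locks in queue order, so a later SELECT that conflicts with a waiting ACCESS EXCLUSIVE request queues behind it rather than jumping it (otherwise the REINDEX could starve forever). Either way, the queue can be observed directly while the three commands are in flight; a sketch (mytable is a placeholder):

    -- Run from a fourth session; granted = false marks queued requests.
    SELECT pid, locktype, mode, granted
    FROM pg_locks
    WHERE relation = 'mytable'::regclass;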

Partition function with only one value: what does it affect?

Posted: 10 Sep 2013 09:01 AM PDT

I am currently looking at a database that has been setup by the default installation routine of AppFabric. Looking at the creation scripts I encountered this partition function:

    CREATE PARTITION FUNCTION [MSAppServerPF_ASWcfEventsTable](datetime2(7))
    AS RANGE RIGHT FOR VALUES (N'0001-01-01T00:00:00.000')
    GO

And it is being applied to a table that way:

    CREATE PARTITION SCHEME [MSAppServerPS_ASWcfEventsTable]
    AS PARTITION [MSAppServerPF_ASWcfEventsTable] TO ([PRIMARY], [PRIMARY])
    GO

I wonder what this specific partitioning setup achieves. I was under the impression that you store boundary values in the partition function and the data is partitioned by them. Unfortunately, I could not find a similar example on the web using just one DATETIME2 value in the partition function definition.

Can someone shed some light on this for me?
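For what it's worth: a RANGE RIGHT function with one boundary always defines exactly two partitions, and with the datetime2 minimum as the boundary, partition 1 can only ever receive NULLs, so in practice every row lands in partition 2. That makes the table effectively unpartitioned but already wired for partitioning, presumably so boundaries can later be SPLIT without rebuilding the table. A quick sketch to verify where values land:

    -- Returns 2 for any real timestamp: everything sorts at or above
    -- the 0001-01-01 boundary.
    SELECT $PARTITION.MSAppServerPF_ASWcfEventsTable(SYSUTCDATETIME()) AS partition_number;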

MySQL gives me: "Can't open and lock privilege tables: Table 'host' is read only"

Posted: 10 Sep 2013 08:44 PM PDT

I am facing a problem restoring a MySQL database. My primary database was MySQL 5.1, and now I am trying to copy it to MySQL 5.5. The database was backed up using Xtrabackup.

I am using Ubuntu 12.04.3 LTS on this server, MySQL version is: 5.5.32-0ubuntu0.12.04.1-log.

I have followed all the steps to restore using Xtrabackup; this produced the database files, which I have copied to a tmp directory.

I have modified my.cnf to point to this tmp directory. I have changed the tmp directory permissions and changed the ownership of the files to mysql user.

drwxr-xr-x 12 mysql mysql 4096 Sep 10 10:04 base  

Now when I start the MySQL server I get this error:

[ERROR] Fatal error: Can't open and lock privilege tables: Table 'host' is read only

I have tried the following:

  1. I even tried installing MySQL 5.1 to see if that was the issue.
  2. I tried chcon mysql_db_t to change the context, but it gives me:

    can't apply partial context to unlabelled file

  3. I have used --skip-grant to get into the database, but with this I can only access InnoDB tables; MyISAM tables throw the read-only error.

  4. After --skip-grant, I ran mysql_upgrade. This throws errors stating that many tables are read-only.
  5. I have also removed AppArmor, and restarted.
  6. I have restored a different database (5.1 to 5.5) previously on Ubuntu 12.04.2 LTS without any issues.

Can someone please point me in the right direction? I am not sure what's wrong with the permissions.

Cannot set ArithAbort to Off on SQL Server

Posted: 10 Sep 2013 12:11 PM PDT

I have 2 SQL Server 2012 servers. The settings are exactly the same, but on one of them I can set ARITHABORT to OFF, while no matter what I try I can't do the same on the second server. The only difference I could find is that the server I can set it on runs an older build of SQL Server 2012. Has something changed? Or is there something else I'm missing?
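For reference, the session-level statement itself is trivial, so a first check (a sketch, not a diagnosis) is whether the two servers differ in the 'user options' bitmask that forces connection defaults; bit 64 of that value corresponds to ARITHABORT:

    -- Session level: this alone should work on any server.
    SET ARITHABORT OFF;

    -- Compare the server-wide connection defaults on both machines.
    EXEC sp_configure 'user options';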

Where can I find the connection string for a DB2 database?

Posted: 10 Sep 2013 07:54 AM PDT

I need to connect an application to my DB2 database. Both application and db are running on the same machine.

I've tried this:

  Server=localhost:50000;Database=myDataBase;UID=myUsername;PWD=myPassword;  

I'm getting the following error:

CLI0124E Invalid argument value. SQLSTATE=HY009

Am I using the wrong connection string?

Edit: I've also tried the following format:

  Provider=IBMDADB2;Database=urDataBase;Hostname=urServerAddress;Protocol=TCPIP;Port=50000;
  Uid=urUsername;Pwd=urPassword;

Same result.
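One more format worth trying (a sketch; whether it applies depends on which driver the application actually loads): the DB2 CLI keyword style, which spells out Hostname, Port, and Protocol instead of embedding the port in the server name:

  Database=myDataBase;Hostname=localhost;Port=50000;Protocol=TCPIP;Uid=myUsername;Pwd=myPassword;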

Assign values with the := operator

Posted: 10 Sep 2013 02:55 PM PDT

When I assign a variable with

result := title || '', by '' || author;  

it is taking more time (approx. 15 sec) to run the function.
However, when I assign the variable with

result = title || '', by '' || author;  

it takes only 133ms.

Why does the first form take so much longer to run? What is the reason behind this difference?

Full function is given below.

    CREATE OR REPLACE FUNCTION myschema.fn_get_res_no(reservation_no character varying)
      RETURNS character varying AS
    $BODY$
    DECLARE
        emd_status_firstcall  varchar(2);
        emd_status_secondcall varchar(2);
        emd_status            varchar(6);
    BEGIN
        SELECT firstwscomplete, secondwscomplete
        INTO emd_status_firstcall, emd_status_secondcall
        FROM myschema.mytable
        WHERE respkgconfirmid = reservation_no;

        emd_status = emd_status_firstcall || ', ' || emd_status_secondcall;

        RETURN emd_status;
    END;
    $BODY$
    LANGUAGE plpgsql VOLATILE;

Software for working with SQL Server databases in Windows Server 2008 [closed]

Posted: 10 Sep 2013 11:15 AM PDT

We have an ASP.NET project that uses SQL Server, which we host on a Windows Server 2008 machine. Where does Microsoft keep its documentation for the tools that can be used to administer SQL Server databases? I'm not sure where to start looking for the software to manage our DB.

Moving away from CmdExec steps in job without losing logging detail (SQL Server 2008)

Posted: 10 Sep 2013 07:20 PM PDT

My department currently relies heavily on CmdExec steps in SQL Server Jobs, called using a SQL Server Login.

For example

osql /e /n /S SERVER01 /U USER /P PA$$$ /d DATABASE_01 /i "\\LOCATION\OF\SQL\SCRIPT\scriptfile.sql" /o "\\LOCATION\OF\SQL\LOG\scriptfile.log"  

For a lot of reasons (not least security and the expectation that this method will become obsolete) I'd like us to move away from this model and replace with a mixture of stored procedures and SSIS packages.

I've proposed this several times and I'm always told that we can only move in this direction if we can recreate the detail of logging possible by the method above, which our department has come to rely on quite heavily. The method above outputs the contents of the script fed in along with any server messages including all rowcounts and error messages, in context.

This does admittedly mean that problems which wouldn't count as errors for SQL Server (unexpectedly low row counts, etc.) are far more easily picked up. You could put in as much error handling as you liked and never get something as easy to follow as the logging from this method, so I can see this side of the argument.

To sell my boss on a move away from cmdexec steps I'd like to find a method of recreating something like the output of the logging that cmdexec currently gives us - perhaps to SQL tables rather than files (although either would be fine) - in SSIS packages and stored procedures.

I'm familiar with using raiserror for error handling - I use this extensively in stored procedures - but the task here is not just to catch SQL errors, but to catch all that cmdexec logging catches and ideally in context of the script being executed... Short of writing a stored procedure and executing it after every step in every script I can't see a way of doing this.

Also I'm keen to hear about best practice approaches to logging for people who don't use cmdexec steps. I've learnt SQL in this department and this is the only way I've ever known of doing things.

Increasingly, for my own development, I try to write stored procs with error handling. And if I come across issues that don't throw errors, I do my troubleshooting by rerunning the code interactively on a dev server so I can monitor the step-by-step information. This is less convenient than the full logging we get from CmdExec, but more secure.

Do other people do more than this? Is there a better way?
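As a concrete starting point, one common pattern (a sketch under assumptions: dbo.JobLog and the purge statement below are invented) is a small logging table that each procedure writes to in line, capturing rowcounts in context much as the osql transcript does:

    CREATE TABLE dbo.JobLog (
        log_id    int IDENTITY(1,1) PRIMARY KEY,
        logged_at datetime2 NOT NULL DEFAULT SYSDATETIME(),
        step_name sysname   NOT NULL,
        message   nvarchar(2048) NULL,
        row_count int NULL
    );

    -- Inside a stored procedure, immediately after each statement;
    -- @@ROWCOUNT still holds the previous statement's count here.
    DELETE FROM dbo.StagingTable WHERE load_date < '20130101';

    INSERT INTO dbo.JobLog (step_name, message, row_count)
    VALUES (N'purge_staging', N'old staging rows removed', @@ROWCOUNT);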

Meaning of "nonclustered located on primary"

Posted: 10 Sep 2013 05:50 PM PDT

When I run sp_help (using ALT+F1) for a table, the index section includes a column index_description, and its rows read, for example:

  • nonclustered located on PRIMARY (on foreign keys)
  • nonclustered, hypothetical located on PRIMARY (on non clustered indexes I created)
  • clustered, unique, primary key located on PRIMARY

I am most interested in the 'located on PRIMARY' part. What does it mean?
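For context, a sketch (index and filegroup names invented): PRIMARY here is the name of the default filegroup, i.e. where the index's pages are physically stored. An index can be placed on a different filegroup explicitly:

    -- sp_help would then report this index as "located on INDEX_FG".
    CREATE NONCLUSTERED INDEX IX_MyTable_SomeColumn
        ON dbo.MyTable (SomeColumn)
        ON [INDEX_FG];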

InnoDB Failure of some kind

Posted: 10 Sep 2013 02:20 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla but it failed. I went into their SQL, replaced ENGINE=InnoDB with MyISAM, and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

    MySQL ERROR No: 1030
    MySQL Error: Got error -1 from storage engine
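Error 1030 with storage-engine error -1 usually means InnoDB started in a degraded state rather than not at all; some first diagnostics, as a sketch (nothing Joomla-specific):

    -- Confirm InnoDB really is usable, not just listed:
    SHOW ENGINES;

    -- A value > 0 here can leave InnoDB effectively read-only:
    SHOW VARIABLES LIKE 'innodb_force_recovery';

    -- The status output (and the server error log) name the actual failure:
    SHOW ENGINE INNODB STATUS;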

MySQL is running but not working

Posted: 10 Sep 2013 04:20 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now none of my PHP/MySQL websites can connect to their DBs.

Here is why it is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql from SSH using root but I see that mysql activity is reduced, look:

[root@server mysql]# mysqladmin processlist

    +-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
    | Id  | User      | Host      | db        | Command        | Time | State              | Info             |
    +-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
    | 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
    | 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
    | 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
    | 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
    | 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
    +-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

My websites using MySQL almost all say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

    [mysqld]
    #datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    #pid-file=/var/lib/mysql/mysqld.pid
    #skip-bdb
    #skip-innodb
    #err-log=/var/log/mysql.log
    #bind-address = server.escalehost.com
    log-bin = /var/lib/mysql/log-bin

    #innodb_buffer_pool_size=2M
    #innodb_additional_mem_pool_size=500K
    #innodb_log_buffer_size=500K
    #innodb_thread_concurrency=2
    #max_connections=125
    #table_cache=2500
    #thread_cache_size=100
    #thread_stack=250K
    #wait_timeout=10
    #join_buffer=5M
    #myisam_sort_buffer_size=15M
    #query_cache_size=15M
    #read_rnd_buffer_size=5M
    max_allowed_packet = 64M
    #open_files_limit=8602

    #[client]
    #port           = 3306
    #socket=/var/lib/mysql/mysql.sock

    #[mysql.server]
    #user=mysql
    #basedir=/var/lib

    [mysqld_safe]
    #socket=/var/lib/mysql/mysql.sock
    #err-log=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

I commented out most of it to return it to its simplest version... How can I get the websites to connect to MySQL again?
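The "using password: NO" in the error suggests the site is connecting without a password rather than MySQL being down, so a first check from the working root session (a sketch; website_USER is the account named in the error above):

    -- Does the account still exist for the host the site connects from?
    SELECT user, host FROM mysql.user WHERE user = 'website_USER';

    -- And what is it allowed to do?
    SHOW GRANTS FOR 'website_USER'@'localhost';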

Mistake during Oracle 11g PITR

Posted: 10 Sep 2013 12:20 PM PDT

I tried using set until time... and mistyped the date. Can anyone help me understand how to get my backups back into a manageable state?

After the accidental recovery, most of my backup sets disappeared.

I recovered them and used 'catalog recovery area'... and they're listed in 'list backupset'. But something still isn't right.

When I do a PITR now, I get messages that my .dbf files aren't available, and 'list backupset' does show backup sets, but they are listed differently from the files that weren't included in the 'bad' recovery.

Gists with the error and the list of backupsets are here https://gist.github.com/akinsgre/5561254
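A common first step to reconcile RMAN's metadata with what is actually on disk (a sketch; run against the target database before the next restore attempt):

    RMAN> CROSSCHECK BACKUP;
    RMAN> DELETE EXPIRED BACKUP;
    RMAN> LIST BACKUP SUMMARY;

CROSSCHECK marks catalog entries whose pieces are missing as EXPIRED; deleting those and re-running CATALOG RECOVERY AREA should leave LIST BACKUP reflecting only restorable sets.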

Can't connect to SQL Server Windows 7

Posted: 10 Sep 2013 10:20 AM PDT

TITLE: Connect to Server

Cannot connect to localhost.


ADDITIONAL INFORMATION:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476

I keep getting this error when trying to connect to SQL Server. I also ran a repair and got the error shown below.

[screenshot of the repair error]

Could this be the problem, and do you have any idea how I can repair it? I have looked all over Google and tried quite a few solutions with no luck.

I am using SQL Server 2008 R2 Developer Edition.

Empty LONGTEXT or NULL?

Posted: 10 Sep 2013 11:20 AM PDT

In my MySQL DB I have one field called html_contents that contains all the HTML to be shown in a webpage. Obviously the HTML can be huge, and certainly bigger than 64KB, so I decided to use LONGTEXT rather than VARCHAR.

When the field is not set or left empty, would you say it is better (i.e., more efficient for the DB) to set it to NULL or to an empty string?

I read this: When to use NULL and when to use an empty string? but it talks about empty strings in general (probably small strings and not LONGTEXT).

I was wondering whether it is a different story with LONGTEXT, i.e. whether using NULL instead of leaving empty LONGTEXT values around saves a lot of space or execution time.

Best embed/reference/field strategy for many inserts (mongo)

Posted: 10 Sep 2013 08:20 AM PDT

I'm building a gaming backend with Mongo, and have some issues on how to best design the schema to maximize performance and database size.

My models:

User

Match
  - ReferenceMany (User)
  - ReferenceMany (Score)

Score
  - ReferenceOne (Match)
  - ReferenceOne (User)

It takes 2 users to start a match. Say we get 10,000 users and they all play one match each day against another user: that's 5,000 games a day.

Each match has three turns, which gives 6 scores, so for 5,000 games we get 30,000 scores. These scores can be inserted simultaneously (in each game), so I have to make sure one user doesn't overwrite another user's score. I believe I've solved that by having the scores in their own collection and embedding them on the match, like so:

    $score = new Score();

    $score->setUser($user);
    $score->setScore($playerScore);
    $score->setGame($game);
    $score->setMatch($match);

    // Save score.
    $dm->persist($score);

    // Add score to match.
    $match->addScores($score);

My issue with this approach, though, is that it takes 6 queries just to insert a score (there are some validating queries before the above code). With 30,000 scores a day, that's a whole lot of queries.

My initial thought was to just add the scores to the match as an array, but what will happen if two users submit at the exact same time, updating the same match?

I'd be really interested in hearing some opinions on this. Let me know if I need to clarify anything. I'm using Symfony2 with Doctrine ODM if that helps.
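On the embedded-array worry specifically: MongoDB updates a single document atomically, so a $push never loses a concurrent write the way a read-modify-write of the whole array would. A sketch in mongo shell syntax (collection and field names are placeholders):

    // Two users pushing into the same match concurrently both land;
    // neither overwrites the other's score.
    db.matches.update(
        { _id: matchId },
        { $push: { scores: { user: userId, score: playerScore, turn: 1 } } }
    );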
