Thursday, October 10, 2013

Declare variable mysql

Posted: 10 Oct 2013 09:11 PM PDT

I am new to MySQL; I have been using SQL Server in the past. I am trying to insert rows into a table in batches via a loop, but I'm struggling with it. Any help?

DECLARE @minid INT = (SELECT MIN(Product_ID) FROM Product_Report_Offer_Interaction_Jan_2012 pn)
DECLARE @maxid INT = (SELECT MAX(Product_ID) FROM Product_Report_Offer_Interaction_Jan_2012 pn)
DECLARE @topid INT = @minid + 50000

BEGIN
    WHILE (@minid < @maxid)
    BEGIN
        INSERT INTO Product_Report_Offer_Jan_2012
            (Month, Product_ID, Offerview, EnlargeImage, DynamicPopUp, ShareonFacebook,
             BuyNowPress, Sendtofriend, Printed, StoreLocated, uLike, iPhoneOfferDetails,
             iPhoneStoreLocated, WishlistAd, Tweet, Catalogue1pageview, Catalogue2pageview,
             SearchListing, Featured, RelatedAdImpression, Wishlist)
        SELECT
            pn.Month, pn.Product_ID, pn.Offerview, pn.EnlargeImage, pn.DynamicPopUp,
            pn.ShareonFacebook, pn.BuyNowPress, pn.Sendtofriend, pn.Printed, pn.StoreLocated,
            pn.uLike, pn.iPhoneOfferDetails, pn.iPhoneStoreLocated, pn.WishlistAd, pn.Tweet,
            pm.Catalogue1pageview, pm.Catalogue2pageview, pm.SearchListing, pm.Featured,
            pm.RelatedAdImpression, pm.Wishlist
            -- select count(*)
            -- select *
        FROM Product_Report_Offer_Interaction_Jan_2012 pn
        LEFT JOIN Product_Report_Offer_Impression_Jan_2012 pm
            ON pn.product_ID = pm.product_ID
        WHERE pn.product_id != '0'
          AND pn.Product_ID BETWEEN @minid AND @topid

        SET @minid = @minid + 50000 + 1
        SET @topid = @topid + 50000 + 1
    END
END

Thanks jakub
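
For reference: MySQL only allows DECLARE inside a stored program (a procedure, function, trigger, or event), and the T-SQL @variable syntax above will not parse. A minimal sketch of the same batching loop as a MySQL stored procedure, assuming the tables above; the procedure name is made up here, and the long column lists are abbreviated to the first few:

    DELIMITER //
    CREATE PROCEDURE copy_offer_report_jan_2012()  -- hypothetical name
    BEGIN
        DECLARE v_minid INT;
        DECLARE v_maxid INT;
        DECLARE v_topid INT;

        -- MySQL's DECLARE takes no inline subquery; populate with SELECT ... INTO
        SELECT MIN(Product_ID), MAX(Product_ID)
          INTO v_minid, v_maxid
          FROM Product_Report_Offer_Interaction_Jan_2012;

        SET v_topid = v_minid + 50000;

        WHILE v_minid < v_maxid DO
            INSERT INTO Product_Report_Offer_Jan_2012
                (Month, Product_ID, Offerview /* ...rest of the column list from the question... */)
            SELECT pn.Month, pn.Product_ID, pn.Offerview /* ...rest of the select list... */
              FROM Product_Report_Offer_Interaction_Jan_2012 pn
              LEFT JOIN Product_Report_Offer_Impression_Jan_2012 pm
                ON pn.Product_ID = pm.Product_ID
             WHERE pn.Product_ID != '0'
               AND pn.Product_ID BETWEEN v_minid AND v_topid;

            SET v_minid = v_minid + 50001;
            SET v_topid = v_topid + 50001;
        END WHILE;
    END //
    DELIMITER ;

    CALL copy_offer_report_jan_2012();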

Database schema for record belonging to another record or its sub-record

Posted: 10 Oct 2013 06:22 PM PDT

I have two tables, buildings and tenants. The relationship is one-to-many: a building has many tenants. Here is the schema:

CREATE TABLE buildings
(
    id serial NOT NULL,
    name varchar NOT NULL
);

CREATE TABLE tenants
(
    id serial NOT NULL,
    building_id integer NOT NULL,
    name varchar NOT NULL
);

I'm trying to implement another table called images that could contain images of either the building or the tenants.

CREATE TABLE images
(
    building_id int NOT NULL,
    tenant_id int NOT NULL,
    url varchar NOT NULL
);

In this case, the records in the images table will be as follows:

1 | 0 | img-0001.jpg
1 | 5 | img-0002.jpg

img-0002.jpg belongs to tenant 5 of building 1 and img-0001.jpg belongs to building 1.

Any comments or improvement for such a design?
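
One common improvement, offered here as a sketch rather than the canonical answer (it assumes id is, or is made, the primary key of buildings and tenants): make tenant_id nullable instead of using 0 as a magic value, add real foreign keys, and let NULL mean "the image belongs to the building itself":

    CREATE TABLE images
    (
        id serial PRIMARY KEY,
        building_id integer NOT NULL REFERENCES buildings (id),
        tenant_id integer REFERENCES tenants (id),  -- NULL means the image is of the building
        url varchar NOT NULL
    );

This keeps referential integrity intact (a tenant_id of 0 cannot reference a real row), and queries for building-level images become WHERE tenant_id IS NULL.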

Permissions different between test and live databases

Posted: 10 Oct 2013 05:22 PM PDT

We have two database servers, Test and Live. We restore the Live database back to Test periodically. We have noticed that a SQL user has access to execute a stored proc on Live but not on the Test database, and we can't work out why there is a difference. (Note: the user should be able to execute the stored proc.)

We have looked at the user's properties and they are in the same groups, and the groups look to be the same on both servers. However, when I look at the Effective Permissions on a particular stored proc, the user has permission to EXECUTE on Live but not on Test:

[Screenshot: SQL permissions, with Live on the left]

How can I find out what is giving access for this user to have permission to Execute on Live?
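
One way to chase this down is to compare, on each server, the direct grants on the procedure with the role memberships that could carry an EXECUTE grant. A sketch using the standard catalog views, with dbo.TheProc standing in for the real procedure name:

    -- Direct permissions granted on the procedure
    SELECT pr.name AS grantee, pe.permission_name, pe.state_desc
    FROM sys.database_permissions AS pe
    JOIN sys.database_principals AS pr
        ON pe.grantee_principal_id = pr.principal_id
    WHERE pe.major_id = OBJECT_ID('dbo.TheProc');

    -- Role memberships in the database (a role may hold the grant instead)
    SELECT r.name AS role_name, m.name AS member_name
    FROM sys.database_role_members AS rm
    JOIN sys.database_principals AS r ON rm.role_principal_id = r.principal_id
    JOIN sys.database_principals AS m ON rm.member_principal_id = m.principal_id;

Running both against Live and Test and diffing the output usually shows which grant or membership didn't survive the restore.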

Syntax for restoring SQL Server backup from non-disk source

Posted: 10 Oct 2013 05:06 PM PDT

I have backed up and restored many a SQL Server database using T-SQL script, but always FROM DISK or TO DISK. I am not a provisioning-concerned database administrator, though, so don't have as much experience with the full range of available backup types.

Now I'm working with a database new to me, where the physical_device_name from the msdb.dbo.backupmediafamily table for the full backups is a GUID followed by a number, e.g. {F1554525-23BC-48A2-89E0-4069C2D3ACF3}1. How do I use this? Is it a tape? Or a physical device name? I have searched online for references to using a GUID in restores, or to restoring FROM TAPE, and can't seem to find any examples.

Unfortunately, at this moment I don't have on-demand access to the server where the backup is, and the server doesn't have enough disk space for an extra copy of the database anyway. But when I DO get into this situation again, I'd like to know the correct syntax.

RESTORE DATABASE DBName FROM {F1554525-23BC-48A2-89E0-4069C2D3ACF3}1; doesn't look like good syntax to me.

RESTORE DATABASE DBName FROM TAPE = '{F1554525-23BC-48A2-89E0-4069C2D3ACF3}1'; looks more likely. Is it a tape device?

Or is there some additional set of values I need from the backupmediafamily table that will help me know how to write the restore statement (and a RESTORE FILELISTONLY statement, too)?
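
The device_type column in msdb.dbo.backupmediafamily should settle this (per the documentation, 2 = disk, 5 = tape, 7 = virtual device); a GUID-style physical_device_name typically indicates a virtual device, i.e. a backup taken by a third-party or VSS-based tool rather than a native disk or tape backup. A quick check, as a sketch (DBName is a placeholder):

    SELECT bs.database_name,
           bs.backup_finish_date,
           bmf.physical_device_name,
           bmf.device_type        -- 2 = disk, 5 = tape, 7 = virtual device
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bs.media_set_id = bmf.media_set_id
    WHERE bs.database_name = 'DBName'
    ORDER BY bs.backup_finish_date DESC;

If device_type comes back as 7, the backup was written through the Virtual Device Interface and is normally restored through the same tool that created it, not via a FROM DISK or FROM TAPE clause.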

Entering Images url faster in mysql database

Posted: 10 Oct 2013 02:45 PM PDT

First, I would like to say I don't have much experience with entering data into a database, so my wording might be a bit off.

I'm working on adding products into my database for a shopping cart. I'm using OpenCart for the shopping cart.

My question is: how can I edit a large number of MySQL fields without doing it manually?

I have edited in all of the products information, like model id, manufacturer, and so on, but not the images because of how slow entering in the images are on OpenCart.

So I'm wondering: is there any way to import all of the image URLs quickly? The images are already uploaded to the server; all I have to do is fill in the image field for each product. I have saved each product's images under its model id. For example, one product's model is 1194, so the image is 1194.jpg. I have also organized the images by manufacturer, for example data/manufacturer/1194.jpg. Is there a way to quickly add all the URLs by reading the model id and inserting it into the URL?

like: insert 'image' = 'data/manufacturer/$model.jpg'
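
If the path can be derived entirely from the model column, a single UPDATE can fill the field for every product at once. A sketch, assuming OpenCart's product table (often prefixed, e.g. oc_product) with its model and image columns, and with the literal data/manufacturer/ folder as a placeholder to adjust:

    UPDATE oc_product
       SET image = CONCAT('data/manufacturer/', model, '.jpg')
     WHERE image IS NULL OR image = '';  -- only fill products without an image yet

If the folder really varies by manufacturer, join to the manufacturer table and build the path from its name instead. Back up the table before running a mass UPDATE.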

Best way set clustered primary key on a partitioned table?

Posted: 10 Oct 2013 02:40 PM PDT

I have the following table.

create table T (K1 date, K2 int, K3 varchar(10), C1 int, C2....)  

The table will be partitioned by K1 (K1 has low selectivity; the data will be appended in K1 order). Which of the following primary keys is preferred?

alter table T add primary key (K1, K2, K3)
alter table T add primary key (K2, K3, K1)

PostgreSQL Failover Issue

Posted: 10 Oct 2013 12:31 PM PDT

I have 3 PostgreSQL (9.1) servers: 1 master and 2 slaves, with streaming replication configured between them. When I shut down the primary server and promote one of the slaves to act as primary (using the trigger file), recovery.conf is renamed to recovery.done.

Now my problem: replication itself works fine with my Rails application on Postgres, but when there is a change of primary, i.e. a slave is promoted to primary, the Rails app does not know about it. What gem should I use so that the Rails app learns of the change in primary, and how should it be configured?

Can't backup a Postgres database: FATAL: Peer authentication failed for user "postgres"

Posted: 10 Oct 2013 03:20 PM PDT

I can connect to my database no problem remotely, either from PHP scripts on our webserver or using PGAdmin3.

Unfortunately, when I try to run pg_dump backups locally on the server itself, I get:

pg_dump --username=postgres -W omega | gzip > omega.10.10.13.gz  
pg_dump: [archiver (db)] connection to database "omega" failed:
FATAL:  Peer authentication failed for user "postgres"

Previously I'd had no password at all for my database, but to try to get around this I actually gave the postgres user a password. Still no dice, peer authentication fails every time.

Here are the settings from my pg_hba.conf file... please tell me what I can do to fix this. I really want to run some backups on my database.

# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            md5
#host    replication     postgres        ::1/128                 md5

host    all             all             70.89.205.250/24        md5
host    all             all             23.21.112.163/24        md5
host    all             all             24.188.1.163/24         md5

Should I join datetime to a date using cast or range?

Posted: 10 Oct 2013 12:29 PM PDT

This question is a take-off from the excellent one posed here:

Cast to date is sargable but is it a good idea?

In my case, I am not concerned with the WHERE clause, but with joining to an events table which has a column of type DATE.

One table has DATETIME2 and the other has DATE... so I can effectively JOIN using a CAST( AS DATE) or I can use a "traditional" range query (>= date AND < date+1).

My question is which is preferable? The DATETIME values will almost never match the predicate DATE value.

I expect to stay on the order of 2M rows in the table with the DATETIME column and under 5K in the table with the DATE column (if this consideration makes a difference).

Should I expect the same behavior on the JOIN as I might using the WHERE clause? Which should I prefer to retain performance with scaling? Does the answer change with MSSQL 2012?

My generalized use-case is to treat my events table like a calendar table

SELECT
    events.columns,
    SOME_AGGREGATIONS(tasks.column)
FROM
    events
LEFT OUTER JOIN
    tasks
        --This appropriately states my intent clearer
        ON CAST(tasks.datetimecolumn AS DATE) = events.datecolumn
        --But is this more effective/scalable?
        --ON tasks.datetimecolumn >= events.datecolumn
        --AND tasks.datetimecolumn < DATEADD(day,1,events.datecolumn)
GROUP BY
    events.columns
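
One more option worth weighing, as a sketch rather than a recommendation (it assumes you can alter the tasks table): a persisted computed column makes the DATE form of the column indexable, so the join compares two DATE columns directly:

    ALTER TABLE tasks
        ADD datetimecolumn_date AS CAST(datetimecolumn AS DATE) PERSISTED;

    CREATE INDEX IX_tasks_datetimecolumn_date
        ON tasks (datetimecolumn_date);

    -- The join then becomes:
    --     ON tasks.datetimecolumn_date = events.datecolumn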

I want sql code retrieves fields from different tables in Visual Basic2010

Posted: 10 Oct 2013 11:21 AM PDT


I want SQL code that retrieves fields from different tables, in Visual Basic 2010.

I have four tables:

1) tblTeacher: TID (pk), TName
2) tblSubject: SID (pk), subName, TID (fk), TMID (fk)
3) tblTime: TMID (pk), TMValue
4) tblLecture: LecID (pk), LecDate, TID (fk), subID (fk), TMID (fk); this table holds the registered lectures.

There is a one-to-many relation between the tables (tblLecture is the many side). I want to show LecID, LecDate, TName, subName, and TMValue (for rows registered in tblLecture only) in a DataGridView. In my project I use an Access database; can I use a SELECT statement with a DataSource, DataSet, and TableAdapter, and how do I call the SQL in my project?
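
A query along these lines should return the requested columns; a sketch assuming the column names listed above (note the parentheses, which Access's SQL dialect requires around multiple joins):

    SELECT l.LecID, l.LecDate, t.TName, s.subName, tm.TMValue
    FROM ((tblLecture AS l
        INNER JOIN tblTeacher AS t ON l.TID = t.TID)
        INNER JOIN tblSubject AS s ON l.subID = s.SID)
        INNER JOIN tblTime AS tm ON l.TMID = tm.TMID;

In Visual Basic 2010 you could put this in a TableAdapter query (or an OleDbDataAdapter) and bind the resulting DataTable to the DataGridView's DataSource.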

Inserting two rows from single row

Posted: 10 Oct 2013 04:25 PM PDT

I have to import users from the usersToImport table into the userContact table. usersToImport contains telephone and e-mail information in a single row for each user, but userContact stores one kind of contact information per row.

Here's how I'm doing it currently: http://sqlfiddle.com/#!6/c9b2e/1 (for some reason, it's not outputting anything, but the code works in SSMS).

Is there a way to do this without using two different SELECTs? Maybe by self-joining usersToImport, or by using CASE somehow.
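
One single-pass pattern is to unpivot each source row with a table value constructor via CROSS APPLY (available since SQL Server 2008). A sketch with hypothetical column names, since the real ones live in the fiddle:

    INSERT INTO userContact (userID, contactType, contactValue)
    SELECT u.userID, v.contactType, v.contactValue
    FROM usersToImport AS u
    CROSS APPLY (VALUES
        ('phone', u.phone),
        ('email', u.email)
    ) AS v (contactType, contactValue)
    WHERE v.contactValue IS NOT NULL;   -- skip contacts the user didn't supply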

PostgreSQL Cascade Delete not working

Posted: 10 Oct 2013 11:49 AM PDT

What am I missing here?

I have two tables:

CREATE TABLE source
(
    "sourceid" serial,
    "state" character varying(255) REFERENCES states (state),
    "leadtime" character varying(255),
    CONSTRAINT source_pkey PRIMARY KEY (sourceid)
);
CREATE INDEX
    ON source (state ASC NULLS LAST);

CREATE TABLE updates
(
    "updateid" serial,
    "sourceid" integer REFERENCES source (sourceid) ON DELETE CASCADE,
    "skip" boolean,
    CONSTRAINT update_pkey PRIMARY KEY (updateid)
);
CREATE INDEX
    ON updates (sourceid ASC NULLS LAST);

source is the parent table and updates the child. I want any records deleted from source to cascade down and delete all records in updates that have the same sourceid.

I have also tried adding the REFERENCES constraint to the sourceid field of the source table.

Thanks! Brad
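
For what it's worth, the definition shown should cascade as written, so the constraint that actually exists in the database may differ from the DDL above. One way to check (in the pg_constraint catalog, confdeltype = 'c' means ON DELETE CASCADE and 'a' means NO ACTION):

    SELECT conname, confdeltype
    FROM pg_constraint
    WHERE conrelid = 'updates'::regclass
      AND contype = 'f';   -- foreign-key constraints on the updates table

If confdeltype is not 'c', the live constraint was likely created before the ON DELETE CASCADE clause was added; dropping and re-adding it would fix that.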

Is there a general guideline for table partitioning? [on hold]

Posted: 10 Oct 2013 02:00 PM PDT

  1. Will you partition the table when it grows to MMM MB? Or NNN rows?
  2. What size do you want each partition to be? XXX MB? Or YYY rows?

What MMM, NNN, XXX, or YYY values would you choose? For example, if you have a table 500 GB in size, you would almost certainly partition it. How about a table with 200M rows? Or a table 5 GB in size?

Multi Server Query generating Login failures

Posted: 10 Oct 2013 02:01 PM PDT

My problem is that when running multi-server queries in SSMS from SQL Server 2008 R2 against SQL Server 2005, SQL Server 2008 R2, and SQL Server 2000, I occasionally get error messages such as:

SQL Server 2008R2

Login failed for user 'DomainName\ADName'. Reason: Failed to open the explicitly specified database. [CLIENT: 21.12.0.0]
Error: 18456, Severity: 14, State: 38.

SQL Server 2005

2013-10-10 15:55:34.28 logon Login failed for user 'DomainName\ADName'.
Error: 18456, Severity: 14, State: 16.

SQL Server 2000

Login failed for user 'DomainName\ADName'. [CLIENT: 21.12.0.0]

Now, my query works fine, but given that what my query does is search for errors in logs, it is not ideal that it should also generate errors of its own.

My methodology may be flawed. What I have done is create a group under Local Server Groups and register a few dozen servers of various versions. Then I right-click on this group and select New Query to open a new batch window. Lastly, I open my saved script and run it. In rough outline, this is what it does:

use master
go
determine version of SQL as like '%8.0%' or not
create temporary table based on version and pump results of xp_readerrorlog 0-6 into it
translate temporary table to another and insert into results temporary table
select on results table
drop temporary tables
go

I am a sysadmin on all machines. Two things I tried was the 'use master' statement at the top of the script and changing the registered servers to have a "Connect to database" of master. Neither of these has eliminated the problem. Does anyone have any further suggestions?
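
One thing that may narrow it down: state 38 on the 2008 R2 error means the explicitly specified database could not be opened, which often points at the connection's initial catalog rather than the query itself. Comparing the login's default database across servers is a cheap first check, as a sketch:

    SELECT name, default_database_name
    FROM sys.server_principals
    WHERE name = 'DomainName\ADName';

(On the SQL Server 2000 instances the equivalent information lives in syslogins instead, since sys.server_principals does not exist there.)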

Can this query be optimized further?

Posted: 10 Oct 2013 03:43 PM PDT

I have 100K records in my Documents table. This is my query, which returns a page of 50 records in roughly 750 ms; I would like to bring that down further if possible. I am using the ROW_NUMBER function for paging.

exec sp_executesql N'
set arithabort off;
set statistics time on;
set transaction isolation level read uncommitted;

With cte as
(
    Select
        peta_rn = ROW_NUMBER() OVER (ORDER BY d.LastStatusChangedDateTime desc),
        d.DocumentID, d.IsReEfiled, d.IGroupID, d.ITypeID, d.RecordingDateTime,
        d.CreatedByAccountID, d.JurisdictionID,
        d.LastStatusChangedDateTime as LastStatusChangedDateTime, d.IDate,
        d.InstrumentID, d.DocumentStatusID, u.Username, it.Abbreviation AS ITypeAbbreviation,
        ig.Abbreviation AS IGroupAbbreviation, d.DocumentDate
    From ITypes it
    Inner Join Documents d on it.ITypeID = d.ITypeID
    Inner Join Users u on d.UserID = u.UserID
    Inner Join IGroupes ig on ig.IGroupID = d.IGroupID
    Where
        1=1 AND
        (d.DocumentStatusID = 9) And
        (CreatedByAccountID = @0 Or DocumentStatusID = @1 Or DocumentStatusID = @2) And
        (d.IGroupID = @3 Or d.IGroupID = @4) And
        (d.JurisdictionID = @5 Or DocumentStatusID = @6 Or DocumentStatusID = @7)
)
Select
    cte.DocumentID, cte.IsReEfiled, cte.IGroupID, cte.ITypeID, cte.RecordingDateTime,
    cte.CreatedByAccountID, cte.JurisdictionID,
    cte.LastStatusChangedDateTime as LastStatusChangedDateTime, cte.IDate, cte.InstrumentID,
    cte.DocumentStatusID, cte.IGroupAbbreviation, cte.Username, j.JDAbbreviation, inf.DocumentName,
    cte.ITypeAbbreviation, cte.DocumentDate, ds.Abbreviation as DocumentStatusAbbreviation,
    ds.Name as DocumentStatusName, (SELECT COUNT(*) FROM CTE) AS TotalRecords
    FROM cte
    Inner Join DocumentStatuses ds On cte.DocumentStatusID = ds.DocumentStatusID
    Left Join InstrumentFiles inf On cte.DocumentID = inf.DocumentID
    Inner Join Jurisdictions j on j.JurisdictionID = cte.JurisdictionID
    Where
        1=1 And
        peta_rn > @8 AND
        peta_rn <= @9
    Order by peta_rn',
    N'@0 int,@1 int,@2 int,@3 int,@4 int,@5 int,@6 int,@7 int,@8 int,@9 int',
    @0=44, @1=5, @2=9, @3=3, @4=4, @5=1, @6=5, @7=9, @8=90000, @9=90050

This is my statistics output:

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.

(50 row(s) affected)

SQL Server Execution Times: CPU time = 717 ms, elapsed time = 727 ms.

and this is my query plan XML. All indexes are created properly. I am unable to post it here due to the 30K body limit:

https://www.dropbox.com/s/vrx5ec6cpi8vn33/plan.txt

Any help would be appreciated. Thanks!

MySQL performance for key value vs multiple columns [on hold]

Posted: 10 Oct 2013 06:57 PM PDT

For a key-value store I will have 3 columns:

id as a primary key,

attr as an index with a set of values,

value can be anything.

Each id can have multiple attr.

For selecting, I will be doing

SELECT id, GROUP_CONCAT(attr)
FROM `my_table`
WHERE `attr` IN ("a", "b", "c", ...)
AND `value` = 1
GROUP BY `id`

For multiple columns

I have id as a primary key and a column for each value in attr.

For selecting, I will be doing

SELECT *
FROM `my_table`
WHERE `a` = 1
AND `b` = 1
AND `c` = 1
...

My question is: performance-wise, which design gives better query performance when there are a lot of attr values?

Duplicating database on the same server without data loss

Posted: 10 Oct 2013 08:09 PM PDT

I want to duplicate an InnoDB database on my MySQL server to test another version of the software.
Using tools such as phpMyAdmin reduces the database by around 20 MB, and a "lot" of rows are missing.

Just copying the folder under /var/lib/mysql won't work, because I didn't enable anything special for the InnoDB engine.

How can I create a duplicate of an innodb database?

Should I create a table separately for each employee or one table for all? [duplicate]

Posted: 10 Oct 2013 11:42 AM PDT

I'm trying to figure out how to set up the Database structure.

There are ~10,000 employees. There will be a table to store the basic details of each employee, and another table for storing the employees' transactions. I decided to create a separate transactions table for each employee, which means ~10,000 tables.

So my question is: is it efficient to create that many tables (~10,000), or is there another approach you can suggest? I will be using MySQL as the database with a Java client, on a LAN.
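
For what it's worth, the conventional design is one transactions table for all employees, keyed by an employee id, rather than a table per employee; ~10,000 tables is painful to query, back up, and evolve. A sketch with hypothetical column names:

    CREATE TABLE employee (
        employee_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100) NOT NULL
        -- other basic details...
    ) ENGINE=InnoDB;

    CREATE TABLE employee_transaction (
        transaction_id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        employee_id INT UNSIGNED NOT NULL,
        occurred_at DATETIME NOT NULL,
        amount DECIMAL(12,2) NOT NULL,
        FOREIGN KEY (employee_id) REFERENCES employee (employee_id),
        KEY idx_emp_time (employee_id, occurred_at)   -- keeps per-employee lookups fast
    ) ENGINE=InnoDB;

With the composite index on (employee_id, occurred_at), per-employee queries touch only that employee's slice of the table, so a single table scales fine at this size.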

Postgres functions vs prepared queries

Posted: 10 Oct 2013 03:39 PM PDT

I'm sure it's there but I've not been able to find a simple definitive answer to this in the docs or via Google:

In Postgres, are prepared queries and user-defined functions equivalent as a mechanism for guarding against SQL injection? Are there particular advantages of one approach over the other? Thanks
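
For concreteness, the two mechanisms side by side; in both, the value travels as a bound parameter and is never parsed as SQL, which is what defeats injection (a sketch against a hypothetical users table):

    -- Prepared statement: the parameter is bound, not spliced into the text
    PREPARE get_user (text) AS
        SELECT * FROM users WHERE username = $1;
    EXECUTE get_user('alice');

    -- Function: arguments are likewise bound when the body runs
    CREATE FUNCTION get_user_fn(p_username text)
    RETURNS SETOF users
    LANGUAGE sql STABLE
    AS $$ SELECT * FROM users WHERE username = $1 $$;

    SELECT * FROM get_user_fn('alice');

The practical differences lie elsewhere: prepared statements live only for the session, while functions are stored in the database and can bundle several statements. Either way, injection resurfaces if you build SQL strings dynamically inside a function (e.g. EXECUTE in PL/pgSQL without quote_literal or format).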

Classifieds Database Design

Posted: 10 Oct 2013 10:25 AM PDT

I have always worked with CMSs, but now I am trying to get into using frameworks like Laravel and Yii. My main issue is that when working with CMSs, I didn't have to think much about the database design, since it was done for me. I have my plan drawn out on paper, but I am not really sure where to go from here...

I am trying to develop a Craigslist clone, but a little more specific. I have Googled all over for designs, and this is currently what I have.

However, I want certain categories to have specific fields. Some categories may have fields in common with other categories, but not all categories are the same.

For example:

    - Categories        - Electronics          - Title          - Type          - Description          - Year        - Furniture          - Title          - Type          - Description          - Brand        ....  

Those are just two examples, but I have a huge list of Categories and the required fields for each category.

My current plan is to load all of these fields into the ad table. What effect will this have on performance? At some point there could be 60 fields attached to the ad table, but only 5-10 may be filled at a time, and the others would be empty/NULL.

What is the best way to go about associating images with ads? I was thinking to just create an assets folder and create subfolders based on the ad id, and upload images to those subfolders of the corresponding ad id. Something like...

    - Public        - Assets          - Images            - 1 (post id = 1)              - img001.jpg              - img002.jpg            ...  

What's the best way to set up this kind of database? Would sticking to MySQL be best for this? What if I want some states to have certain categories but not others?
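
One widely used alternative to a 60-column ad table full of NULLs is an attribute table (the EAV pattern). A sketch with hypothetical names: category_field declares which fields a category has, and ad_field_value stores only the values actually filled in:

    CREATE TABLE category_field (
        field_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        category_id INT UNSIGNED NOT NULL,
        name VARCHAR(50) NOT NULL        -- e.g. 'Year', 'Brand'
    );

    CREATE TABLE ad_field_value (
        ad_id INT UNSIGNED NOT NULL,
        field_id INT UNSIGNED NOT NULL,
        value VARCHAR(255) NOT NULL,
        PRIMARY KEY (ad_id, field_id)
    );

The trade-off is more joins (or a pivot) at read time in exchange for adding categories and fields without schema changes; your 5-10 filled fields per ad become 5-10 rows here.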

InnoDB Failure of some kind

Posted: 10 Oct 2013 02:25 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla but it failed. I went into its SQL and replaced ENGINE=InnoDB with MyISAM and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine

MySQL is running but not working

Posted: 10 Oct 2013 04:26 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their DBs.

Here is why it is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql from SSH as root, but I see that MySQL activity is reduced; look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

My websites using MySQL almost all say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin

#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented out most of it to return it to its simplest version... How can I get the websites to connect to MySQL again?

Mistake during Oracle 11g PITR

Posted: 10 Oct 2013 12:25 PM PDT

I tried using SET UNTIL TIME and mistyped the date. Can anyone help me understand how to get my backups back into a manageable state?

After the accidental recovery, most of my backup sets disappeared.

I recovered them and used CATALOG RECOVERY AREA, and they're listed in LIST BACKUPSET. But something still isn't right.

When I do a PITR now, I get messages that my .dbf files aren't available, and LIST BACKUPSET does show backup sets, but they are listed differently than the files which weren't included in the 'bad' recovery.

Gists with the error and the list of backupsets are here https://gist.github.com/akinsgre/5561254

How can I convert an Oracle dump file into SQL Server?

Posted: 10 Oct 2013 12:58 PM PDT

I want to ask about converting Oracle dump files (.dmp) into SQL Server backup files (.bak), where during the conversion I don't have to be connected to any database server.

I've searched for related tools, such as OraDump to SQL Server. Do you have another suggestion for solving this? I mean open source ones.


Thanks for both of your responses. I see how difficult it will be, but is there any possibility of converting the Oracle dump file another way? All of the converter tools I have found require a connection to a database server. I would be thankful if you could suggest another tool. Thanks anyway.

SQL Server LocalDB Instance; error restoring backup (MasterDBPath not available)

Posted: 10 Oct 2013 04:34 PM PDT

I have SQL Server 2012 LocalDB installed, and am connecting to it via (localdb)\v11.0.

When trying to restore a backup file, I get:

TITLE: Microsoft SQL Server Management Studio

Property MasterDBPath is not available for Information 'Microsoft.SqlServer.Management.Smo.Information'. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.2100.60+((SQL11_RTM).120210-1846+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.PropertyCannotBeRetrievedExceptionText&EvtID=MasterDBPath&LinkId=20476

I have tried setting the MasterDBPath string value in:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11E.LOCALDB\MSSQLServer\CurrentVersion

but this hasn't helped.

Anyone have any thoughts?

Table not getting created under dbo schema

Posted: 10 Oct 2013 09:05 PM PDT

When creating tables in SSMS, I've realized that if you execute the following statement:

CREATE TABLE [tableName];  

the table will be created under your own schema (and not dbo). So in order to create it under the dbo schema, you'll need to explicitly say so, like this:

CREATE TABLE [dbo].[tableName];  

Does anyone know of a way (eg. server-wide setting) so that when a table is created, the [dbo] part doesn't need to be specified?
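
As far as I know there is no server-wide switch for this: CREATE TABLE without a schema prefix targets the creating user's default schema, so the fix is per user, per database. A sketch with a hypothetical user name:

    ALTER USER [DomainName\SomeUser] WITH DEFAULT_SCHEMA = [dbo];

After that, unqualified CREATE TABLE statements from that user land in dbo; explicitly writing [dbo].[tableName] remains the safer habit.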

No NULLs, yet invalid byte sequence for encoding "UTF8": 0x00

Posted: 10 Oct 2013 02:31 PM PDT

I've spent the last 8 hours trying to import the output of mysqldump --compatible=postgresql into PostgreSQL 8.4.9, and I've read at least 20 different threads here and elsewhere about this specific problem already, but found no real usable answer that works.

MySQL 5.1.52 data dumped:

mysqldump -u root -p --compatible=postgresql --no-create-info --no-create-db --default-character-set=utf8 --skip-lock-tables rt3 > foo  

PostgreSQL 8.4.9 server as destination

Loading the data with 'psql -U rt_user -f foo' is reporting (many of these, here's one example):

psql:foo:29: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

According to the following, there are no NULL (0x00) characters in the input file.

database-dumps:rcf-temp1# sed 's/\x0/ /g' < foo > nonulls
database-dumps:rcf-temp1# sum foo nonulls
04730 2545610 foo
04730 2545610 nonulls
database-dumps:rcf-temp1# rm nonulls

Likewise, another check with Perl shows no NULLs:

database-dumps:rcf-temp1# perl -ne '/\000/ and print;' foo
database-dumps:rcf-temp1#

As the "HINT" in the error mentions, I have tried every possible way to set 'client_encoding' to 'UTF8', and I succeed but it has no effect toward solving my problem.

database-dumps:rcf-temp1# psql -U rt_user --variable=client_encoding=utf-8 -c "SHOW client_encoding;" rt3
 client_encoding
-----------------
 UTF8
(1 row)

database-dumps:rcf-temp1#

Perfect, yet:

database-dumps:rcf-temp1# psql -U rt_user -f foo --variable=client_encoding=utf-8 rt3
...
psql:foo:29: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
...

Barring the "According to Hoyle" correct answer, which would be fantastic to hear, and knowing that I really don't care about preserving any non-ASCII characters for this seldom-referenced data, what suggestions do you have?

Update: I get the same error with an ASCII-only version of the same dump file at import time. Truly mind-boggling:

database-dumps:rcf-temp1# # convert any non-ASCII character to a space
database-dumps:rcf-temp1# perl -i.bk -pe 's/[^[:ascii:]]/ /g;' mysql5-dump.sql
database-dumps:rcf-temp1# sum mysql5-dump.sql mysql5-dump.sql.bk
41053 2545611 mysql5-dump.sql
50145 2545611 mysql5-dump.sql.bk
database-dumps:rcf-temp1# cmp mysql5-dump.sql mysql5-dump.sql.bk
mysql5-dump.sql mysql5-dump.sql.bk differ: byte 1304850, line 30
database-dumps:rcf-temp1# # GOOD!
database-dumps:rcf-temp1# psql -U postgres -f mysql5-dump.sql --variable=client_encoding=utf-8 rt3
...
INSERT 0 416
psql:mysql5-dump.sql:30: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 455
INSERT 0 424
INSERT 0 483
INSERT 0 447
INSERT 0 503
psql:mysql5-dump.sql:36: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 502
INSERT 0 507
INSERT 0 318
INSERT 0 284
psql:mysql5-dump.sql:41: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 382
INSERT 0 419
INSERT 0 247
psql:mysql5-dump.sql:45: ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encod.
INSERT 0 267
INSERT 0 348
^C

One of the tables in question is defined as:

                                        Table "public.attachments"       Column      |            Type             |                        Modifie  -----------------+-----------------------------+--------------------------------   id              | integer                     | not null default nextval('atta)   transactionid   | integer                     | not null   parent          | integer                     | not null default 0   messageid       | character varying(160)      |   subject         | character varying(255)      |   filename        | character varying(255)      |   contenttype     | character varying(80)       |   contentencoding | character varying(80)       |   content         | text                        |   headers         | text                        |   creator         | integer                     | not null default 0   created         | timestamp without time zone |  Indexes:      "attachments_pkey" PRIMARY KEY, btree (id)      "attachments1" btree (parent)      "attachments2" btree (transactionid)      "attachments3" btree (parent, transactionid)  

I do not have the liberty to change the type for any part of the DB schema. Doing so would likely break future upgrades of the software, etc.

The likely problem column is 'content', of type 'text' (perhaps others in other tables as well). As I already know from previous research, PostgreSQL will not allow the NUL byte (0x00) in 'text' values. However, please see above where both sed and Perl show no NUL characters, and then further down where I strip all non-ASCII characters from the entire dump file, but it still barfs.

Is it possible to restore sql-server bak and shrink the log at the same time?

Posted: 10 Oct 2013 08:11 PM PDT

We have a .bak file from a client that we have transferred to our developer offices for problem investigation. The backup is currently 25 GB and the restored database is about the same size, but it needs 100 GB of space to restore. I believe this is because the client's database is set up with a 75 GB transaction log. After restoring the database we can shrink the log file, but is there a way to do this as part of the restore?
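
As far as I know, RESTORE always recreates the files at the sizes recorded in the backup, so the shrink has to happen after the restore completes. The usual follow-up, as a sketch (the logical log file name here is hypothetical; SELECT name FROM sys.database_files shows the real one):

    USE DBName;   -- the restored database
    GO
    DBCC SHRINKFILE (DBName_Log, 1024);   -- target size in MB
    GO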
