Wednesday, June 12, 2013

[SQL Server 2008 issues] SSRS Memory utilisation

SSRS Memory utilisation

Posted: 11 Jun 2013 01:19 AM PDT

Hi Team, please suggest how to investigate Reporting Services CPU/memory utilisation.

SQL Server Corporate Standards - Automated Compliance Checking

Posted: 21 May 2013 08:43 AM PDT

Does anyone know of a flexible automated SQL syntax checker? I want to be able to analyse scripts before they are deployed to our test systems and check for certain things, such as that each statement has an appropriate error check after it. I want to make sure each statement is commented. I want to spot syntax that would be legal on an on-premise SQL Server but not in Azure. I am expecting to configure my own rules. Does such a thing exist? I have started to write one, but it is clearly complicated because I am effectively having to simulate SQL Server's own parser so I can break down and analyse statements. Any feedback will be much appreciated (positive or negative). Thanks, Tim
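For basic syntax validation (though not rule-based analysis), T-SQL itself can parse a batch without executing it; a minimal sketch using SET PARSEONLY, which only catches parse errors, not custom rules like missing comments:

[code="sql"]
-- Parse the following statements without compiling or executing them
SET PARSEONLY ON;

SELECT col1 FORM dbo.SomeTable;  -- typo in FROM is reported as a syntax error

SET PARSEONLY OFF;
[/code]

For configurable rules (error checks after each statement, Azure-only restrictions, comment coverage), a real parser is still needed, as Tim suspects.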

Hide databases from a specific user

Posted: 11 Jun 2013 05:36 PM PDT

Dear friends, I want to define a login on the server that is the db_owner of database X but cannot see the other databases on this server. How can I do this? Best regards, Zohreh
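A common approach (a sketch, assuming a login named [zohreh] and a database named [X]) is to deny the VIEW ANY DATABASE permission and make the login the owner of the database, since owners can always see their own database:

[code="sql"]
USE master;

-- Hide all databases the login does not own
DENY VIEW ANY DATABASE TO [zohreh];

-- Owners can still see their own database in metadata
ALTER AUTHORIZATION ON DATABASE::[X] TO [zohreh];
[/code]

Note the login will still see master and tempdb, which are visible to every login.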

Moving databases to new server. NEED HELP please please

Posted: 11 Jun 2013 04:07 AM PDT

Please help me here. I have to rebuild the clusters, so the plan is to move the SQL databases to a new temporary server and rebuild the existing servers from scratch. The system engineer says he will shut down the existing servers and present the storage to a new stand-alone server. If they do that, will the databases still exist, and will they appear when I connect to the instance, or do I still have to restore them on the new server? Will the process damage the databases? Please help me; I need a quick answer.
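If the data and log files survive intact on the re-presented storage, the databases will not appear automatically; they have to be attached to the new instance. A minimal sketch, with hypothetical file paths:

[code="sql"]
-- Attach an existing database from files that came over with the storage
CREATE DATABASE MyDatabase
ON (FILENAME = 'E:\Data\MyDatabase.mdf'),
   (FILENAME = 'F:\Logs\MyDatabase_log.ldf')
FOR ATTACH;
[/code]

Taking fresh backups before the storage move is still strongly advisable in case anything goes wrong.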

Isolation levels

Posted: 11 Jun 2013 04:37 PM PDT

Hi all, what is the default isolation level for SQL Server? Has it changed in newer versions? What is it used for? If we have to change it, how do we change that level? I have done some googling, but I am unable to understand its purpose and why it is so important.
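For reference, the default in SQL Server has always been READ COMMITTED, and that has not changed in newer versions. A quick sketch showing how to check and change the level for a session:

[code="sql"]
-- Check the isolation level of the current session
SELECT CASE transaction_isolation_level
           WHEN 1 THEN 'READ UNCOMMITTED'
           WHEN 2 THEN 'READ COMMITTED'
           WHEN 3 THEN 'REPEATABLE READ'
           WHEN 4 THEN 'SERIALIZABLE'
           WHEN 5 THEN 'SNAPSHOT'
       END AS isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;

-- Change it for the current session only
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
[/code]

The level controls how much concurrent transactions can interfere with each other (dirty reads, non-repeatable reads, phantoms), which is why it matters for blocking and consistency.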

Find Schedule based on business rules

Posted: 11 Jun 2013 11:31 AM PDT

Hi Experts, I need your help on the case described below. I need to find the available schedule based on the rules defined here. A schedule is defined for a Camera Room, but a Camera Room may have multiple schedules. A weekly schedule is stored on a week-day basis, whereas a one-time schedule is stored as an actual date/time.

• Weekly
  o There can be weekly schedules for a specific Camera Room, or for ANY (-1); in this case -1 overrides.
• One-time
  o There can be multiple weekly schedules for a resource.
  o One-time overrides weekly schedules, and can be Available or Unavailable.

A script is attached to create the schema and insert some sample data. I have also attached a sheet where I have tried to explain what the actual outcome should be. The green ones should come through as-is, but the white ones should be overridden with the values specified in the Override column. Please let me know if I have missed something or if more information is needed. Thanks everybody in advance for the help.

How do you drag a table to view diagram Pane

Posted: 11 Jun 2013 10:35 AM PDT

There must be a way to drag a table into the design diagram pane of a view. What is the configuration setting to allow this? Currently, in my SQL Server 2008 R2 install, this option is not set up.

Seeking Feedback/Recommendations on Auditing, both Simple and Detailed

Posted: 11 Jun 2013 09:45 AM PDT

[b]SQL PLATFORM:[/b] 2008 R2 Enterprise 64-bit

[b]SIMPLE AUDITING =[/b] Capture the name and date/time for each row that is inserted or updated in a table. This is just INSERT & UPDATE, since a DELETE action would leave no row to store the info.

[b]DETAIL AUDITING =[/b] Capture the name of the user and the date/time of the command (UPDATE, INSERT and DELETE).

Currently I can implement simple auditing and detail auditing via table triggers and additional audit tables (tables that store info from the DELETED table any time an UPDATE or DELETE DML statement is executed). However, I'm looking for a better, more robust solution, ideally one that's native to SQL Server if 2008 R2 has such a thing.

My company has been using SQL Server 2000, and more recently 2005, for management of our primary accounting database, which is from a third-party vendor that specializes in vertical accounting markets. We recently moved over to 2008 R2 and I'm hoping this will open up some options for us, like better auditing.

Does anyone know if SQL 2008 R2 natively has table-level auditing that stores the info either in the table where the change has been made (simple auditing) or in a secondary audit table when you want to capture not only who and when but the "what changed" as well? From what I've read about SQL SERVER AUDIT, you don't have the option to save this info to a table, and that's important for us. In fact, I don't know why someone would prefer to use Windows logs, except for performance and trying to avoid adding to the DB server workload and/or size by storing the audit info within the DB. I had hoped that SQL 2008 R2 might natively offer a new data type that, when you set a column to it, would auto-record the login info of the user account/login that the DML statement is being executed under. Then again, maybe I'm the odd one for thinking Windows logs are not the best place to store audit info. It just seems logical to me that you'd want to store that info in a normalized fashion so you can use T-SQL to query/mine the audit info.

In any event, if you know of a third-party product (perhaps something by Red Gate) that can do this, please share that in a post. Thanks
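One native 2008 R2 Enterprise feature that does write change history to queryable tables is Change Data Capture. A minimal sketch, assuming a hypothetical table dbo.Invoices:

[code="sql"]
USE AccountingDB;
-- Enable CDC for the database, then for a specific table
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Invoices',
     @role_name     = NULL;

-- Changes then become queryable from the generated change table
SELECT * FROM cdc.dbo_Invoices_CT;
[/code]

The caveat: CDC records what changed but not which login changed it, so triggers (or SQL Server Audit) are still needed for the "who".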

Using the same partition function and scheme for multiple tables

Posted: 04 Jun 2013 02:58 PM PDT

I have a database where most of the tables have an integer field, say "SetId", denoting a batch of data. If a new SetId is created, all these tables will get data related to the new SetId. SetId governs data retention and purging, and it also forms part of the filtering in many user queries on these tables. I am thinking of partitioning all these tables on the SetId column, which I am sure would be beneficial for performance and data maintenance. My question is: do I use a common partition function and partition scheme to partition all the tables on SetId, or is it more advisable to use separate partition functions and schemes for each table?
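For reference, SQL Server does allow one function/scheme pair to back many tables. A minimal sketch with hypothetical boundary values:

[code="sql"]
-- One partition function and scheme, shared by multiple tables
CREATE PARTITION FUNCTION pfSetId (int)
AS RANGE RIGHT FOR VALUES (100, 200, 300);

CREATE PARTITION SCHEME psSetId
AS PARTITION pfSetId ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders  (SetId int, OrderId  int) ON psSetId (SetId);
CREATE TABLE dbo.Details (SetId int, DetailId int) ON psSetId (SetId);
[/code]

The trade-off to weigh: with a shared function, a SPLIT or MERGE affects every table built on it at once, which keeps the tables consistent but removes per-table flexibility.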

Finding a word\acronym in a field

Posted: 11 Jun 2013 07:45 AM PDT

Hi Guys, how would I go about finding a specific "word" in a field? What is the best way of returning records containing the word "LC" only where LC stands on its own and is not part of another word/acronym? E.g., given the values "LC,AMB", "LCAND, CMB", "ABC, LC" and "LCWelcome", the records returned would be "LC,AMB" and "ABC, LC". Thanks, Don
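One common trick (a sketch, assuming a table dbo.Codes with a comma-separated column CodeList) is to pad the string with delimiters so that a whole-word match becomes a simple LIKE:

[code="sql"]
SELECT CodeList
FROM dbo.Codes
-- Strip spaces and wrap in commas so 'LC' must be a complete list element
WHERE ',' + REPLACE(CodeList, ' ', '') + ',' LIKE '%,LC,%';
[/code]

This matches "LC,AMB" and "ABC, LC" but not "LCAND, CMB" or "LCWelcome".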

Need Help with Encryption

Posted: 11 Jun 2013 07:16 AM PDT

Hi Friends, I want to know how we can encrypt specific objects (tables) in a database. Can we do it at the field level within a table? And what does "overhead" actually mean in the context of encryption? Thanks.
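SQL Server 2008 supports column-level (cell) encryption through the built-in key hierarchy. A minimal sketch, with hypothetical key names and a hypothetical table dbo.Cards(CardNumberEnc varbinary(256)):

[code="sql"]
-- One-time setup: master key, certificate, symmetric key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!';
CREATE CERTIFICATE CertCardData WITH SUBJECT = 'Card data protection';
CREATE SYMMETRIC KEY SymCardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CertCardData;

-- Encrypt on write, decrypt on read
OPEN SYMMETRIC KEY SymCardKey DECRYPTION BY CERTIFICATE CertCardData;

INSERT INTO dbo.Cards (CardNumberEnc)
VALUES (ENCRYPTBYKEY(KEY_GUID('SymCardKey'), N'4111111111111111'));

SELECT CONVERT(nvarchar(50), DECRYPTBYKEY(CardNumberEnc)) AS CardNumber
FROM dbo.Cards;

CLOSE SYMMETRIC KEY SymCardKey;
[/code]

The "overhead" is the extra CPU for encrypting/decrypting, the larger varbinary storage for the ciphertext, and the loss of useful indexing on the encrypted column. Whole-database TDE (Enterprise only) is the alternative when field-level granularity isn't required.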

Converting DateDiff to HH:MM:SS

Posted: 01 Nov 2012 04:27 AM PDT

Evening All, we're having some issues with some data and I will do my best to explain. The question relates to vehicle tracking. We have Vehicle ID, Journey Start Time, Journey End Time and Journey Duration. Vehicle ID is an int; Journey Start Time and End Time are datetime (YYYY-MM-DD hh:mm:ss); Journey Duration is (hh:mm:ss). To calculate journey duration we have applied Convert(varchar, Start_Time - End_Time, 108), which gives us the duration for a single journey. Now the issue is that we would like the total journey duration per vehicle, and we are having some trouble summing the durations. What we are trying to do is: Select Distinct(ID), Sum(Convert(varchar, Start_Time - End_Time, 108)), but we always end up with errors. So we tried a DATEDIFF between start time and end time and managed to get the journey time in seconds. Now what we'd like to do is sum the number of seconds and then convert that to hh:mm:ss, but it doesn't work! I hope you can help; this has been an extremely unproductive and frustrating day. Many thanks, Waqqas.Zia
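Since a varchar can't be summed, the usual approach is to sum the seconds first and format only at the end. A sketch, assuming a table dbo.Journeys; the string-building form keeps working even when a vehicle's total exceeds 24 hours (where style 108 would wrap around):

[code="sql"]
SELECT  Vehicle_ID,
        -- total seconds per vehicle
        SUM(DATEDIFF(SECOND, Start_Time, End_Time)) AS TotalSeconds,
        -- formatted as hh:mm:ss, allowing more than 24 hours
        CAST(SUM(DATEDIFF(SECOND, Start_Time, End_Time)) / 3600 AS varchar(10))
          + ':' + RIGHT('0' + CAST(SUM(DATEDIFF(SECOND, Start_Time, End_Time)) / 60 % 60 AS varchar(2)), 2)
          + ':' + RIGHT('0' + CAST(SUM(DATEDIFF(SECOND, Start_Time, End_Time)) % 60 AS varchar(2)), 2)
          AS TotalDuration
FROM    dbo.Journeys
GROUP BY Vehicle_ID;
[/code]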

Production / Dev Design

Posted: 11 Jun 2013 05:48 AM PDT

There doesn't seem to be a good place to ask this in the 2008 section, so I am placing it here. Suppose you have a production system with a number of SQL Servers, each holding an instance, and each instance holding a number of databases. Applications use data from outside of their own database, often quite a few. So how do you set up and maintain a development environment? Every time a programmer starts a project, they currently have to check with everyone else and update a number of databases from production. This seems like a lot of work, and there should be a better way to keep the two in sync. I have thought about installing one dev instance per developer and having them release work into a QA system, but this also seems like a lot of overkill, and overhead, for our non-production server.

ErrorMessage as Output parameter of a stored procedure

Posted: 11 Jun 2013 01:14 AM PDT

I want to create a stored procedure to insert data into the database. If for any reason the insertion fails, e.g. a primary key violation or an invalid data type, I want my stored procedure to return the relevant error message as an output parameter: storedproc(input parameters inp1, inp2, inp3, inp4; output parameter errorMessage), where inp1+inp2 is the primary key. Can someone please help me write such a stored procedure? Thanks!
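A minimal sketch using TRY/CATCH, with a hypothetical table dbo.MyTable whose primary key is (inp1, inp2):

[code="sql"]
CREATE PROCEDURE dbo.InsertMyTable
    @inp1 int,
    @inp2 int,
    @inp3 varchar(50),
    @inp4 varchar(50),
    @errorMessage nvarchar(2048) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET @errorMessage = NULL;  -- NULL means success

    BEGIN TRY
        INSERT INTO dbo.MyTable (col1, col2, col3, col4)
        VALUES (@inp1, @inp2, @inp3, @inp4);
    END TRY
    BEGIN CATCH
        -- Pass the relevant error text back to the caller
        SET @errorMessage = ERROR_MESSAGE();
    END CATCH
END;
GO

-- Usage
DECLARE @err nvarchar(2048);
EXEC dbo.InsertMyTable 1, 2, 'a', 'b', @errorMessage = @err OUTPUT;
SELECT @err AS ErrorMessage;
[/code]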

DMV sys.dm_db_missing_index_group_stats returns no rows

Posted: 05 Oct 2011 10:31 AM PDT

Hi All, I am using a script to review missing indexes on various SQL Servers. Most of the time it works just fine. Sometimes, however, certain DMVs that are part of the script just don't return any rows. Usually the main issue is sys.dm_db_missing_index_group_stats.

1. I know that restarting the service clears DMVs - that's not it :-)
2. I know I need the VIEW SERVER STATE permission - that's not it either :-)
3. I know I need actual missing indexes: select * from sys.dm_db_missing_index_details returns 573 rows, yet select TOP 1 * from sys.dm_db_missing_index_group_stats returns 0 rows.

This renders all the various missing index scripts useless, because the JOIN doesn't work, and so the script doesn't return any results. I keep running into servers with this issue, and so far only restarting the service has resolved it. Data collection starts from scratch, and within just a few minutes I get data in sys.dm_db_missing_index_group_stats. Check back a few days, weeks or sometimes months later, and the DMV again returns no results - same user, same DB, etc. I have Googled the heck out of this issue and can't find a solution. It is frustrating, since missing indexes are often a major issue in the applications I am troubleshooting on many different SQL Servers. Any help would be greatly appreciated :-)
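For context, this is the usual JOIN shape that comes up empty when the group-stats DMV has no rows (a generic sketch of the standard pattern):

[code="sql"]
SELECT  d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
FROM    sys.dm_db_missing_index_details d
JOIN    sys.dm_db_missing_index_groups g
        ON g.index_handle = d.index_handle
JOIN    sys.dm_db_missing_index_group_stats s
        ON s.group_handle = g.index_group_handle;
[/code]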

Cursor Logic causing Blocking ?

Posted: 11 Jun 2013 01:54 AM PDT

I'm trying to troubleshoot some inherited code that uses cursors and seems to be causing blocking. The cursor gets loaded from a select statement and eventually does an insert into a production table. I think it loops around and does one insert per record in the cursor. Is a lock held on the table being inserted into until the cursor loop completes? Or is there a lock-release-lock-release cycle for each loop iteration? Other users are trying to issue updates against the same table but seem to be blocked for long periods of time. Or am I being too vague to get an answer? TIA
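The answer usually hinges on transaction scope rather than the cursor itself: under autocommit each INSERT commits and releases its locks, but if the whole loop runs inside one transaction, the exclusive locks accumulate (and may escalate to a table lock) until the end. A sketch of the difference, with a hypothetical target table:

[code="sql"]
-- Pattern A: one transaction around the loop.
-- Locks on dbo.ProductionTable accumulate until COMMIT,
-- so other sessions' updates wait for the whole loop.
BEGIN TRANSACTION;
    -- ... cursor loop: one INSERT INTO dbo.ProductionTable per row ...
COMMIT TRANSACTION;

-- Pattern B: autocommit (no explicit transaction).
-- Each INSERT commits and releases its locks immediately.
[/code]

Watching sys.dm_tran_locks while the job runs will show whether locks are piling up across the loop.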

Managing One-Time Report Subscriptions from SharePoint

Posted: 11 Jun 2013 03:00 AM PDT

We have many jobs created on our report database server from users running one-time reports in SharePoint. If we delete the job from the agent, it will be recreated by SharePoint. However, SharePoint does not delete these jobs (at least not by default) once they are complete. Anyone know the preferred way to clean these up?

First Monday of April in a given year

Posted: 11 Jun 2013 12:56 AM PDT

Hi All, could anyone please let me know how to find the first Monday of April, given the year? Say year = 2013; how do I find the first Monday of April 2013? Any help with this please. Thanks!
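One DATEFIRST-independent sketch, exploiting the fact that 1900-01-01 was a Monday:

[code="sql"]
DECLARE @year int;
SET @year = 2013;

DECLARE @apr1 datetime;
SET @apr1 = CAST(CAST(@year AS char(4)) + '0401' AS datetime);

-- Days to add to land on the first Monday on or after April 1st
SELECT DATEADD(DAY,
               (7 - DATEDIFF(DAY, '19000101', @apr1) % 7) % 7,
               @apr1) AS FirstMondayOfApril;  -- 2013-04-01 (April 1st 2013 was a Monday)
[/code]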

creating logon & logoff triggers in MSSQL

Posted: 03 Jun 2013 01:02 AM PDT

We have several databases that we track user logons for; until now, all these databases resided in Oracle. For Oracle I created a SYS trigger that logs [i]OS user, database user, program used[/i], etc., as well as logon time & logoff time, writing to a table I called "audlog", and then I query the table every month. This is what I created in Oracle:

[code="sql"]
CREATE TABLE audlog.session_audit (
  user_name  VARCHAR2(30),
  log_date   DATE,
  session_id VARCHAR2(30),
  ip_addr    VARCHAR2(30),
  hostname   VARCHAR2(30),
  auth_type  VARCHAR2(30),
  os_user    VARCHAR2(30),
  event      VARCHAR2(8),
  program    VARCHAR2(30)
) tablespace users;

CREATE OR REPLACE TRIGGER Trg_Logon_Info AFTER LOGON ON DATABASE
DECLARE
  Program VARCHAR2(30);
BEGIN
  SELECT program INTO Program FROM v$session
   WHERE sid = (SELECT MAX(sid) FROM v_$mystat);
  INSERT INTO audlog.session_audit VALUES (
    user,
    sysdate,
    SYS_CONTEXT('USERENV', 'SESSIONID'),
    SYS_CONTEXT('USERENV', 'IP_ADDRESS'),
    SYS_CONTEXT('USERENV', 'HOST'),
    SYS_CONTEXT('USERENV', 'AUTHENTICATION_TYPE'),
    SYS_CONTEXT('USERENV', 'OS_USER'),
    'LOG ON',  -- Event
    Program
  );
END;
/

CREATE OR REPLACE TRIGGER Trg_Logoff_Info BEFORE LOGOFF ON DATABASE
DECLARE
  Program VARCHAR2(30);
BEGIN
  SELECT program INTO Program FROM v$session
   WHERE sid = (SELECT MAX(sid) FROM v_$mystat);
  INSERT INTO audlog.session_audit VALUES (
    user,
    sysdate,
    SYS_CONTEXT('USERENV', 'SESSIONID'),
    SYS_CONTEXT('USERENV', 'IP_ADDRESS'),
    SYS_CONTEXT('USERENV', 'HOST'),
    SYS_CONTEXT('USERENV', 'AUTHENTICATION_TYPE'),
    SYS_CONTEXT('USERENV', 'OS_USER'),
    'LOG OFF',  -- Event
    Program
  );
END;
/
[/code]

I would like to create something in MSSQL that shows basically the same information, so I can keep the reports consistent-looking. Can I duplicate this trigger in MSSQL?
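SQL Server (2005 SP2 and later) supports server-level logon triggers, but there is no logoff trigger; logoff times would have to come from the default trace or event notifications instead. A sketch of the logon half, with a hypothetical audit table:

[code="sql"]
USE master;
GO
CREATE TABLE dbo.session_audit (
    user_name  sysname,
    log_date   datetime DEFAULT GETDATE(),
    session_id int,
    hostname   nvarchar(128),
    program    nvarchar(128),
    event      varchar(8)
);
GO
CREATE TRIGGER Trg_Logon_Info
ON ALL SERVER
WITH EXECUTE AS 'sa'   -- so low-privilege logins can still write the audit row
FOR LOGON
AS
BEGIN
    INSERT INTO master.dbo.session_audit (user_name, session_id, hostname, program, event)
    SELECT ORIGINAL_LOGIN(), @@SPID, HOST_NAME(), APP_NAME(), 'LOG ON';
END;
GO
[/code]

Be careful: if a logon trigger fails (for example, the audit table is unavailable), all logons fail, so keep the DAC available while testing.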

Backup job tied to user account fails.

Posted: 10 Jun 2013 11:05 PM PDT

What's the best practice for setting up maintenance plans? We had a former admin who had set up our backup maintenance plans. The backups stopped occurring when her Active Directory account was disabled, because her account was the owner of the SQL Server Agent job. Should we create a service account for the purpose of setting up backups? How do you handle this?
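A common fix is to reassign job ownership to a non-personal principal such as sa (or a dedicated service account), since job ownership determines how the job authenticates. A sketch with a hypothetical login name:

[code="sql"]
-- Reassign every job owned by the departed admin in one go
EXEC msdb.dbo.sp_manage_jobs_by_login
     @action = N'REASSIGN',
     @current_owner_login_name = N'DOMAIN\former.admin',
     @new_owner_login_name     = N'sa';

-- Or change a single job
EXEC msdb.dbo.sp_update_job
     @job_name = N'BackupPlan.Subplan_1',
     @owner_login_name = N'sa';
[/code]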

Need some clarification on Initial Size and Autogrowth?

Posted: 10 Jun 2013 11:56 PM PDT

Does the Initial Size (MB) under database properties/Files grow as the database grows? My understanding is that the initial size grows according to the autogrowth setting. I recently had a tempdb issue: the initial size shows 8 MB (data) and 1 MB (log), but the actual size of tempdb was about 29 GB. Can someone clear up this doubt for me? Thanks, SueTons.

System Databases / msdb / System Tables / dbo.backup*

Posted: 10 Jun 2013 11:10 PM PDT

Looking at the tables within the msdb database, I can see that the dbo.backup* tables have been populated since the server was first commissioned. These tables contain backup set information and details of each backup, including file size and location (possibly useful to some). Currently my tables sit at 144,462 and 72,231 records. Question: can these tables, or should these tables, be cleared down, or at least have some records purged? This is not due to lack of space on the server; it's just a housekeeping question. Thanks
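msdb ships with a purge procedure for exactly this housekeeping. A sketch that trims history older than a year:

[code="sql"]
-- Delete backup/restore history older than the given date
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(YEAR, -1, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
[/code]

Many sites schedule this (or the maintenance plan History Cleanup task) so the tables never balloon in the first place.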

Can anyone give me the name of a forum with database testing tips?

Posted: 10 Jun 2013 08:58 PM PDT

Can anyone give me the name of a forum that can help me with testing a database?

How to connect with SQL Server using http based API

Posted: 10 Jun 2013 11:01 PM PDT

Hi, I am new to this. I want to connect SQL Server 2008 with a CRM called Salesforce (without using any existing tools). An expert suggested that it can be done using the [b]HTTP-based API of SQL Server 2008[/b]. I want to know whether an HTTP-based API of SQL Server is available and how it can be used. Or is there any other way by which I can connect with SQL Server (by coding)? [b]Please help.[/b] Thanks

Plan in cache

Posted: 10 Jun 2013 08:01 PM PDT

Why does a plan get removed from the cache? I mean, what causes a plan to be removed from cache? I thought UPDATE STATISTICS caused it, but I ran a query with sp_executesql (a parameterized query) and the plan was in cache; I checked with this query:

SELECT (SELECT query_plan FROM sys.dm_exec_query_plan(qs.plan_handle)),
       est.text,
       creation_time,
       last_execution_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) est
ORDER BY creation_time

But after

UPDATE STATISTICS [dbo].[Table] WITH FULLSCAN

the plan was still in cache.

SQL Date Issue

Posted: 09 Jun 2013 08:04 PM PDT

Hi, I wish to create a column 'GoalBehindFlag' which will work like a flag, giving me 0s and 1s depending on whether the datetime saved in another column is greater than or less than the system date/time. Can anyone please correct me where I am going wrong in this:

select a.casecode,
       a.[Goal_Weighing],
       b.ddate,
       (b.ddate < sysdatetime) as GoalBehindFlag
from [dbo].[VRep_Calc_1] as a
inner join [dbo].[VRep_Goals] as b
        on a.casecode = b.casecode
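T-SQL has no boolean column expressions, so the comparison must go through CASE (and SYSDATETIME is a function, so it needs parentheses). A likely fix:

[code="sql"]
SELECT a.casecode,
       a.[Goal_Weighing],
       b.ddate,
       CASE WHEN b.ddate < SYSDATETIME() THEN 1 ELSE 0 END AS GoalBehindFlag
FROM   [dbo].[VRep_Calc_1] AS a
INNER JOIN [dbo].[VRep_Goals] AS b
        ON a.casecode = b.casecode;
[/code]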

0xc0202009 Primary Key Error

Posted: 10 Jun 2013 08:38 PM PDT

I keep receiving the error below when trying to import data from a flat file source (CSV file) in SQL 2008 R2 Express edition.

Error 0xc0202009: Data Flow Task 1: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80004005 Description: "The statement has been terminated.". An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint 'PK_UoSW_STAGING_TABLE'. Cannot insert duplicate key in object 'dbo.UoSW_STAGING_TABLE'. The duplicate key value is (00492FF894C64BEFAC9C3D1866773CE2).". (SQL Server Import and Export Wizard)

I have checked the import source for the primary key SQL says is causing the error, but when I find that primary key in the CSV file, only one entry exists. I thought perhaps it was due to that particular record, so I removed it and ran the import wizard again, but the same error occurs with a different key. I cannot see how I can get around this when the primary key isn't being duplicated at all. I have also checked other tables within the database to make sure I haven't accidentally imported the CSV to the wrong location. Any help would be tremendous at this stage, as this has driven me insane for hours!
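Two quick checks often crack this: whether the staging table already holds rows from a previous (partially successful) run, and whether the file really is duplicate-free. A sketch; the key column name is hypothetical:

[code="sql"]
-- Is the staging table already populated from an earlier attempt?
SELECT COUNT(*) FROM dbo.UoSW_STAGING_TABLE;

-- After loading into a keyless copy of the table, look for real duplicates
SELECT KeyColumn, COUNT(*) AS cnt
FROM   dbo.UoSW_STAGING_COPY
GROUP BY KeyColumn
HAVING COUNT(*) > 1;
[/code]

If the first query returns rows, truncating the staging table before each import run removes the "phantom" duplicates.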

Division problem in SQL

Posted: 06 Jun 2013 08:38 PM PDT

Hello All, can anyone please explain why we always get 0 when we divide 1 by any larger number? select 1/2 gives me 0. Thank you for your help.
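Both operands are integers, so SQL Server performs integer division and truncates the fractional part. Making either operand a decimal fixes it:

[code="sql"]
SELECT 1 / 2;                         -- 0   (integer division)
SELECT 1.0 / 2;                       -- 0.500000
SELECT CAST(1 AS decimal(10,4)) / 2;  -- 0.50000... (an explicit cast works too)
[/code]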

TPR : Time Persistant Repository

Posted: 10 Jun 2013 11:22 PM PDT

Let me ask my question first, then explain (sorry for the length).

Solution needed for: massive amounts of daily data purges.

VM: 8 cores, 64 GB, Windows 2008 R2 SP2; SQL 2008 R2 SP2 Enterprise.

Scope: 30+ data sources (phase 1 has 5), ranging from flat files to SQL/Oracle/Postgres databases. Each SQL Server destination database has 10 to 400 tables. The basic concept is to keep 15-30 rolling days of data, with month-end days kept for 7 years. Some data is loaded daily, some weekly. In the larger destination databases, there are some tables that never get purged of data. I have a control table with a row for each: database, table, month-end type, number of days kept, last load date, next load date.

I have 2 VMs built, testing 2 solutions. Both work; both are not meeting the timeline set for the purges. In both cases, a cur (current) and me (month end) schema are created to keep the 7 years of month-end data.

1) Partitioning all rolling tables: the number of partitions = the number of rolling days; merge new, drop old.
2) Tables: clustered index on date of load : purge data : drop index : create index : load data.

The smallest destination database fully loaded with 3 month-ends is around 10 GB. The largest with 2 month-ends is 700 GB. Both have 100% of all data and indexes compressed.

Has anyone done this? Am I missing an overall solution? Wanted to throw this out there before I report that we can't meet the deadlines given.

Data Cube / SQL Server 2012 / BI

Posted: 10 Jun 2013 10:02 PM PDT

Hi Folks, I am a newbie to this forum and relatively new to SQL, especially SQL Server. My first query in this area: I have got 2 tables with one common field on which I can make a join.

A) I have been asked to convert this data into a cube. All I know is that a cube is more than 2-dimensional data. I would love your input on where I can learn about cubes and how I achieve this in SQL Server 2012, as I already have a license for it.

B) The objective of this exercise, I believe, is to keep ready as many reports in as many permutations/combinations as possible, so that when a user asks for this data, it's readily available and very fast to execute. Wouldn't a view just work for this, or is that too simple and too time-consuming to modify every time there is a different request?

C) We also would like to use the BI part of SQL Server 2012. The end result is for users to use Excel to interact and play with the data/reports. Can users, directly through Excel, leverage the BI tools of SQL Server 2012?

Thanks in advance. Regards, Ross

Large import with Replication

Posted: 10 Jun 2013 08:59 PM PDT

Hi all, we are about to do a large import of data through an application (bulk-logged recovery would be of no advantage) on tables that are also replicated. Are there any pitfalls I should be aware of? At present I am just testing how big the log would get, but how would I test the performance of replication for the 7 million records that are being imported? Thanks
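Assuming transactional replication, tracer tokens give a direct latency measurement you can take during the test import. A sketch with a hypothetical publication name:

[code="sql"]
-- On the publisher, in the published database
EXEC sys.sp_posttracertoken @publication = N'MyPublication';

-- List posted tokens and their ids
EXEC sys.sp_helptracertokens @publication = N'MyPublication';

-- Show latency for one token (use an id from the previous call)
EXEC sys.sp_helptracertokenhistory
     @publication = N'MyPublication',
     @tracer_id   = 1;
[/code]

Posting tokens periodically while the 7 million rows load shows how far the distribution agent falls behind.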

Tuesday, June 11, 2013

[how to] Prune unused joins

Prune unused joins

Posted: 11 Jun 2013 08:49 PM PDT

I expected that selecting from a view would not join the tables from which I don't query any values, but that doesn't seem to be the case.

I have a bunch of tables with the following structure:

CREATE TABLE ind1 (year integer, id integer, ind1 float);
ALTER TABLE ind1 ADD PRIMARY KEY (year, id);
INSERT INTO ind1 VALUES (2000, 1, 0.0);
INSERT INTO ind1 VALUES (2000, 2, 0.3);
INSERT INTO ind1 VALUES (2000, 3, 1.1);
INSERT INTO ind1 VALUES (2001, 1, 0.0);
INSERT INTO ind1 VALUES (2001, 2, 0.3);
INSERT INTO ind1 VALUES (2001, 3, 1.1);
INSERT INTO ind1 VALUES (2002, 1, 0.0);
INSERT INTO ind1 VALUES (2002, 2, 0.3);
INSERT INTO ind1 VALUES (2002, 3, 1.1);
VACUUM ANALYZE ind1;

There is also ind2, ind3, ... . The set of possible year and id values is identical in all tables, and also given in an extra table:

CREATE TABLE id (id integer PRIMARY KEY);
INSERT INTO id VALUES (1);
INSERT INTO id VALUES (2);
INSERT INTO id VALUES (3);
VACUUM ANALYZE id;

Now I create a view to show all entries for a given year:

CREATE VIEW ind_2000 AS
  SELECT id, ind1, ind2, ind3
  FROM (SELECT id, 2000 AS year FROM id) T
  LEFT JOIN ind1 USING (year, id)
  LEFT JOIN ind2 USING (year, id)
  LEFT JOIN ind3 USING (year, id);

However, when selecting only the columns id, ind1 and ind2, still all tables are joined, although this is not necessary to perform the query correctly:

EXPLAIN ANALYZE SELECT id, ind1, ind2 FROM ind_2000;  

The entire code is in this gist.

How do I tell the optimizer to avoid the useless joins?

(Using PostgreSQL 8.4)

EDIT: The problem persists even when dropping the time dimension; see option1.sql in the gist.

Does the week day number changes according to localization in SQL Server?

Posted: 11 Jun 2013 03:27 PM PDT

Does the weekday number change according to server/database regional configuration in SQL Server (2005+)?
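In T-SQL the weekday number depends on the session's DATEFIRST setting, whose default comes from the login's language, so yes, it can vary with regional configuration. A quick demonstration (2013-06-11 was a Tuesday):

SET DATEFIRST 7;  -- week starts Sunday (U.S. English default)
SELECT DATEPART(WEEKDAY, '20130611');  -- returns 3

SET DATEFIRST 1;  -- week starts Monday (many European languages)
SELECT DATEPART(WEEKDAY, '20130611');  -- returns 2

SELECT @@DATEFIRST;  -- current setting

A DATEFIRST-independent expression such as (DATEPART(WEEKDAY, d) + @@DATEFIRST) % 7 is the usual workaround when code must behave identically everywhere.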

Convert the table PROCESSLIST from InnoDB to MYISAM

Posted: 11 Jun 2013 09:27 PM PDT

I use MySQL. My database's name is "db", and I have a table called "Test".

I want to recover the last update time, so I used this query:

SELECT update_time
FROM information_schema.tables
WHERE table_schema='db'
AND table_name='Test';

But the result is NULL, so I converted my Table from InnoDB to MYISAM by using this query:

mysql> ALTER TABLE db.Test ENGINE=MYISAM;  

and it worked.

But the problem is that when I want to convert the table PROCESSLIST from InnoDB to MYISAM I get this message:

mysql> use information_schema
Database changed
mysql> ALTER TABLE PROCESSLIST ENGINE=MYISAM;
ERROR 1044 (42000): Access denied for user 'root'@'localhost' to database 'information_schema'

So, is there another solution to know what is the last update for this table (PROCESSLIST)?

How to compare an attribute value of one table to all attribute values of another table?

Posted: 11 Jun 2013 09:29 PM PDT

I want to get all values of A.A and compare each to all values of B.A before determining whether or not to display the A record in the result set. I would only display an A record if the value of an individual A.A does not exist as any B.A.

Can anybody help me with this one?
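This is a standard anti-join. Two equivalent sketches using the question's table and column names:

-- NOT EXISTS form
SELECT A.*
FROM   A
WHERE  NOT EXISTS (SELECT 1 FROM B WHERE B.A = A.A);

-- LEFT JOIN form
SELECT A.*
FROM   A
LEFT JOIN B ON B.A = A.A
WHERE  B.A IS NULL;

Note that NOT IN misbehaves when B.A can contain NULLs, which is why the NOT EXISTS form is usually preferred.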

Automated query log

Posted: 11 Jun 2013 06:43 PM PDT

I would like to ask for some opinions, recommendations or guidelines on how to create an automatic logging system for the executed queries on a MySQL database.

Some details:

  • it's a PHP + MySQL project
  • multiple users can use the database, based upon a user table and an authentication system made in PHP (they share one MySQL user)
  • so far some query logging is implemented; it is not very efficient because every single logging procedure is hard-coded in PHP
  • I am looking for a way to create a general procedure to which I can pass the user id, with the rest done automatically
  • I need to store: previous value, new value, column name, table name, user name, query type, time stamp

I am familiar with TRIGGERS and STORED PROCEDURES, but I can't really figure out if that's the way, and if yes, how (I don't need code, just pure logic, although a usable minimalistic example would be welcomed).

Thank you.
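Since the application funnels everything through one MySQL user, triggers plus a session variable are indeed the usual pattern: PHP sets the application user's id once per connection, and per-table triggers read it. A minimal sketch with hypothetical table and column names:

CREATE TABLE audit_log (
  table_name  VARCHAR(64),
  column_name VARCHAR(64),
  old_value   TEXT,
  new_value   TEXT,
  app_user_id INT,
  query_type  VARCHAR(10),
  changed_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- PHP runs once after connecting: SET @app_user_id = 42;

DELIMITER //
CREATE TRIGGER customers_au AFTER UPDATE ON customers
FOR EACH ROW
BEGIN
  IF NOT (OLD.email <=> NEW.email) THEN
    INSERT INTO audit_log (table_name, column_name, old_value, new_value, app_user_id, query_type)
    VALUES ('customers', 'email', OLD.email, NEW.email, @app_user_id, 'UPDATE');
  END IF;
END//
DELIMITER ;

The catch is that MySQL triggers cannot be written generically: one trigger per table (and a branch per audited column) has to exist, so most people generate them with a script that reads information_schema.columns.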

How to/advice for hosting a multi-user database on the cloud with specific objectives

Posted: 11 Jun 2013 11:03 AM PDT

I have developed an application that users will be able to download and create an account with through our company. This application will always be connected to the internet. I have been testing the application with a local SQL database up to this point, but our goal is to have the database hosted online, and I need to start thinking big picture. I have come up with a couple of solutions; I would just like to hear some input/advice from someone who is more experienced in the database world.

Here is the problem:

Our users sell products and we are an order manager. The tables that are going to have lots of entries will be Orders, Shipments, Archived Orders, Archived Shipments, etc. (especially for large companies that have thousands of daily orders.) I am trying to decide the best way to organize the tables/database to accommodate the growth our company will receive once the product is released (a couple hundred customers, a couple thousand in a couple years, not unlikely.) Just off the top of my head, I have a couple ideas:

1) One large Orders table, one large Shipments table. These tables store the thousands of orders/shipments that are added daily, they can be queried based off some company index. These tables will obviously get large (hundreds of customers adding thousands of orders/shipments daily)

2) Individual Orders and Shipments tables for each customer. So the name of the table would essentially be the organizer for different companies orders/shipments. Is this a logical answer for a situation like this?

3) Some sort of system where say the first X customers orders/shipments are added to one database's tables, and then when we get more than X customers, another database/server would be created to divvy up the server power/storage.

Obviously you can tell I am not an expert and some of my questions may seem vague, but all I am looking for is some advice/possible solutions to move forward and start integrating my application with databases/servers on the cloud with the mindset that many customers will be added in a small period of time and I will need to approprately divide/add resources/servers/tables. Any input is appreciated.

How can I extract the table schema from just the .frm file?

Posted: 11 Jun 2013 11:03 AM PDT

I have extracted the data directory of mysql from a backup and need to get the schema from an old table, however it's a backup from a different machine.

I have read a fair number of tutorials today on how to do this, but each time it seems to fail or I end up having to reinstall mysql because mysql hangs or crashes. (Create a different database, create table with same name, replace file, stop/start engine, recover from frm. Tried in varying orders and combinations.)

Is there any external tool that can extract the schema from the .frm file? I can see the column names if I open up the file. I had a look but I can't seem to find anything that will enable me to do this.

Thanks in advance.

App can't find or connect to database

Posted: 11 Jun 2013 10:55 AM PDT

I'm trying to install a web app on my Ubuntu 12.04 server. The app is based on PHP and MySQL and I've got LAMP already working OK on my server - infact I've already got other apps working on PHP & MySQL already installed and working OK.

The problem I'm facing is, that when I try to run this app I get the error

Error: A Database connection using "Mysql" was missing or unable to connect. The database server returned this error: SQLSTATE[HY000] [2003] Can't connect to MySQL server on '5.175.145.251' (111)

I have double- and triple-checked the username and password that I've put into database.php and have confirmed that they are the same login credentials I use for phpMyAdmin. I'm even referring to the correct database (which, as can be seen in phpMyAdmin, has been set up correctly).

For the install, I have followed the instructions at http://sourceforge.net/p/coopos/wiki/Install/ The only place I have differed is, "Step III. Configure the Virtual Host" - reason being, I cannot move from "/var/www" to "/home/username/public_html" as I already have other apps running under "/var/www"

My installation of the app (with all the error messages) can be seen at

http://5.175.145.251/pos  

Would really appreciate some help on this, pls.

Is there a way, in a single SQL statement to ensure that all items in a list are present in some column?

Posted: 11 Jun 2013 01:01 PM PDT

What I'm looking for is a way of verifying that all of a list of items (let's say 'FOO', 'BAR' and 'BAZ') are all in a given table.column. This would be something like SELECT something FROM sometable WHERE somevalue IN ('THIS', 'THAT', 'OTHER THING') ... except it's sort of the inverse:

Ideally the statement would give me everything from ('FOO', 'BAR', 'BAZ') NOT IN sometable.somecolumn; then I could use that as my check (assert that my SQL query returns an empty set, or take the resulting set and complain that each of its members is an "unrecognized" key). (A brute-force approach would be to insert all of the terms/tags/items into a temporary, single-column table and perform a JOIN on my actual table's column for the set of rows which are NOT IN it.)

I'm sure I'm missing some syntactic trick that'll seem obvious once I see an example.

In this particular case my table structures are something like:

CREATE TABLE items (id     INTEGER PRIMARY KEY, item TEXT UNIQUE NOT NULL);
CREATE TABLE tags  (tag_id INTEGER PRIMARY KEY, tag  TEXT UNIQUE NOT NULL);
CREATE TABLE item_tag (item_id integer, tag_id integer,
                       FOREIGN KEY(item_id) REFERENCES items(id),
                       FOREIGN KEY(tag_id) REFERENCES tags(tag_id),
                       PRIMARY KEY (item_id, tag_id));

... but I'm simply trying to come up with a template for taking an arbitrary number of "tags" and ensure that all of them are valid entries the "tags.tag" column. (The application will have to support arbitrarily complex queries for subsets of these tags to return DISTINCT subsets of the items; but I want to raise an error on any non-existent tag before building the JOIN's WHERE expression).

Obviously I could just loop over the tags doing a separate SELECT tag_id FROM tags where tag=? ... entailing numerous round trips to the database. But this seems silly when I could send all of them to some sort of query in a single statement.
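The syntactic trick is to turn the literal list into a derived table and anti-join it. A sketch in standard SQL; the portable UNION ALL form is shown second because older SQLite versions lack a standalone VALUES table constructor:

-- Using a VALUES table constructor
SELECT candidate.tag
FROM (VALUES ('FOO'), ('BAR'), ('BAZ')) AS candidate(tag)
LEFT JOIN tags t ON t.tag = candidate.tag
WHERE t.tag_id IS NULL;

-- Portable form
SELECT candidate.tag
FROM (SELECT 'FOO' AS tag UNION ALL
      SELECT 'BAR' UNION ALL
      SELECT 'BAZ') AS candidate
LEFT JOIN tags t ON t.tag = candidate.tag
WHERE t.tag_id IS NULL;

An empty result means every tag in the list exists; any rows returned are exactly the unrecognized tags.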

Combining multiple queries

Posted: 11 Jun 2013 06:47 PM PDT

I have 2 queries, given below. Now I want one query combining the two, so as to get a table with the columns (Tag Number, Primary Name and Discovered Name).

Query 1:

select RT.[TagName] as 'Tag Number',
       RA.SourceName as 'Primary Document',
       RA.SourceRevision
from dbo.REP_TTA as RT
join dbo.REP_ASS as RA on RA.TargetName=RT.TagName
where RA.[AssocName]='is a document for'

Query 2:

select RT.[TagName] as 'Tag Number',
       RA.SourceName as 'Discovered Document',
       RA.SourceRevision
from dbo.REP_TransposedTagAttributes as RT
join dbo.REP_ASS as RA on RA.TargetName=RT.TagName
where RA.[AssociationName]='refers to'
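One way to combine them (a sketch; note the two originals use different table and column names — REP_TTA vs REP_TransposedTagAttributes, AssocName vs AssociationName — so adjust to whichever is actually correct) is to join REP_ASS twice, once per association type:

SELECT RT.[TagName]    AS 'Tag Number',
       prim.SourceName AS 'Primary Document',
       disc.SourceName AS 'Discovered Document'
FROM dbo.REP_TTA AS RT
LEFT JOIN dbo.REP_ASS AS prim
       ON prim.TargetName = RT.TagName
      AND prim.[AssocName] = 'is a document for'
LEFT JOIN dbo.REP_ASS AS disc
       ON disc.TargetName = RT.TagName
      AND disc.[AssocName] = 'refers to';

The LEFT JOINs keep tags that have only one of the two document types; switch to INNER JOINs if both must exist.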

Multiple table rows in one row

Posted: 11 Jun 2013 03:48 PM PDT

I have a table from which I want to get the classes where the 3rd-position candidate got more marks than the difference between the 1st and 2nd.

class  position  name  total_marks
10     1         a     100
10     2         b     200
10     3         c     150
11     1         d     300
11     2         e     400
11     3         f     200

I want to write a query to find this situation across all classes.
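Conditional aggregation handles this in one pass (a sketch, assuming the table is named results):

SELECT class
FROM   results
GROUP BY class
HAVING MAX(CASE WHEN position = 3 THEN total_marks END)
     > ABS(  MAX(CASE WHEN position = 1 THEN total_marks END)
           - MAX(CASE WHEN position = 2 THEN total_marks END));

With the sample data, both classes qualify: class 10 (150 > |100 - 200| = 100) and class 11 (200 > |300 - 400| = 100).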

How to partition a very large table?

Posted: 11 Jun 2013 01:16 PM PDT

I have a very big table. I want to partition it, but I cannot.

Database size: 1TB, Free space 200GB

Table:

  • Size: 165 columns (row lengh 4216 KB, no LOBs), 500 million rows, 600GB of data.
  • Possible partitioning: a day per partition
  • Number of rows per day/parition: 2 million

In order to partition it, I need to create a clustered index. But creating the partition requires free space of the same size as the table, and I do not have an extra 600 GB.

Are there any options for how I can partition this table?

EDIT 1:

I have tried copying the data into a separate table.

However, when I try to DELETE (or INSERT) one day of data into another table, I get an error that the transaction log is full and my transaction is being rolled back. My transaction log is approximately 20 GB and I cannot make it any bigger.
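Log-full errors on big single-statement deletes are usually avoided by working in small committed batches, so log space can be reused between chunks (under SIMPLE recovery, or with frequent log backups under FULL). A sketch with hypothetical table names; the archive table must have matching columns:

WHILE 1 = 1
BEGIN
    -- Move one day of data in chunks of 50,000 rows, committing each chunk
    DELETE TOP (50000)
    FROM dbo.BigTable
    OUTPUT deleted.* INTO dbo.BigTable_Archive
    WHERE LoadDate = '20130601';

    IF @@ROWCOUNT = 0 BREAK;
END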

MySQL Dumped data not displayed

Posted: 11 Jun 2013 01:06 PM PDT

I am using MySQL 5.0.88. I imported a 20 GB dump file on my local machine. After a successful import I check my DB, but there are no records in it. When I look at the data directory, the ibdata1 file size is 30 GB. There is no other database inside the data directory. How do I get to the dumped data?

Edit by RolandoMySQLDBA

Request #1 : Please run this query and post its output

SELECT table_schema DB, SUM(data_length+index_length) InnoDBSpace
FROM information_schema.tables
WHERE engine='InnoDB'
GROUP BY table_schema;

The output was

+------------------------+-------------+  | DB                     | InnoDBSpace |  +------------------------+-------------+  | sample_development |    18513920     |   | sample_production  |    18857984     |   +------------------------+-------------+  

Request #2 : Please run these queries and post their output

SHOW VARIABLES LIKE 'innodb%';  SHOW VARIABLES LIKE 'datadir';    mysql> SHOW VARIABLES LIKE 'innodb%';  +-----------------------------------------+------------------------+  | Variable_name                           | Value                  |  +-----------------------------------------+------------------------+  | innodb_additional_mem_pool_size         | 1048576                |   | innodb_autoextend_increment             | 8                      |   | innodb_buffer_pool_awe_mem_mb           | 0                      |   | innodb_buffer_pool_size                 | 8388608                |   | innodb_checksums                        | ON                     |   | innodb_commit_concurrency               | 0                      |   | innodb_concurrency_tickets              | 500                    |   | innodb_data_file_path                   | ibdata1:10M:autoextend |   | innodb_data_home_dir                    |                        |   | innodb_adaptive_hash_index              | ON                     |   | innodb_doublewrite                      | ON                     |   | innodb_fast_shutdown                    | 1                      |   | innodb_file_io_threads                  | 4                      |   | innodb_file_per_table                   | OFF                    |   | innodb_flush_log_at_trx_commit          | 1                      |   | innodb_flush_method                     |                        |   | innodb_force_recovery                   | 0                      |   | innodb_lock_wait_timeout                | 50                     |   | innodb_locks_unsafe_for_binlog          | OFF                    |   | innodb_log_arch_dir                     |                        |   | innodb_log_archive                      | OFF                    |   | innodb_log_buffer_size                  | 1048576                |   | innodb_log_file_size                    | 5242880                |   | innodb_log_files_in_group               | 2                      |   | innodb_log_group_home_dir               | ./                     |   | innodb_max_dirty_pages_pct              | 90                     |   | innodb_max_purge_lag                    | 0                      |   | innodb_mirrored_log_groups              | 1                      |   | innodb_open_files                       | 300                    |   | innodb_rollback_on_timeout              | OFF                    |   | innodb_support_xa                       | ON                     |   | innodb_sync_spin_loops                  | 20                     |   | innodb_table_locks                      | ON                     |   | innodb_thread_concurrency               | 8                      |   | innodb_thread_sleep_delay               | 10000                  |   | innodb_use_legacy_cardinality_algorithm | ON                     |   +-----------------------------------------+------------------------+      mysql> SHOW VARIABLES LIKE 'datadir';  +---------------+-----------------+  | Variable_name | Value           |  +---------------+-----------------+  | datadir       | /var/lib/mysql/ |   +---------------+-----------------+  

Request #3 : Please run this query and post its output

I need to see the top of the import file. Please run this in the OS and post its output

# head -30 dump.sql    -> head -30 db.sql  -- MySQL dump 10.11  --  -- Host: localhost    Database: sample_production  -- ------------------------------------------------------  -- Server version   5.0.88-community    /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;  /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;  /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;  /*!40101 SET NAMES utf8 */;  /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;  /*!40103 SET TIME_ZONE='+00:00' */;  /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;  /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;  /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;  /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;    --  -- Table structure for table `GODZIRRA`  --    DROP TABLE IF EXISTS `GODZIRRA`;  /*!40101 SET @saved_cs_client     = @@character_set_client */;  /*!40101 SET character_set_client = utf8 */;  CREATE TABLE `GODZIRRA` (    `id` int(11) NOT NULL default '0'  ) ENGINE=MyISAM DEFAULT CHARSET=utf8;  /*!40101 SET character_set_client = @saved_cs_client */;  

Request #4 : Please show the mysqld process from the command line # ps -ef | grep mysqld | grep -v grep

-> ps -ef | grep mysqld | grep -v grep  root     27852     1  0 Jun10 ?        00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.localdomain.pid  mysql    27876 27852  0 Jun10 ?        00:00:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/localhost.localdomain.pid --skip-external-locking  

Request #5 : Please run this query as shown

SELECT      IFNULL(ENGINE,'Total') "Storage Engine",      LPAD(CONCAT(FORMAT(DAT/POWER(1024,pw1),2),' ',SUBSTR(units,pw1*2+1,2)),17,' ') Data,      LPAD(CONCAT(FORMAT(NDX/POWER(1024,pw2),2),' ',SUBSTR(units,pw2*2+1,2)),17,' ') Indexes,      LPAD(CONCAT(FORMAT(TBL/POWER(1024,pw3),2),' ',SUBSTR(units,pw3*2+1,2)),17,' ') Total  FROM  (      SELECT ENGINE,DAT,NDX,TBL,IF(px>4,4,px) pw1,IF(py>4,4,py) pw2,IF(pz>4,4,pz) pw3      FROM       (          SELECT *,              FLOOR(LOG(IF(DAT=0,1,DAT))/LOG(1024)) px,              FLOOR(LOG(NDX)/LOG(1024)) py,              FLOOR(LOG(TBL)/LOG(1024)) pz          FROM          (              SELECT                  ENGINE,                  SUM(data_length) DAT,                  SUM(index_length) NDX,                  SUM(data_length+index_length) TBL              FROM              (                 SELECT engine,data_length,index_length FROM                 information_schema.tables WHERE table_schema NOT IN                 ('information_schema','performance_schema','mysql')                 AND ENGINE IS NOT NULL              ) AAA              GROUP BY ENGINE WITH ROLLUP          ) AAA      ) AA  ) A,(SELECT ' BKBMBGBTB' units) B;  

Backup not creating a new file

Posted: 11 Jun 2013 11:23 AM PDT

I run a daily backup of my database through command prompt:

c:\sqlcmd -H localhost -Q "BACKUP DATABASE test TO DISK='c:\test.bak'"  

But each new backup replaces the previous backup. Please tell me what to change so that all backups are kept on disk.
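Backing up to a fixed file name reuses the same backup file; the usual fix for daily jobs is to put the date in the name. A sketch:

-- Build a dated file name such as c:\test_20130612.bak
DECLARE @file nvarchar(260);
SET @file = N'c:\test_' + CONVERT(char(8), GETDATE(), 112) + N'.bak';

BACKUP DATABASE test
TO DISK = @file
WITH INIT;  -- each day gets its own file, so INIT is safe here

The same statement can be embedded in the sqlcmd -Q string, or the whole thing scheduled through SQL Server Agent instead of the Windows scheduler.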

Database Mail sending functionality not working on local system

Posted: 11 Jun 2013 12:45 PM PDT

I am using Database Mail functionality to send mail from a SQL Server 2008 database via following stored procedure execution:

EXEC sp_send_dbmail
     @profile_name = 'MyProfile',
     @recipients   = 'abc@companyname.com',
     @subject      = 'Test message',
     @body         = 'Congrats Database Mail Received By you Successfully.'

I have tried with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile.

Error message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 . Exception Message: Could not connect to mail server. (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond abc.j.i.ooo:pp). )

Reference

http://blogs.msdn.com/b/suhde/archive/2009/07/12/how-to-configure-sql-server-database-mail-to-send-email-using-your-windows-live-mail-account-or-your-gmail-account.aspx  

What would be the problem?

Thanks

In the Vertica database, what is a namespace?

Posted: 11 Jun 2013 10:29 AM PDT

In the Vertica database, what does the term "namespace" mean?

I have reviewed the entire Vertica documentation and cannot find what this means.

createdb: could not connect to database postgres: FATAL: could not write init file

Posted: 11 Jun 2013 07:26 PM PDT

RedHat Enterprise Server 3.0 32 Bits

psql (PostgreSQL) 8.2.3

user: postgres

server is running:

/soft/postgres/8.2.3/bin/pg_ctl start  pg_ctl: another server may be running; trying to start server anyway  2013-05-09 11:23:07 BRST---:FATAL:  lock file "postmaster.pid" already exists  2013-05-09 11:23:07 BRST---:HINT:  Is another postmaster (PID 12810) running in data directory "/opt/psql/dba/bdadms/data1/pstg"?  pg_ctl: could not start server  Examine the log output.  

I had just created a new database cluster with initdb; but when I run createdb:

8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb pstg  createdb: could not connect to database postgres: FATAL:  could not write init file  8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb postgres  createdb: could not connect to database template1: FATAL:  could not write init file  8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb template1  createdb: could not connect to database postgres: FATAL:  could not write init file  

any clues as to the cause and possible solutions to this problem?

Repeated values in group_concat

Posted: 11 Jun 2013 03:47 PM PDT

I have two tables, first the table food and Second is Activity:

-- (The CREATE TABLE for food was missing from the post; assuming this definition:)
CREATE TABLE food
    (`id` int, `foodName` varchar(14))
;
INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6')
;
CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME)
;
INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03')
;

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ',  ') USING utf8) AS Categories
FROM
    food AS p
JOIN
    ( SELECT activity_type
           , prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type
             , prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC ;

Could you please help me, I need output in the below format:

place  | food_id | Time1   | Time2   | Time3  | Date (MAX timestamp)
-------+---------+---------+---------+--------+---------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm | 2013-05-01
place2 | 1       | 10:30am | 12:30am | 4:30am | 2013-05-01
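GROUP_CONCAT with DISTINCT and ORDER BY collapses the repeated values, and MAX gives the latest timestamp per group. A sketch against the Activity table as posted:

SELECT place,
       food_id,
       GROUP_CONCAT(DISTINCT timing ORDER BY timing SEPARATOR ' | ') AS timings,
       MAX(date_and_time) AS latest_date
FROM Activity
GROUP BY place, food_id;

Splitting the concatenated timings into fixed Time1/Time2/Time3 columns would take SUBSTRING_INDEX or conditional aggregation, since MySQL has no native pivot.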

Moving one TempdB on 3 instance server

Posted: 11 Jun 2013 08:53 PM PDT

I have a large virtual SQL Server (Full 2008R2).

I run 3 SQL instances and would like to relocate the TempdB database file to another location, splitting off from the TempdB log file.

The trouble I am having is that even though the ALTER DATABASE query executes successfully, restarting that particular instance's SQL Server service does not relocate the database file.

Do I have to restart the whole server to move this database?
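For reference, these are the statements that should do it (a sketch with hypothetical paths and the default logical file names); the files are recreated at the new location on the next restart of that instance's service only, so a full-server restart should not be needed:

USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\TempDBLog\templog.ldf');
GO
-- Verify the pending paths before restarting the service
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

A common gotcha on multi-instance servers is running the ALTER against (or restarting) a different instance than intended, or the service account lacking rights to the new folder.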

MySQL PDO Cannot assign requested address

Posted: 11 Jun 2013 01:26 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]      Cannot assign requested address (trying to connect via tcp://****:3306) in       /var/www/***  [08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception       'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested       address' in /var/www/***  

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.

Alternative tools to export Oracle database to SQL Server?

Posted: 11 Jun 2013 03:26 PM PDT

I've got an Oracle database that I need to export (schema and data) to SQL Server.

I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly, e.g. I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, then when it failed, the 5 minutes of preparatory work were wasted.

Right now, I'm just trying to connect to another Oracle DB using this tool, I left it overnight and came back this morning, and it's still stuck on 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB currently in use. Task Manager shows me that SSMAforOracle.exe is using 0 CPU, 0 PF Delta, and no change whatsoever in memory usage. In other words: frozen stiff. Absolutely intolerable.

Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 11 Jun 2013 06:19 PM PDT

This is a problem in PostgreSQL

I have a table which stores the tree of users;

      +------+---------+      |  id  | parent  |      |------+---------|      |  1   |   0     |      |------|---------|      |  2   |   1     |      |------|---------|      |  3   |   1     |      |------|---------|      |  4   |   2     |      |------|---------|      |  5   |   2     |      |------|---------|      |  6   |   4     |      |------|---------|      |  7   |   6     |      |------|---------|      |  8   |   6     |      +------+---------+  

I can query a complete tree from any node by using the connectby function, and I can separately query the size of tree in terms of total nodes in it, for example

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like selecting all possible trees from this table (which is again carried out by connectby), count the size of each, and create another dataset with records of the ID and the size of the underlying tree, like this:

      +------------------+-------------+      |  tree_root_node  |  tree_size  |      |------------------+-------------|      |      1           |     7       |      |------------------+-------------|      |      2           |     3       |      |------------------+-------------|      |      3           |     0       |      |------------------+-------------|      |      4           |     3       |      |------------------+-------------|      |      5           |     0       |      |------------------+-------------|      |      6           |     2       |      |------------------+-------------|      |      7           |     0       |      |------------------+-------------|      |      8           |     0       |      +------------------+-------------+  

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch the tree and calculate the size, and even if I could, I don't know how to create a separate dataset using the fetched and calculated data. I am not sure if this could be a simple use of some functions available in Postgres, or whether I'd have to write a function for it; I simply don't know what exactly this kind of query is called, and googling for hours and searching for another hour here at dba.stackexchange returned nothing.

I am not sure if this could be simple use of some functions available in Postgres or I'd have to write a function for it or simply I dont know what exactly is this kind of query is called but googling for hours and searching for another hour over here at dba.stackexchange returned nothing.

Can someone please point me in the right direction?
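PostgreSQL 8.4 supports WITH RECURSIVE, which can compute every subtree size in one statement without connectby. A sketch, assuming the table is named nodes(id, parent) and tree_size counts all descendants (matching the sizes quoted in the prose, e.g. 7 for node 1):

WITH RECURSIVE subtree(root, id) AS (
    SELECT id, id FROM nodes          -- every node seeds its own tree
    UNION ALL
    SELECT s.root, n.id
    FROM   nodes n
    JOIN   subtree s ON n.parent = s.id
)
SELECT root         AS tree_root_node,
       COUNT(*) - 1 AS tree_size      -- subtract the root itself
FROM   subtree
GROUP BY root
ORDER BY root;

(Note this yields 5 for node 2 — descendants 4, 5, 6, 7 and 8 — rather than the 3 shown in the example table, so the desired output may need rechecking.)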

List all permissions for a given role?

Posted: 11 Jun 2013 08:09 PM PDT

I've searched around all over and haven't found a conclusive answer to this question.

I need a script that can give ALL permissions for an associated role.

Any thoughts, or is it even possible?

This gets me CLOSE - but I can't seem to flip it around and give the summary for roles, rather than users.

http://consultingblogs.emc.com/jamiethomson/archive/2007/02/09/SQL-Server-2005_3A00_-View-all-permissions--_2800_2_2900_.aspx

WITH perms_cte AS
(
    SELECT USER_NAME(p.grantee_principal_id) AS principal_name,
           dp.principal_id,
           dp.type_desc AS principal_type_desc,
           p.class_desc,
           OBJECT_NAME(p.major_id) AS object_name,
           p.permission_name,
           p.state_desc AS permission_state_desc
    FROM   sys.database_permissions p
    INNER JOIN sys.database_principals dp
           ON p.grantee_principal_id = dp.principal_id
)
--role members
SELECT rm.member_principal_name, rm.principal_type_desc, p.class_desc,
       p.object_name, p.permission_name, p.permission_state_desc, rm.role_name
FROM   perms_cte p
RIGHT OUTER JOIN (
    SELECT role_principal_id, dp.type_desc AS principal_type_desc,
           member_principal_id, USER_NAME(member_principal_id) AS member_principal_name,
           USER_NAME(role_principal_id) AS role_name --,*
    FROM   sys.database_role_members rm
    INNER JOIN sys.database_principals dp
           ON rm.member_principal_id = dp.principal_id
) rm
ON rm.role_principal_id = p.principal_id
ORDER BY 1
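To summarize by role instead of by member, the permissions granted directly to role principals can be queried from the same catalog views (a sketch; the role-name filter is hypothetical and can be removed to list all roles):

SELECT dp.name                 AS role_name,
       p.class_desc,
       OBJECT_NAME(p.major_id) AS object_name,
       p.permission_name,
       p.state_desc            AS permission_state
FROM   sys.database_principals dp
JOIN   sys.database_permissions p
       ON p.grantee_principal_id = dp.principal_id
WHERE  dp.type = 'R'                 -- database roles only
  AND  dp.name = 'db_mychosenrole'
ORDER BY object_name, permission_name;

Fixed roles (db_datareader etc.) won't show rows here, because their permissions are hard-wired rather than stored in sys.database_permissions.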

Named Pipe Provider Error code 40

Posted: 11 Jun 2013 06:26 PM PDT

I have literally tried everything, from enabling named pipes to adding exceptions for the ports in the firewall, to everything possible in Surface Area Configuration. I can connect to the SQL instance (using TCP and Named Pipes) with SQL Server Management Studio, but sqlcmd throws an error:

Login timeout expired  

Help!

ERWIN create and use database

Posted: 11 Jun 2013 11:29 AM PDT

Im trying to generate physical database from my logical model using ERWIN 7.

All is well in the script that is generated automatically, there is only the problem that the model is created in the database "master" which is the default SQL Server, rather than being created in the database that I inserted along with the model.

I cant find a way to associate my MODEL with the database im creating along with the model.

Here is the script that ERWIN auto-generates:

CREATE DATABASE Movies
go
ALTER DATABASE Movies SET RECOVERY FULL
go
CREATE SCHEMA fbd AUTHORIZATION dbo
go
CREATE DEFAULT Default_Value_movies_gen AS 1
go
CREATE DEFAULT Default_Value_titulo_movies AS 'No title'
go

BUT it should generate automatically:

CREATE DATABASE Movies
go
--Here should be "USE Movies", to use the database that ERwin has just created
ALTER DATABASE Movies SET RECOVERY FULL
go
CREATE SCHEMA fbd AUTHORIZATION dbo
go
CREATE DEFAULT Default_Value_movies_gen AS 1
go
CREATE DEFAULT Default_Value_titulo_movies AS 'No title'
go

Any ideas?

Restore SQL Server 2012 backup to a SQL Server 2008 database?

Posted: 11 Jun 2013 08:23 PM PDT

Is there a way to restore a SQL Server 2012 database backup to a SQL Server 2008?

I tried attaching the files instead, but that does not work either.

How to recover MySQL table structure from FRM files

Posted: 11 Jun 2013 03:48 PM PDT

What I have learned from Google searching:

With MyISAM tables you can simply copy the FRM, MYD, and MYI files. In the folder for the DB I'm trying to recover, there are only FRM files and a "db.opt" file, so they must not be MyISAM tables.

Situation:

My computer crashed, but the data is still on the hard drive. The MySQL 5.5.8 server was installed with WAMP on the crashed computer.

I have tried copying the entire data folder over, but MySQL only creates a blank database - no tables.

There is an FRM file for each of the tables I wish to recover. However, they are not readable text files. Is there any way to recover the table structure from these files?

This post http://stackoverflow.com/a/7688688/1188138 indicates that the structure is in fact in those files, but does not provide a method of retrieval.

Thank you so much - I have tried everything I can think of.

INFORMATION_SCHEMA Selection Error

Posted: 11 Jun 2013 02:17 PM PDT

I'm trying to select data from information_schema but I'm getting the following error. How can I fix this?

mysql> SELECT * FROM information_schema.tables ;  ERROR 1018 (HY000): Can't read dir of '.' (errno: 13)  

Moving tables to another SQL2008 database (including indexes, triggers, etc.)

Posted: 11 Jun 2013 09:17 AM PDT

I need to move a whole bunch (100+) of large (millions of rows) tables from one SQL2008 database to another.

I originally just used the Import/Export Wizard, but all the destination tables were missing primary and foreign keys, indexes, constraints, triggers, etc. (Identity columns were also converted to plain INTs, but I think I just missed a checkbox in the wizard.)

What's the right way to do this?

If this were just a couple of tables, I would go back to the source, script out the table definition (with all indexes, etc), then run the index creation portions of the script on the destination. But with so many tables, this seems impractical.

If there wasn't quite so much data, I could use the "Create Scripts..." wizard to script out the source, including data, but a 72m row script just doesn't seem like a good idea!

[SQL Server] Foreign Key Error

Foreign Key Error

Posted: 11 Jun 2013 04:56 AM PDT

Keep getting this error:

Msg 1776, Level 16, State 0, Line 1
There are no primary or candidate keys in the referenced table 'tbl_property' that match the referencing column list in the foreign key 'fk_Trace_num'.
Msg 1750, Level 16, State 0, Line 1
Could not create constraint. See previous errors.

I have created two tables and am trying to create a relationship between them, but I somehow keep getting the above error. I have created a composite key in one table and now want to create a foreign key referencing it in the other table. Kindly advise.
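A foreign key that references a composite key must list all of its columns, in the same order and with matching data types; Msg 1776 typically means the FK lists only part of the composite key. A sketch with hypothetical column names:

[code="sql"]
-- Referenced table: composite primary key
CREATE TABLE tbl_property (
    property_id int NOT NULL,
    trace_num   int NOT NULL,
    CONSTRAINT pk_property PRIMARY KEY (property_id, trace_num)
);

-- The referencing table must supply BOTH columns
CREATE TABLE tbl_trace (
    trace_id    int IDENTITY PRIMARY KEY,
    property_id int NOT NULL,
    trace_num   int NOT NULL,
    CONSTRAINT fk_Trace_num FOREIGN KEY (property_id, trace_num)
        REFERENCES tbl_property (property_id, trace_num)
);
[/code]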

select not working

Posted: 10 Jun 2013 03:29 PM PDT

Dear All, in a procedure I am using a Remark table to capture all the steps executed (instead of PRINT). From another session, when I try to SELECT * FROM Remark, it does not show results. I need to view details from this table to monitor the progress of the stored procedure. In the procedure I have used COMMIT statements after a few of the inserts. What do I need to do to view the results? Regards
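If the procedure's outer transaction is still open, another session's SELECT under the default READ COMMITTED level is blocked by the uncommitted inserts. To peek at in-flight progress rows, a dirty read works (a sketch; be aware it returns uncommitted data):

[code="sql"]
-- Option 1: table hint
SELECT * FROM dbo.Remark WITH (NOLOCK);

-- Option 2: session-level setting
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Remark;
[/code]

If the inserts genuinely commit while the procedure runs (no enclosing transaction), they would be visible normally, so an enclosing BEGIN TRAN is the first thing to check for.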

[Articles] The Watson Service

The Watson Service

Posted: 10 Jun 2013 11:00 PM PDT

IBM's Watson project continues to grow and the latest implementation might be one that affects many of us in our daily lives.
