Sunday, September 15, 2013

[how to] MySQL: Alter Table on very active table gives "Waiting for table metadata lock"


Posted: 15 Sep 2013 06:44 PM PDT

At my work, we've recently upgraded to MySQL 5.5 and since then have been bitten a few times by the new (and intended) behavior of metadata locking. Whenever we run an ALTER TABLE statement on a fairly active table, the table becomes inaccessible to both reads and writes, and threads hang, essentially forever, with the "Waiting for table metadata lock" message.

I've done a bit of research and found good explanations of this in the MySQL manual and elsewhere, and I see that this is expected and, in fact, intended behavior. What I haven't found anywhere online is a graceful way to mitigate the obvious problems this causes in production. The only way we have been able to run ALTER TABLE statements on our more active tables since upgrading is by disabling read and write permissions for any usernames our application runs under, killing all processes that are still running, and then running the ALTER TABLE statements. This works, but results in many errors from our app, both from background processes and our web site.

In the past (i.e., before 5.5) schema changes were no problem. I understand the reason for the new behavior, but it causes obvious problems in production, and I figure folks in the community must have workarounds.

Any ideas would be appreciated. Thanks!
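A sketch of the quiesce-and-alter procedure described above; the account, database, table, and connection ID below are hypothetical placeholders, not values from the original post:

```sql
-- 1. Block new application traffic (privileges take effect immediately):
REVOKE SELECT, INSERT, UPDATE, DELETE ON app_db.* FROM 'app_user'@'%';

-- 2. Find and kill connections still holding or waiting on metadata locks:
SHOW FULL PROCESSLIST;   -- note the Id values of app_user sessions
KILL 12345;              -- repeat for each remaining connection id

-- 3. Run the schema change while the table is quiet:
ALTER TABLE app_db.busy_table ADD COLUMN new_col INT NULL;

-- 4. Restore permissions:
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'%';
```

For busy tables, external tools such as Percona's pt-online-schema-change take a copy-and-swap approach (new table plus triggers) that can avoid the outage entirely.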

Dividing comma-separated strings into columns

Posted: 15 Sep 2013 06:51 PM PDT

I have a problem with dividing comma-separated values.

  • I have 2 columns with comma-separated values

For ex:

  ID      Name
  1,2,3   Ab,cd,ef

I want the columns divided as

  ID    Name
  1     ab
  2     cd
  3     ef

I used

xmltable('r/c' passing xmltype('<r><c>' || replace(ID, ',', '</c><c>') || '</c></r>') columns ID_NEW varchar2(400) path '.')

Output was

  ID           Name 
1     ab,cd,ef
2     ab,cd,ef
3     ab,cd,ef

I'm using Oracle 10g.
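Not from the original post, but a common 10g-compatible idiom splits both lists positionally with regexp_substr and a generated row source; the table name t and the cap of 100 elements are assumptions:

```sql
-- Split both comma lists by position; pairs element n of ID with element n of Name.
SELECT regexp_substr(t.id,   '[^,]+', 1, n.lvl) AS id,
       regexp_substr(t.name, '[^,]+', 1, n.lvl) AS name
FROM   t,
       (SELECT LEVEL AS lvl FROM dual CONNECT BY LEVEL <= 100) n
WHERE  n.lvl <= length(t.id) - length(replace(t.id, ',', '')) + 1;
```

The CONNECT BY LEVEL <= 100 cap is arbitrary; it only needs to exceed the longest list (the length-based predicate avoids regexp_count, which is 11g-only).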

Measuring SQL execution time in PostgreSQL?

Posted: 15 Sep 2013 09:05 PM PDT

MySQL has a performance_schema database that allows one to capture SQL statement execution time data in a table (e.g. performance_schema.events_statements_history_long).

I was wondering if a similar set of tools existed in PostgreSQL?
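One commonly used option (assuming you can install contrib modules and restart the server) is the pg_stat_statements extension, which aggregates execution time per normalized statement; a sketch:

```sql
-- Requires pg_stat_statements in shared_preload_libraries (postgresql.conf),
-- then, once per database:
CREATE EXTENSION pg_stat_statements;

-- Top statements by cumulative execution time (total_time is in milliseconds):
SELECT query, calls, total_time, total_time / calls AS avg_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```

Lighter-weight alternatives are log_min_duration_statement (logs every statement slower than a threshold) and EXPLAIN ANALYZE for individual queries.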

How to avoid duplicate entries in SELECT statement SQL?

Posted: 15 Sep 2013 02:22 PM PDT

+----+----------------------+---------+---------+---------------------+-------+
| id | translation          | id_word | id_user | added               | tuser |
+----+----------------------+---------+---------+---------------------+-------+
| 17 | допомагати           |       4 |       1 | 2013-08-29 14:52:20 |     2 |
| 17 | допомагати           |       4 |       1 | 2013-08-29 14:52:20 |     1 |
|  5 | когось               |       1 |       1 | 2013-08-27 23:35:09 |     1 |
|  4 | хто-небудь           |       1 |       1 | 2013-08-27 23:35:09 |  NULL |
|  1 | хтось                |       1 |       1 | 2013-08-27 23:34:17 |     2 |
|  1 | хтось                |       1 |       1 | 2013-08-27 23:34:17 |     1 |
+----+----------------------+---------+---------+---------------------+-------+

As you can see, I have duplicate entries in id. I have to avoid duplicate entries if:

If there is a row with tuser = 1, then I have to remove the other entries with the same id, id_user, and id_word from the result. If no row in a given (id, id_user, id_word) group has tuser = 1, all of its rows must be shown.

I need only one unique row with tuser = 1 for each unique combination of id, id_word, and id_user.
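One way to express the rule (keep the tuser = 1 row of a (id, id_word, id_user) group when it exists, otherwise keep every row of the group) is an anti-join; a runnable sketch using SQLite with the sample data, where the table name is made up:

```python
import sqlite3

# Reproduce the sample data and filter it with an anti-join: keep rows with
# tuser = 1, plus all rows of groups that contain no tuser = 1 row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE words (id INT, translation TEXT, id_word INT, id_user INT,
                    added TEXT, tuser INT);
INSERT INTO words VALUES
 (17, 'допомагати', 4, 1, '2013-08-29 14:52:20', 2),
 (17, 'допомагати', 4, 1, '2013-08-29 14:52:20', 1),
 (5,  'когось',     1, 1, '2013-08-27 23:35:09', 1),
 (4,  'хто-небудь', 1, 1, '2013-08-27 23:35:09', NULL),
 (1,  'хтось',      1, 1, '2013-08-27 23:34:17', 2),
 (1,  'хтось',      1, 1, '2013-08-27 23:34:17', 1);
""")

rows = conn.execute("""
SELECT * FROM words w
WHERE w.tuser = 1
   OR NOT EXISTS (SELECT 1 FROM words x
                  WHERE x.id = w.id AND x.id_word = w.id_word
                    AND x.id_user = w.id_user AND x.tuser = 1)
ORDER BY w.id DESC
""").fetchall()

for r in rows:
    print(r[0], r[5])
```

The same WHERE ... OR NOT EXISTS pattern works unchanged in MySQL.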

get time format in postgresql

Posted: 15 Sep 2013 03:36 PM PDT

If I want get ubuntu system time, I have two options:

$ cat /etc/timezone
US/Eastern

$ date
Sun Sep 15 14:45:02 EDT 2013

How can I find out whether PostgreSQL is using UTC or a UTC offset, such as US/Eastern?
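A sketch of asking the server directly for its effective time zone setting:

```sql
SHOW timezone;                       -- e.g. US/Eastern, or UTC
SELECT current_setting('TimeZone');  -- same value, usable inside queries
SELECT now();                        -- a timestamptz rendered in that zone
```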

Specified key was too long; max key length is 1000 bytes in mysql 5.6

Posted: 15 Sep 2013 08:40 PM PDT

One of our application servers creates its database in MySQL internally, but whenever the following CREATE TABLE command gets executed,

CREATE TABLE ofRoster (
    rosterID   BIGINT        NOT NULL,
    username   VARCHAR(64)   NOT NULL,
    jid        VARCHAR(1024) NOT NULL,
    sub        TINYINT       NOT NULL,
    ask        TINYINT       NOT NULL,
    recv       TINYINT       NOT NULL,
    nick       VARCHAR(255),
    PRIMARY KEY (rosterID),
    INDEX ofRoster_unameid_idx (username),
    INDEX ofRoster_jid_idx (jid)
)

I get the following error:

ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes  

I set my default engine to MyISAM because I was getting the following error with InnoDB:

specified key was too long max key length is 767 bytes  

My current engines are as follows:

+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                        | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| FEDERATED          | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
| MRG_MYISAM         | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
| MyISAM             | DEFAULT | MyISAM storage engine                                          | NO           | NO   | NO         |
| BLACKHOLE          | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
| CSV                | YES     | CSV storage engine                                             | NO           | NO   | NO         |
| MEMORY             | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
| ARCHIVE            | YES     | Archive storage engine                                         | NO           | NO   | NO         |
| InnoDB             | YES     | Supports transactions, row-level locking, and foreign keys     | YES          | YES  | YES        |
| PERFORMANCE_SCHEMA | YES     | Performance Schema                                             | NO           | NO   | NO         |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+

Now I really don't know how to get rid of this, as the application server itself creates the database in MySQL automatically, so I don't have control over the DDL.

I am using Server version: 5.6.10 MySQL Community Server (GPL).
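The failing index is the one on jid VARCHAR(1024): at 3 bytes per character (utf8) that key is about 3072 bytes, over both the 1000-byte MyISAM limit and the 767-byte InnoDB limit. If you can adjust the DDL, or pre-create the table before the application does, a prefix index is one way out; the prefix length here is an assumption:

```sql
-- Sketch: index only the first 255 characters of jid so the key fits under
-- InnoDB's 767-byte limit (255 chars x 3 bytes = 765). Pick a prefix long
-- enough to stay selective for your data.
ALTER TABLE ofRoster
    DROP INDEX ofRoster_jid_idx,
    ADD INDEX ofRoster_jid_idx (jid(255));
```

Alternatively, MySQL 5.6's innodb_large_prefix option (with the DYNAMIC or COMPRESSED row format) raises the InnoDB index key limit to 3072 bytes.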

How is "BIG DATA or Hadoop" in current IT industry? [on hold]

Posted: 15 Sep 2013 10:48 AM PDT

I would like to take a course and certification in big data. However, I would first like to get the opinion of database experts like you.

  • Is it a good idea to go with big data? (I heard from one of my friends that it's going to be popular like SAP; however, I'm not sure about that.)
  • How can I get study materials or video tutorials for big data?
  • As it is a new technology, I don't think there are any specific sites for interview preparation. Does anyone know of any?
  • What is the future of big data?

SQL Server 2005 - Query optimization for fetching large number of rows from table with 750 million rows

Posted: 15 Sep 2013 10:20 AM PDT

Brief on application:

This is an audio fingerprinting application, being developed in Java with a Microsoft SQL Server 2005 database.

I have one application that creates fingerprints of original songs and puts them in the database. To store fingerprints I have this table:

CREATE TABLE [dbo].[fp_core](
    [hashkey] [bigint] NOT NULL,
    [note_id] [int] NOT NULL,
    [timeoffset] [int] NOT NULL
) ON [PRIMARY]

The application processes a song and takes 100 samples per second, so around 15,000 samples for a complete song. These sample values are stored in the database, one row per sample, as {HASHKEY, NOTE_ID, TIMEOFFSET}. The fingerprint of a complete song may therefore be around 15,000 rows in the fp_core table. I am planning to put fingerprints of 50,000 songs in the database, so around 750 million rows will be in the fp_core table.

I have another application that processes recordings and detects the songs played in them. The process is: create a set of HASHKEYs from the recorded audio, the same as when fingerprinting an original song. A recording will generate around 20,000-30,000 HASHKEYs. The application then retrieves all rows from the fp_core table whose HASHKEY matches one generated from the recording.

To retrieve the data from the fp_core table, I first fill all of the recording's HASHKEYs into one more table:

CREATE TABLE [dbo].[fp_core_keys](
    [hashkey] [bigint] NOT NULL
) ON [PRIMARY]

Then I join the two tables to retrieve all matching rows; the query is:

select fp.hashkey, fp.note_id, fp.timeoffset
from dbo.fp_core fp
INNER JOIN dbo.fp_core_keys keys ON fp.hashkey = keys.hashkey

I have following indexes:

CREATE CLUSTERED INDEX [index_fp_core] ON [dbo].[fp_core]
(
    [hashkey] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]

CREATE UNIQUE CLUSTERED INDEX [IX_fp_core_keys] ON [dbo].[fp_core_keys]
(
    [hashkey] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]

Problem:

Retrieving data using the above query is very slow, taking around 40 seconds.

Here are the current stats:

Query:

select count(hashkey) from fp_core
go
select count(distinct(hashkey)) from fp_core

Result:

57177764
13675633

Plan:

[execution plan screenshot not included]

Can anybody help me?
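As a first diagnostic step (not a fix), SQL Server's session statistics show where the 40 seconds go, e.g. whether the time is spent on reads or on streaming millions of result rows to the client; a sketch using the query from the question:

```sql
-- Measure I/O and CPU/elapsed time for the problem query before tuning.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT fp.hashkey, fp.note_id, fp.timeoffset
FROM dbo.fp_core AS fp
INNER JOIN dbo.fp_core_keys AS keys ON fp.hashkey = keys.hashkey;
```

If most of the time is spent returning rows, it may help to aggregate matches on the server (e.g. GROUP BY note_id) instead of shipping every matching row back to the Java application.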

Choose security group for restore-db-instance-to-point-in-time

Posted: 15 Sep 2013 08:17 AM PDT

When using the AWS CLI restore-db-instance-to-point-in-time command, I can't figure out how to set the security group. The documentation says:

The target database is created from the source database with the same configuration as the original database except that the DB instance is created with the default DB security group.

Is it possible to override this parameter to use the same security group as the original instance?
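The restore call itself does not appear to accept a security group; one approach (a sketch, assuming a VPC instance; the instance names and group ID are placeholders) is to attach the original instance's groups immediately after the restore:

```shell
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mydb \
    --target-db-instance-identifier mydb-restored \
    --use-latest-restorable-time

aws rds modify-db-instance \
    --db-instance-identifier mydb-restored \
    --vpc-security-group-ids sg-0123456789abcdef0 \
    --apply-immediately
```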

How to measure the performance of a SQL Server database? [on hold]

Posted: 15 Sep 2013 07:34 AM PDT

I have the task of improving the performance of a SQL Server 2012 database (one of 4 in an instance) by 60%, confirmed by corresponding statistics.

So, I need to measure the "performance" of the database before and after performance tuning and optimization.

Which metrics are better suited for this?

Trying to answer the obvious questions ahead ...

I/O (hardware) bottlenecks are absent since the SQL Server runs on a virtual rack having plenty of physical resources under it.

The database is used by approx. 60 users (mostly 8 hours a day) with widely varying load (per sec).

This is a company management task, so the results of this task should be easy to grasp.
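One commonly used instrument for before/after comparisons is the wait-statistics DMV, which shows what the instance spends its time waiting on; a sketch (the exclusion list below is illustrative, not exhaustive):

```sql
-- Snapshot the top waits before and after tuning; the delta indicates
-- whether the bottleneck (CPU, locking, I/O, ...) actually moved.
SELECT TOP (10) wait_type,
       wait_time_ms / 1000.0 AS wait_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TO_FLUSH')
ORDER BY wait_time_ms DESC;
```

For a management-friendly headline number, pairing this with workload metrics such as batch requests/sec and average query duration (from a server-side trace or the plan cache) tends to be easier to present than raw wait times.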

How to DRY record metadata in SQL Server

Posted: 15 Sep 2013 11:52 AM PDT

There is a database that has been given to me, and all tables contain these columns:

  1. CreatorUserId
  2. CreatorUsername
  3. ModifierUserId
  4. ModifierUsername

However, this design does not seem DRY, and it's really hard to maintain because the database contains around 300 tables.

On the other hand, the modification history, creation history, and lots of other actions at the database level matter to us.

Is there any better way to record any DDL and DML operations?
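If server-side auditing is acceptable, a database-level DDL trigger can capture schema changes centrally instead of per table; a sketch with made-up object names:

```sql
-- Central log of DDL events for the whole database.
CREATE TABLE dbo.DdlLog (
    EventTime  datetime2 NOT NULL DEFAULT SYSDATETIME(),
    LoginName  sysname   NOT NULL DEFAULT ORIGINAL_LOGIN(),
    EventData  xml       NOT NULL
);
GO
CREATE TRIGGER trgDdlAudit ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO dbo.DdlLog (EventData) VALUES (EVENTDATA());
GO
```

For DML history, SQL Server's Change Tracking or Change Data Capture (2008+) can record who-changed-what without repeating Creator/Modifier columns in all 300 tables.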

How to securely connect app and database servers?

Posted: 15 Sep 2013 08:02 PM PDT

(Updated) For a start, I have my app and database on separate servers. It's easy to connect them, except I am not sure how to secure my database server.

Here's what I've already done:

  • Ran mysql_secure_installation right after installing MySQL. So, all of these have been taken care of:

    • strong 64 char root password
    • no anonymous users
    • root login only on localhost
    • no test databases
  • A non-public network for the inter-server communication (in my my.cnf, there's something like this: bind-address = 10.128.2.18, where 10.128.2.18 is the private network IP address of the MySQL database server).

  • A separate user for the database, an unguessable username and 64 char strong password to go with it; and the ip addresses of the user accounts set to the private IP addresses of the app server. I created the user with command like this:

    GRANT ALL ON `app_db`.* TO 'db_user'@'10.128.2.9' IDENTIFIED BY 'password';  

The app is WordPress, so I use GRANT ALL to avoid any unexpected issues.

Options considered (but not employed):

  • I've been told that technologies like SSH Tunnel, SSL, OpenVPN, Tinc, and IPsec are not generally used because they have a performance cost (resource usage due to encryption, latency, etc).

So, what else do I need, or is this good enough? How else do others do it? Please be as detailed as possible (a link to a tutorial or whatever you're suggesting would help a lot).

How to get SQL Server 2012 to use the invariant culture in format()?

Posted: 15 Sep 2013 08:16 AM PDT

This has now been posted to Connect: The invariant culture identifier is rejected by the FORMAT() function in SQL Server 2012.


I'm trying to get the built-in format() function in SQL Server 2012 to use the invariant culture.

It is said in the documentation that the function accepts a .NET culture identifier as the third parameter. The identifier for the invariant culture is a blank string:

You specify the invariant culture by name by using an empty string ("") in the call to a CultureInfo instantiation method.

That does not work with SQL Server however:

select format(getdate(), N'g', '');  

Msg 9818, Level 16, State 1, Line 1
The culture parameter '' provided in the function call is not supported.

It is also documented that the invariant culture is associated with the English language but not with any country/region. One would think this allows passing 'en' as the identifier, but in .NET, CultureInfo.InvariantCulture.Equals(CultureInfo.GetCultureInfo("")) yields true while CultureInfo.InvariantCulture.Equals(CultureInfo.GetCultureInfo("en")) yields false, so they aren't really the same.

So how do I make SQL Server use the invariant culture?

(Note: I'm interested in making the built-in function work. I already have my own CLR functions that do this; I was going to remove them in favor of the now-built-in functionality.)

Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)
Dec 28 2012 20:23:12
Copyright (c) Microsoft Corporation
Business Intelligence Edition (64-bit) on Windows NT 6.0 (Build 6002: Service Pack 2) (Hypervisor)
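Not a true invariant-culture answer, but two commonly used culture-independent workarounds are the ISO 8601 CONVERT style and an explicit format pattern; a sketch:

```sql
-- Style 126 is ISO 8601 and involves no culture at all:
SELECT CONVERT(varchar(33), GETDATE(), 126);

-- An explicit pattern sidesteps named formats like 'g' (note that a few
-- separator tokens can still be localized by the effective culture):
SELECT FORMAT(GETDATE(), 'yyyy-MM-dd HH:mm:ss');
```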

multiple line text values in mysqldump text export file

Posted: 15 Sep 2013 10:20 AM PDT

I'm trying to export a 100+ million record table into a text file. My plan is to split the text file into small pieces by size or line count, then import them.

One text field contains multi-line content, like blog post text. In the text export file it comes out as multiple lines, but I want one line per row so that I can process the file line by line.

I tried various fields-terminated-by, lines-terminated-by, and fields-escaped-by parameters for the export, but nothing turned the multi-line text into a single, quoted, comma-separated line.

It quotes properly when I export the data in SQL format, but I haven't succeeded in converting the newline characters in the text field to \r\n or \n, or whatever those characters are. Even when I escape them, they are still exported as real newlines inside the quotes.
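One approach (assuming mysqldump runs on the database host, which --tab requires, and with example paths/names) is the tab-delimited export path, where embedded newlines are escaped by default:

```shell
# --tab writes one .sql (schema) and one .txt (data) file per table via
# SELECT ... INTO OUTFILE; with the default escaping, newlines inside field
# values come out as literal \n, so each row stays on one physical line.
mysqldump --tab=/tmp/export --single-transaction mydb bigtable

# Split the data file into million-line chunks for staged reloads:
split -l 1000000 /tmp/export/bigtable.txt /tmp/export/bigtable.part.

# Reload a chunk later with the matching defaults:
# mysql mydb -e "LOAD DATA INFILE '/tmp/export/bigtable.part.aa' INTO TABLE bigtable"
```

Note that the server's secure_file_priv setting can restrict where INTO OUTFILE files may be written.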

Any advantage of creating table in temporary tablespace - Oracle

Posted: 15 Sep 2013 07:20 AM PDT

My on-site DBA told me to create a particular table in the temporary tablespace. Is there any advantage to doing this?

Altering the location of Oracle-Suggested Backup

Posted: 15 Sep 2013 03:21 PM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative copies=1
    for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';

If I run the script manually, the backup set is placed at the above location; when the script is run from the job scheduler, the backup set goes to the RECO disk group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?
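One way to take the scheduler's environment out of the picture is to hard-code the destination on the channel inside the job itself; a sketch of the same script with an explicit format clause:

```
run {
  allocate channel oem_disk_backup device type disk
    format '/s01/backup/PROD11/PROD11_%U';
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative copies=1
    for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}
```

If backups land in the FRA only under the scheduler, it is worth confirming that the job connects to the same instance/environment where the CONFIGURE setting was made (SHOW ALL; from within the job would confirm this).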

MYSQL Timezone support

Posted: 15 Sep 2013 05:21 PM PDT

We have a shared hosting plan, and the host says that for MySQL time zone support on a shared plan I can create the time-zone-related tables in our own database and populate them with the required data (data from our local MySQL time zone tables). How can I view the code/syntax behind MySQL's CONVERT_TZ function?

Thanks Arun
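For reference, CONVERT_TZ is a built-in native function, so there is no SQL source to view; named zones only resolve once the mysql.time_zone% tables are populated (normally via the mysql_tzinfo_to_sql utility). A sketch of both call forms:

```sql
-- Needs populated time zone tables:
SELECT CONVERT_TZ('2013-09-15 12:00:00', 'US/Eastern', 'UTC');

-- Numeric-offset form works without the time zone tables:
SELECT CONVERT_TZ('2013-09-15 12:00:00', '-04:00', '+00:00');
```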

Database Design Confusion

Posted: 15 Sep 2013 03:57 AM PDT

I am building a Phonegap-based ERP solution for schools, in which every school can be configured individually and the whole school managed from that single application. A main administrator configures the school, i.e., adds all the subjects, teachers, classes, timetables, results, etc.

The application also has roles for students, teachers, parents, and admins in which the admin can define the permissions that all the different roles have plus can add and delete the permissions as well.

The application manages the attendance, time table, results, profile of all the students and staff as well.

I can think of only two ways of doing this:

  1. Maintain a school_id column in every table I create, and specify the school_id in every query I make.
  2. Create a separate database for every school that registers in the application, so that the data for all schools is not in the same tables and no single table becomes overpopulated.

I just can't decide which design to go forward with.

Thanks in advance.

replication breaks after upgrading master

Posted: 15 Sep 2013 01:21 PM PDT

I have a replication setup with a 5.1.30 master and a 5.5.16 slave, and the replication is working well.

Now I have upgraded the MySQL master to 5.1.47.

As far as I know, we have to turn off binary logging with sql_log_bin=0 before running the mysql_upgrade program, in order to upgrade the replication setup cleanly as well.

But the problem here is that the binary log was not turned off while the mysql_upgrade program was running.

The reason, I found, is that in 5.1 sql_log_bin is a session variable, and the mysql_upgrade program runs in another session.

So how do I upgrade the server, and replication along with it, without any breakage of the replication setup?

Any suggestions are really useful.

The client needs 'SUPER' privileges on the reports DB in order to create and maintain functions

Posted: 15 Sep 2013 03:20 AM PDT

I have a problem with one of my clients.

I granted SUPER privileges to the client James on the 'Reports' database, but the client reports:

"BUT, I still need SUPER privileges to the reports DB in order to create and maintain my Functions. So I'm still unable to move forward..." (He can log in, but he can't create or maintain functions on the reports DB server.)

Even after granting SUPER privileges, the client can't create or maintain any of these functions. How do I work around this?
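Assuming the blocker is the usual one (binary logging is enabled, in which case MySQL refuses function creation by non-SUPER users by default), a sketch of the standard workaround; the grant list and account are illustrative:

```sql
-- Let non-SUPER users create stored functions while binary logging is on.
-- This relaxes a safety check, so understand the replication implications.
SET GLOBAL log_bin_trust_function_creators = 1;

-- The client then only needs routine privileges, not SUPER:
GRANT CREATE ROUTINE, ALTER ROUTINE, EXECUTE ON reports.* TO 'james'@'%';
```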

extproc env variables oracle 11g

Posted: 15 Sep 2013 09:20 AM PDT

I have oracle 11g with extproc separately configured in listener.ora.

Users report that some environment variables that should be exported are not set.

Where does extproc get its environment from, besides the ENV entry in its definition in listener.ora? Does it come from the shell that started the listener? Why do the variables included in ENV not appear?

How can I efficiently check which environment variables extproc has set?
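On Linux, one way to check (a sketch; it assumes an extproc process is currently running) is to read the process's environment directly from /proc:

```shell
# Environment of a running extproc process; entries are NUL-separated,
# so translate them to one variable per line.
pid=$(pgrep -f extproc | head -n 1)
tr '\0' '\n' < "/proc/$pid/environ"
```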

Need to suppress rowcount headers when using \G

Posted: 15 Sep 2013 02:21 PM PDT

Is there a command to suppress the row headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the -s and --skip-column-names options, but these don't suppress the row headers.
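There does not appear to be a client option for this; a sketch that filters the "*** N. row ***" header lines out of \G output with grep, simulated here with printf (in practice you would pipe the mysql client's output through the same grep):

```shell
# e.g.: mysql -s mydb -e 'SELECT 5 AS id\G' | grep -v '^\*\{10,\}'
# Simulated \G output, filtered to drop lines of 10+ leading asterisks:
printf '*************************** 1. row ***************************\nid: 5\n' \
  | grep -v '^\*\{10,\}'
```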

multivalued weak key in ER database modeling

Posted: 15 Sep 2013 04:21 PM PDT

I was wondering about this, since I didn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price and the stock quantity of each format differ, so I came up with this:

[ER diagram image not included]

Is this correct from a design perspective? Does it imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

[alternative ER diagram image not included]

Thanks in advance.

EDIT: I'm adding some more descriptive information about what I want to store at this point in the design. I want to store information about sales. For each movie the company carries, I need to store format, price, and stock quantity. I will also need to store customer information: a unique id, name, surname, address, the movies that he/she has already bought, and a credit card number. Finally, there will be a basket that temporarily holds items (let's suppose items other than movies exist as well) that the customer wants to buy.
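A sketch of how the format-dependent attributes could be normalized into a child table, which removes the multivalued attribute without redundancy (all names below are made up):

```sql
CREATE TABLE movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

-- One row per (movie, format) pair; price and quantity live here because
-- they depend on the format, not on the movie alone.
CREATE TABLE movie_format (
    movie_id INT          NOT NULL REFERENCES movie (movie_id),
    format   VARCHAR(20)  NOT NULL,  -- 'dvd', 'bluray', ...
    price    DECIMAL(6,2) NOT NULL,
    quantity INT          NOT NULL,
    PRIMARY KEY (movie_id, format)
);
```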

Payment methods conceptual and logical model

Posted: 15 Sep 2013 01:20 AM PDT

I need to create conceptual and logical (normalized) models of a parking house according to the requirements below. It looks to me like a very simple concept that doesn't need all tables to have relationships, but then they could not be modelled as entities. I tried asking this on Stack Overflow but have gotten no feedback for a couple of days now.

  1. Three possible methods of payment:

    • a ticket paid on leave,
    • prepaid card with cash credit,
    • prepaid card with "time credit",
  2. Price of ticket depends on time:

    1. 1-2hrs = $0,
    2. 3hrs = $2,
    3. 4hrs = $4,
    4. afterwards x hrs = $(x+1), but max. $20 for 24hrs (easiest to put these into 24 rows, right?).
  3. A ticket (a) may be allowed a 20% discount (e.g., for shopping in the mall).

  4. Cash credit card uses same prices as tickets but with 40% discount.
  5. Cash credit card can be reloaded.
  6. Time card is paid once and allows parking while valid.

The problem is I don't know how to put those highlighted relations into the logical DB model, or whether even to put them there. Is it OK practice to have isolated tables in the design?

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 15 Sep 2013 06:21 PM PDT

First a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past in Access I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access DB and link to this same DSN. I get as far as seeing the tables, but after making my selection, I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (System and User) with no luck. The environment is Win XP, Access 2007, Advantage DB 8.1.

MYSQL 5.5 Fail start Fedora 16

Posted: 15 Sep 2013 12:21 PM PDT

I installed mysql and mysql-server from the repos (MySQL version 5.5). Then tried to start it, but got an error.

[root@server]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
Job failed. See system logs and 'systemctl status' for details.

Here is the log:

121118  2:41:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
121118  2:41:38 [Note] Plugin 'FEDERATED' is disabled.
121118  2:41:38 InnoDB: The InnoDB memory heap is disabled
121118  2:41:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins
121118  2:41:38 InnoDB: Compressed tables use zlib 1.2.5
121118  2:41:38 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/tmp/ibhsfQfU' (Errcode: 13)
121118  2:41:38  InnoDB: Error: unable to create temporary file; errno: 13
121118  2:41:38 [ERROR] Plugin 'InnoDB' init function returned error.
121118  2:41:38 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
121118  2:41:38 [ERROR] Unknown/unsupported storage engine: InnoDB
121118  2:41:38 [ERROR] Aborting

121118  2:41:38 [Note] /usr/libexec/mysqld: Shutdown complete

121118 02:41:38 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

Fresh installation, nothing changed prior to that, just ran yum update.
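Errcode 13 is EACCES (permission denied), so the usual suspects on Fedora are /tmp permissions or an SELinux denial; a sketch of checks (it assumes MySQL's perror utility and the audit tools are installed):

```shell
perror 13        # confirms: "Permission denied"
ls -ld /tmp      # should be drwxrwxrwt, i.e. mode 1777 with the sticky bit
chmod 1777 /tmp  # restore world-writable sticky /tmp if it was changed

# Check for SELinux AVC denials against mysqld:
ausearch -m avc -ts recent 2>/dev/null | grep mysqld
```

If /tmp cannot be fixed, pointing tmpdir in my.cnf at a directory mysqld can write to is another option.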

Here is the systemctl status trace

[root@linyansho /]# systemctl status mysqld.service
mysqld.service - MySQL database server
    Loaded: loaded (/lib/systemd/system/mysqld.service; disabled)
    Active: failed since Sun, 18 Nov 2012 02:45:19 +0300; 5min ago
   Process: 864 ExecStartPost=/usr/libexec/mysqld-wait-ready $MAINPID (code=exited, status=1/FAILURE)
   Process: 863 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
   Process: 842 ExecStartPre=/usr/libexec/mysqld-prepare-db-dir %n (code=exited, status=0/SUCCESS)
    CGroup: name=systemd:/system/mysqld.service

Sql Anywhere 11: Restoring incremental backup failure

Posted: 15 Sep 2013 11:21 AM PDT

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on applying the transaction logs on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify what table it can't find but this is a controlled test and no tables are being created or dropped. I insert a few rows then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it may be related to the complexity of the target database I made a new blank database and network service. Then added one table with two columns and inserted a few rows. Made a full backup, then inserted and deleted a few more rows and committed transactions, then made an incremental backup. This also failed with the same error when attempting to apply the incremental backups of transaction logs after restoring the full backup ...

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure

What are database statistics, and how can I benefit from them?

Posted: 15 Sep 2013 06:19 AM PDT

I've heard mention of statistics that SQL Server keeps by default. What are they tracking, and how can I use this information to improve my database?
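A sketch of inspecting and refreshing those statistics (the object and index names are hypothetical):

```sql
-- What statistics objects exist on a table, and when they were last updated:
SELECT s.name, s.auto_created, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');

-- Histogram and density details for one statistics object:
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_SomeColumn');

-- Force a refresh when the data distribution has changed significantly:
UPDATE STATISTICS dbo.MyTable;
```

In short, these objects describe the distribution of column values; the optimizer uses them for cardinality estimates, so stale statistics commonly explain bad plans.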
