Wednesday, October 9, 2013

[how to] calculate days and equally distribute remaining



calculate days and equally distribute remaining

Posted: 09 Oct 2013 01:37 PM PDT

I'm having difficulty calculating remaining days and distributing hours over a date range within a given time period. For this example, the time period is 2013-11-01 through 2014-05-01 (it will always be a 6-month period).

The calendar table has a 30-year set.[1] A weekday value of 1 indicates Monday through Friday.

Setup

CREATE TABLE #calendar (
    [d] date PRIMARY KEY,
    [weekday] tinyint DEFAULT 0
);

INSERT INTO #calendar ([d], [weekday]) VALUES
('2006-01-01', 0),
...
('2013-01-01', 1),
('2013-01-02', 1),
('2013-01-03', 1),
...
('2014-12-28', 0),
('2014-12-29', 1),
('2014-12-30', 1),
('2014-12-31', 1),
...
('2035-12-31', 1);

CREATE TABLE #tasks (
    [task] int PRIMARY KEY,
    [begin] date,
    [end] date,
    [hours] decimal(6,3)
);

INSERT INTO #tasks ([task], [begin], [end], [hours]) VALUES
(1, '2013-11-05', '2013-11-05', 5.0),
(2, '2013-10-01', '2013-11-15', 2.0),
(3, '2013-11-15', '2013-12-31', 80.0),
(4, '2013-05-01', '2013-12-04', 2.0),
(5, '2013-07-01', '2013-11-27', 15),
(6, '2013-11-01', '2013-12-10', 40),
(7, '2013-07-01', '2013-08-01', 50.0),
(8, '2014-06-01', '2014-07-01', 10.0);

Part 1

Getting the total weekdays for the tasks:

DECLARE @periodStart date, @periodEnd date;

SET @periodStart = '2013-11-01';
SET @periodEnd = '2014-05-01';

SELECT a.[task], a.[begin], a.[end], a.[hours],
    (SELECT SUM([weekday])
     FROM #calendar
     WHERE [d] BETWEEN a.[begin] AND a.[end]) AS weekdays
FROM #tasks a
WHERE (a.[begin] < @periodStart AND a.[end] > @periodEnd)
    OR (a.[begin] >= @periodStart AND a.[end] < @periodEnd)
    OR (a.[begin] < @periodEnd AND a.[end] > @periodStart)

Result

task        begin      end        hours    weekdays
----------- ---------- ---------- -------  --------
1           2013-11-05 2013-11-05 5.000    1
2           2013-10-01 2013-11-15 2.000    34
3           2013-11-15 2013-12-31 80.000   33
4           2013-05-01 2013-12-04 2.000    156
5           2013-07-01 2013-11-27 15.000   108
6           2013-11-01 2013-12-10 40.000   28

Part 2

Getting the workdays for the time period:

DECLARE @periodStart date, @periodEnd date;

SET @periodStart = '2013-11-01';
SET @periodEnd = '2014-05-01';

SELECT a.[task], a.[begin], a.[end], a.[hours],
    (SELECT SUM([weekday])
     FROM #calendar
     WHERE [d] BETWEEN a.[begin] AND a.[end]) AS weekdays,
    m1.weekdays AS m1wd,
    m2.weekdays AS m2wd,
    m3.weekdays AS m3wd,
    m4.weekdays AS m4wd,
    m5.weekdays AS m5wd,
    m6.weekdays AS m6wd
FROM #tasks a
CROSS JOIN
    -- Month 1 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+1, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+2, -1)) m1
CROSS JOIN
    -- Month 2 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+2, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+3, -1)) m2
CROSS JOIN
    -- Month 3 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+3, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+4, -1)) m3
CROSS JOIN
    -- Month 4 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+4, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+5, -1)) m4
CROSS JOIN
    -- Month 5 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+5, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+6, -1)) m5
CROSS JOIN
    -- Month 6 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+6, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+7, -1)) m6
WHERE (a.[begin] < @periodStart AND a.[end] > @periodEnd)
    OR (a.[begin] >= @periodStart AND a.[end] < @periodEnd)
    OR (a.[begin] < @periodEnd AND a.[end] > @periodStart)

Result

task   begin      end        hours   weekdays    m1wd  m2wd  m3wd  m4wd  m5wd  m6wd
------ ---------- ---------- ------- ----------- ----- ----- ----- ----- ----- -----
1      2013-11-05 2013-11-05 5.000   1           22.0  23.0  20.0  21.0  22.0  22.0
2      2013-10-01 2013-11-15 2.000   34          22.0  23.0  20.0  21.0  22.0  22.0
3      2013-11-15 2013-12-31 80.000  33          22.0  23.0  20.0  21.0  22.0  22.0
4      2013-05-01 2013-12-04 2.000   156         22.0  23.0  20.0  21.0  22.0  22.0
5      2013-07-01 2013-11-27 15.000  108         22.0  23.0  20.0  21.0  22.0  22.0
6      2013-11-01 2013-12-10 40.000  28          22.0  23.0  20.0  21.0  22.0  22.0

Problem

Since each task can start/end at different times, how do I calculate and distribute the hours over the remaining days in the months? In part 2, would it be easier to just return a table between the period dates with the workdays and then pivot it?

Desired Result

task   begin      end        hours   weekdays    m1h       m2h       ...   m6h
------ ---------- ---------- ------- ----------- --------  --------  ----- ---
1      2013-11-05 2013-11-05 5.000   1           5
2      2013-10-01 2013-11-15 2.000   34          0.647059
3      2013-11-15 2013-12-31 80.000  33          26.66667  53.33333
4      2013-05-01 2013-12-04 2.000   156         0.269231  0.038462
5      2013-07-01 2013-11-27 15.000  108         2.638889
6      2013-11-01 2013-12-10 40.000  28          30        10

[1] Generate a set or sequence without loops
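A minimal sketch of one way to compute the desired distribution, assuming the #calendar and #tasks tables from the setup; the months derived table, the clipping expressions, and the column names are illustrative, not the asker's code. Per task and per period month it counts the weekdays that fall both inside the task range and inside that month, then allocates hours in proportion to that overlap over the task's total weekdays.

-- Sketch only: distribute each task's hours over the weekdays that fall both
-- inside the task's date range and inside each month of the reporting period.
DECLARE @periodStart date, @periodEnd date;
SET @periodStart = '2013-11-01';
SET @periodEnd   = '2014-05-01';

;WITH months AS (               -- the six months of the period
    SELECT n,
           DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart) + n, 0)      AS monthStart,
           DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart) + n + 1, -1) AS monthEnd
    FROM (VALUES (0),(1),(2),(3),(4),(5)) v(n)
)
SELECT t.task, m.n + 1 AS periodMonth,
       t.hours
       * SUM(c.[weekday]) * 1.0
       / NULLIF((SELECT SUM(c2.[weekday]) FROM #calendar c2
                 WHERE c2.d BETWEEN t.[begin] AND t.[end]), 0) AS monthHours
FROM #tasks t
JOIN months m
  ON t.[begin] <= m.monthEnd AND t.[end] >= m.monthStart
JOIN #calendar c
  ON c.d BETWEEN CASE WHEN t.[begin] > m.monthStart THEN t.[begin] ELSE m.monthStart END
             AND CASE WHEN t.[end]   < m.monthEnd   THEN t.[end]   ELSE m.monthEnd   END
GROUP BY t.task, m.n, t.hours, t.[begin], t.[end];

Pivoting the per-month rows into m1h...m6h columns would then be a separate step, for example with conditional aggregation, which also answers the "return a table and then pivot it" idea in the Problem paragraph.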

The specified @job_id does not exist

Posted: 09 Oct 2013 12:52 PM PDT

After deleting a SQL Server Agent job I continue to get notifications that the job attempted to run and failed. I am receiving the following error message.

[000] Unable to retrieve steps for job my_job
[298] SQLServer Error: 14262, The specified @job_id ('CC65FEFB-0033-442E-B55E-6209E25C5039') does not exist. [SQLSTATE 42000]

I have checked the following tables in the msdb database and none of them have the specified job_id except the dbo.sysjobhistory table.

  • dbo.sysjobs
  • dbo.sysjobschedules
  • dbo.sysjobsteps
  • dbo.sysjobstepslogs
  • dbo.sysjobservers
  • dbo.sysjobactivity
  • dbo.sysjobhistory

Is there anywhere else I can look?
Also, today I have stopped and restarted the SQL Server Agent and will update this post tomorrow with the result.

Thank you in advance!
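A hedged sketch of where else to look, assuming the stale reference is either leftover job history or an alert (or maintenance-plan subplan) that still points at the deleted job; the GUID below is the one from the error message:

-- Sketch: hunt for the orphaned job id elsewhere in msdb.
DECLARE @job_id uniqueidentifier;
SET @job_id = 'CC65FEFB-0033-442E-B55E-6209E25C5039';

SELECT 'sysjobhistory' AS source, COUNT(*) AS rows_found
FROM msdb.dbo.sysjobhistory WHERE job_id = @job_id
UNION ALL
SELECT 'sysalerts', COUNT(*)
FROM msdb.dbo.sysalerts WHERE job_id = @job_id;

-- If only history rows remain, purging them removes the stale reference:
EXEC msdb.dbo.sp_purge_jobhistory @job_id = @job_id;

Restarting the Agent afterwards (as already planned above) clears any job definition it may still be caching in memory.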

mysql is locked; can't kill query

Posted: 09 Oct 2013 08:52 PM PDT

I have a mediawiki running on my local machine and just started running mysqldump last night:

$ mysqldump -h localhost -u wikiuser --default-character-set=UTF8 wikidb --add-locks --skip-lock-table > ~/Dropbox/admin/wikimedia_backup/backup.sql

But now mediawiki protests

Warning: The database has been locked for maintenance, so you will not be able to save your edits right now. You may wish to cut-n-paste the text into a text file and save it for later. The administrator who locked it offered this explanation: Dumping Database, Access will be restored shortly

This sent me looking for locks in the wikidb database. I ran

UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)

but the problem persists.

My mysqldump statement runs with the --skip-lock-table option (I think I should add --add-locks, or replace the option with it?).

I reran the command with both options.

I also tried to kill the process outright:

mysql> SHOW FULL PROCESSLIST \G;
*************************** 1. row ***************************
     Id: 77
   User: wikiuser
   Host: localhost
     db: wikidb
Command: Query
   Time: 0
  State: NULL
   Info: SHOW FULL PROCESSLIST
1 row in set (0.00 sec)

ERROR:
No query specified

mysql> kill query 77;
ERROR 1317 (70100): Query execution was interrupted

So, two questions:

  1. how do I lift the existing lock;
  2. what options should I use in the mysqldump call to avoid this problem in the future? (see the sketch below)
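One hedged note before the command sketch: the exact warning quoted above ("The administrator who locked it offered this explanation ...") typically comes from MediaWiki's own $wgReadOnly setting in LocalSettings.php rather than from a MySQL table lock, so checking that file is worth doing first. For the dump itself, a sketch that avoids holding table locks; --single-transaction only gives a consistent snapshot for InnoDB tables, so treat that as an assumption about the wiki's storage engine:

# Sketch: dump without table locks; --single-transaction assumes InnoDB tables.
mysqldump -h localhost -u wikiuser --default-character-set=UTF8 \
    --single-transaction --skip-lock-tables --quick \
    wikidb > ~/Dropbox/admin/wikimedia_backup/backup.sql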

Multi-master and multi slave

Posted: 09 Oct 2013 01:05 PM PDT

I have been having some issues creating a redundant multi master setup. Here's the proposed layout:

[diagram: proposed multi-master / multi-slave replication layout]

The problem I'm having is when something is written to master1 it replicates to slave1 and master2 but not slave2. And when something is written to master2 it replicates to slave2 and master1 but not to slave 1.

Is there any way I can get this to work where both slaves are updated no matter which master is written to?
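One thing that commonly causes exactly this behaviour is log_slave_updates being off on the masters: events a master receives from the other master are applied, but not written to its own binary log, so its downstream slave never sees them. A minimal, hedged my.cnf sketch for both masters (restart required; server-id values are placeholders):

# my.cnf on master1 (master2 analogous) -- sketch, values are placeholders
[mysqld]
server-id         = 1
log-bin           = mysql-bin
log-slave-updates = 1     # write replicated events to this master's binlog
                          # so its own slave receives them

After enabling it, the slaves should be re-synced or at least checked, since events that arrived while the option was off were never logged.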

Is there a way to determine automatic storage table spaces?

Posted: 09 Oct 2013 12:23 PM PDT

First off, we are on both Windows and AIX and are using DB2 ESE 9.7 FP 4.

I am trying to determine dynamically whether or not table spaces are set up via automatic storage or not (so we can dynamically script how to lower the high water mark when necessary). I can see whether they are defined SMS or DMS "under the covers" via statements like db2 list tablespaces or db2 list tablespaces show detail.

I have also issued queries against the system catalog tables and views (i.e., select * from syscat.tablespaces and select * from sysibm.systablespaces).

All I can tell is that the tablespace is defined as SMS or DMS. How can I tell whether the tablespace is configured for automatic storage? I know there must be a way to do it dynamically, because if I use IBM Data Studio, visit the tablespaces "view", and click on properties, I can see whether or not the table space uses automatic storage. I am assuming there must be a configuration setting or a query behind that.

Just wondering, how do we figure this out?
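A hedged starting point, assuming DB2 9.7: the snapshot monitoring views expose a flag for automatic storage. If memory serves, SYSIBMADM.SNAPTBSP (and MON_GET_TABLESPACE) carry a TBSP_USING_AUTO_STORAGE column; treat the view and column names as assumptions to verify against your own catalog.

-- Sketch: list tablespaces with their automatic-storage flag (DB2 9.7).
-- TBSP_USING_AUTO_STORAGE = 1 is assumed to mean "managed by automatic storage".
SELECT TBSP_NAME,
       TBSP_TYPE,                 -- SMS or DMS "under the covers"
       TBSP_USING_AUTO_STORAGE    -- 1 = automatic storage, 0 = not
FROM   SYSIBMADM.SNAPTBSP
ORDER BY TBSP_NAME;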

How to move partitions between filesystems

Posted: 09 Oct 2013 11:13 AM PDT

We have a ~9TB MySQL database which consists of a few tables that are partitioned by day. The daily volume is ~20GB. The database is practically write-only; there are at most tens of queries against the data per day.

What I would like to achieve is to use fast disks for current day partitions while there is data load activity. And somehow move those partitions to slow and cheap disks when day changes and new partition is started.

How can I do that with minimal or no downtime? Data load activity is 24/7 but can be delayed for a few minutes.
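A hedged sketch of one approach, assuming MySQL 5.6+ with innodb_file_per_table and RANGE partitioning by day (table, partition, and path names below are placeholders): create each new day's partition with a DATA DIRECTORY on the fast volume, and once the day has rolled over, rebuild just that partition with a DATA DIRECTORY on the slow volume. The rebuild copies the partition's data, so how long it blocks the table should be tested before relying on it.

-- Sketch only; names and paths are placeholders.
-- New day's partition lands on the fast disks:
ALTER TABLE metrics ADD PARTITION (
    PARTITION p20131010 VALUES LESS THAN (TO_DAYS('2013-10-11'))
    DATA DIRECTORY = '/fast_disks/mysql'
);

-- After the day has rolled over, rebuild yesterday's partition on the slow disks:
ALTER TABLE metrics REORGANIZE PARTITION p20131009 INTO (
    PARTITION p20131009 VALUES LESS THAN (TO_DAYS('2013-10-10'))
    DATA DIRECTORY = '/slow_disks/mysql'
);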

Is it possible to optimize a range query so an explain statement would state "using index" instead of "using index condition"?

Posted: 09 Oct 2013 11:13 AM PDT

I have a very large table where specific queries made by my application are timing out. The table looks something like this:

LargeTable
------------
Id [BIGINT(20)]
UtcTimestamp [BIGINT(20)]
Pivot [BIGINT(20)]
Details [Text]

I have an index over 1) Pivot 2) UtcTimestamp.

Now, the query I'm running looks something like this:

SELECT Id, Pivot, UtcTimestamp, Details
FROM LargeTable
WHERE UtcTimestamp >= 1380067200
  AND UtcTimestamp <= 1380153599
  AND Pivot = 2;

Since there is an index over the pivot and timestamp, this query should be blazing fast, but it doesn't appear to be. I ran an explain on this query expecting "using index", but instead I get "using index condition". Here is the explain:

+----+-------------+-------------+-------+------------------------+------------------------+---------+------+------+-----------------------+
| id | select_type | table       | type  | possible_keys          | key                    | key_len | ref  | rows | Extra                 |
+----+-------------+-------------+-------+------------------------+------------------------+---------+------+------+-----------------------+
|  1 | SIMPLE      | LargeTable  | range | pivot_and_timestamp_id | pivot_and_timestamp_id | 18      | NULL |  963 | Using index condition |
+----+-------------+-------------+-------+------------------------+------------------------+---------+------+------+-----------------------+

It's definitely selecting the right index, but why is this query not as fast as it should be?

The MySQL server version is 5.6.14.
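For context on the two Extra values: "Using index" means the whole query is satisfied from the index (a covering index), while "Using index condition" means index condition pushdown is filtering rows in the index before the table lookup. Because this query selects Details, a TEXT column that cannot be stored in the (Pivot, UtcTimestamp) index, a covering plan is not possible for this exact SELECT. A hedged sketch of a variant that could show "Using index" (the column list is narrowed purely as an illustration, not as a drop-in replacement):

-- Sketch: only indexed columns selected, so the (Pivot, UtcTimestamp) index can cover it.
EXPLAIN
SELECT Pivot, UtcTimestamp
FROM LargeTable
WHERE Pivot = 2
  AND UtcTimestamp BETWEEN 1380067200 AND 1380153599;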

Issue In INSERT INTO SELECT Statement [duplicate]

Posted: 09 Oct 2013 10:31 AM PDT

This question already has an answer here:

Can anyone please advise how to rectify the code below:

INSERT INTO table1 values(10, SELECT col1 FROM table2, null, null);

-- Here table1 has only 4 columns (the actual table has 100 columns, and I don't want to list each column and then enter a value for it)

Is it possible to put a SELECT statement inside VALUES during an insert?
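A hedged sketch of the usual workaround: drop the VALUES clause and let the SELECT supply the literals alongside the column it reads, so the column list still doesn't have to be spelled out (this assumes table2 returns exactly the rows to insert):

-- Sketch: constants and NULLs ride along in the SELECT list instead of VALUES.
INSERT INTO table1
SELECT 10, col1, NULL, NULL
FROM table2;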

Many to many link tables with history?

Posted: 09 Oct 2013 09:55 AM PDT

I'm new here.

While creating a PostgreSQL database of customer subscribed services with DNS domains, I want to:

  • enable customers to subscribe to multiple services
  • each service can have multiple domain names per customer
  • a domain name must be unique per customer per timestamp

Example customers:

1 | Sun Microsystems (USA) Inc.
2 | Sun Microsystems (UK) Ltd.

Example domains:

1 | sun.com
2 | sun.co.nz
3 | sun.co.uk

Suppose Sun USA subscribes with the .com and .nz domains in January, then decides in April to have the offshore UK office manage these services instead, along with the .uk domain. The customer ID therefore changes at a timestamp, for billing in the other currency.

A domain name must be live (have no deleted_at timestamp) for only 1 customer per time. A customer may delete a domain & re-add it later, or another customer may add it later.

For billing, I need to know which customer had what domains active on what dates.

How do I manage the link between domains & subscriptions, and maintain a history?

CREATE TABLE subscription
(
  id serial NOT NULL,
  customer integer NOT NULL,
  service integer NOT NULL,
  created timestamp with time zone NOT NULL DEFAULT now(),
  suspended timestamp with time zone,
  ceased timestamp with time zone,
  CONSTRAINT subscription_pk PRIMARY KEY (id),
  CONSTRAINT subscription_customer_fk FOREIGN KEY (customer)
      REFERENCES customer (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT subscription_service_fk FOREIGN KEY (service)
      REFERENCES service (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT
);

CREATE TABLE dns_domain
(
  id serial NOT NULL,
  name character varying(256) NOT NULL,
  added_at timestamp with time zone NOT NULL DEFAULT now(),
  CONSTRAINT dns_domain_pk PRIMARY KEY (id),
  CONSTRAINT dns_domain_uk UNIQUE (name)
);

This simple link table works, but without maintaining any history:

CREATE TABLE subscribed_dns_domain
(
  subscription integer NOT NULL,
  dns_domain integer NOT NULL,
  CONSTRAINT subscribed_dns_domain_pk PRIMARY KEY (subscription, dns_domain),
  CONSTRAINT subscribed_dns_domain_dns_domain_fk FOREIGN KEY (dns_domain)
      REFERENCES dns_domain (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT subscribed_dns_domain_subscription_fk FOREIGN KEY (subscription)
      REFERENCES subscription (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT
);

If I want to allow multiple instances of the same subscribed domain on different dates (only 1 having a NULL deleted_at timestamp), how do I do that? This doesn't work:

DROP TABLE subscribed_dns_domain;

CREATE TABLE subscribed_dns_domain
(
  subscription integer NOT NULL,
  dns_domain integer NOT NULL,
  added_at timestamp with time zone NOT NULL DEFAULT now(),
  deleted_at timestamp with time zone,
  customer_add_reference character varying(40),
  customer_delete_reference character varying(40),
  CONSTRAINT subscribed_dns_domain_pk PRIMARY KEY (subscription, dns_domain, added_at),
  CONSTRAINT subscribed_dns_domain_dns_domain_fk FOREIGN KEY (dns_domain)
      REFERENCES dns_domain (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT subscribed_dns_domain_subscription_fk FOREIGN KEY (subscription)
      REFERENCES subscription (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT
);

Should I revert to the first subscribed_dns_domain table, and also have a subscribed_dns_domain_history table, with timestamps populated by triggers on the simple subscribed_dns_domain table? Not sure how to populate customer reference/ticket numbers in there too...

How about partial unique indexes or EXCLUDE USING gist () on the above table?

Note: The preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT. The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly.

From http://www.postgresql.org/docs/9.2/static/indexes-unique.html

Thoughts/tutorial websites?
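A hedged sketch of the partial-unique-index idea mentioned above, assuming the second subscribed_dns_domain definition (with added_at/deleted_at): a partial unique index can enforce "at most one live row per domain" without constraining historical rows. It is a sketch rather than a full design, and it is exactly the case the quoted docs caveat allows for, since ALTER TABLE ... ADD CONSTRAINT cannot express a partial (filtered) uniqueness rule.

-- Sketch: at most one "live" (not yet deleted) row per domain; history rows unconstrained.
CREATE UNIQUE INDEX subscribed_dns_domain_live_uk
    ON subscribed_dns_domain (dns_domain)
    WHERE deleted_at IS NULL;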

Oracle 11g Enterprise downgrade to Standard One

Posted: 09 Oct 2013 06:05 AM PDT

I'm checking the possibility of downgrading Oracle 11g from Enterprise Edition to Standard Edition One. I've cross-checked the edition comparison with

select name from dba_feature_usage_statistics where currently_used='TRUE'  

Query result from dba_feature_usage_statistics:

Job Scheduler
Oracle Java Virtual Machine (user)
Oracle Java Virtual Machine (system)
Automatic Memory Tuning
Oracle Utility Metadata API
Segment Advisor
Automatic SQL Tuning Advisor
Object
Materialized Views (User)
Audit Options
Automatic Maintenance - Optimizer Statistics Gathering
Automatic Maintenance - Space Advisor
Automatic Maintenance - SQL Tuning Advisor
Automatic Segment Space Management (system)
Automatic Segment Space Management (user)
Automatic Undo Management
Client Identifier
Character Set
Locally Managed Tablespaces (system)
Locally Managed Tablespaces (user)
Parallel SQL Query Execution
Partitioning (system)
Recovery Area
Recovery Manager (RMAN)
RMAN - Disk Backup
Backup BASIC Compression
Logfile Multiplexing
Server Parameter File
Shared Server
Streams (user)
LOB
SecureFiles (system)

I think Parallel SQL Query Execution and Partitioning (system) are only in Enterprise Edition, but I'm not sure about the rest, especially:

Automatic Memory Tuning
Segment Advisor
Automatic SQL Tuning Advisor
Automatic Maintenance - SQL Tuning Advisor
Recovery Area

Do you guys know which edition of Oracle is needed to use these features?

Get unmatched rows for foreign key

Posted: 09 Oct 2013 01:31 PM PDT

Consider two tables:

CREATE TABLE Users(
  id INT PRIMARY KEY,
  name VARCHAR(128)
) Engine=InnoDB;

CREATE TABLE Tags(
  id INT PRIMARY KEY,
  user INT REFERENCES Users(id),
  tag VARCHAR(128)
) Engine=InnoDB;

I would like to know which Users do not have any Tags assigned to them. I could do a LEFT JOIN and then add a WHERE clause for no Tags, but as there are tens of thousands of Users and each could have a few tens of Tags, I think that would be impractical. Is there a better way?

This is running on MySQL 5.5 and 5.6 on Ubuntu Server 12.04 running in Amazon EC2.
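A hedged sketch of the usual anti-join forms; with an index on Tags(user) (assumed here), either one is generally fine even at tens of thousands of users:

-- Sketch: users with no tags, as an anti-join.
SELECT u.id, u.name
FROM Users AS u
WHERE NOT EXISTS (SELECT 1 FROM Tags AS t WHERE t.user = u.id);

-- Equivalent LEFT JOIN form:
SELECT u.id, u.name
FROM Users AS u
LEFT JOIN Tags AS t ON t.user = u.id
WHERE t.id IS NULL;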

Can't see SQL Server Agent in SSMS [duplicate]

Posted: 09 Oct 2013 06:29 AM PDT

This question already has an answer here:

I have SSMS 2012 installed on my laptop and SQL Server 2012 on my hosting, but when I try to schedule a SQL job I can't find "SQL Server Agent" in the list on the left side.

How to store schema-less data efficiently in a relational database?

Posted: 09 Oct 2013 02:07 PM PDT

I'm trying to get a nice balance of searchability and speed. I'm experimenting with different methods.

Option 1: Serialize the array as JSON/PHP serialized array and store it in a 'meta' column.

id  |  name   | meta
1      Bob       {"city":"GoTown","birthdate":"1980\/8\/14"}
2      Alice     {"city":"Streamville","birthdate":"1986\/6\/6"}

Option 2: Store keys and values together in a stack.

user_id  |    key    |   value
1           name        Bob
1           city        GoTown
1           birthdate   1980/8/14
2           name        Alice
2           city        Streamville
2           birthdate   1986/6/6

Option 3: 3 tables: entites, keys, values. Store each key name only once.

user_id  |   name
1            Bob
2            Alice

key_id   |   keyname
1            city
2            birthdate

user_id  |   key_id   |   value
1            1           GoTown
1            2           1980/8/14
2            1           Streamville
2            2           1986/6/6

Are there any pitfalls with using any of these strategies? I'd like to eliminate some of them if they have insurmountable disadvantages.
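For a concrete feel of the trade-off, here is a hedged sketch of what a simple lookup ("users whose city is GoTown") looks like under option 3 versus option 1; the table and column names (users, user_values, meta_keys) follow the examples above but are assumptions, since the original doesn't name the tables. The option-1 form has to pattern-match inside the serialized blob, which generally cannot use an index.

-- Option 3: the value is a plain column, so the filter can use an index.
SELECT u.user_id, u.name
FROM   users u
JOIN   user_values v ON v.user_id = u.user_id
JOIN   meta_keys k   ON k.key_id  = v.key_id
WHERE  k.keyname = 'city'
  AND  v.value   = 'GoTown';

-- Option 1: the filter must pattern-match inside the serialized blob.
SELECT id, name
FROM   users
WHERE  meta LIKE '%"city":"GoTown"%';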

How to restrict users to select only query through application textbox field?

Posted: 09 Oct 2013 06:40 AM PDT

Say that I have a web application that lets users to type in a SQL query (SQL Server 2008+). I want users to be able to run any SELECT query and read whatever is in the database, but I do not want them to be able to insert/update/drop/truncate/exec etc. (in short: only select). Please note that there are no data or even database metadata that are considered confidential in the entire database.

The rules currently in place:

  1. No ";" character. If it is there, it must be exactly one and at the end of the trimmed query string.
  2. Query must begin with select (case ignored, white-spaces are part of regex).
  3. All newlines are removed from the query.

Is this enough? Is there any way to do anything other than just SELECT?

EDIT: Thank you Thomas Stringer for the note about SELECT ... INSERT. This is exactly the information I was after. I will remove all the newlines from the query string, which should help solve this problem. Is there any other way to get around this and be able to modify the data?

NOTE: CTEs are not relevant for this case. Let's assume the query has to begin with SELECT.
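Beyond string filtering, the more robust guard is usually enforced in the database itself: run the textbox queries under a dedicated login that only holds SELECT permission, so even a statement that slips past the regex cannot modify anything. A hedged T-SQL sketch; the names are placeholders:

-- Sketch: a read-only principal for the textbox feature; names are placeholders.
CREATE LOGIN textbox_reader WITH PASSWORD = 'use-a-strong-password-here';
CREATE USER textbox_reader FOR LOGIN textbox_reader;

-- Allow reading every table in the database, and nothing else:
EXEC sp_addrolemember 'db_datareader', 'textbox_reader';
DENY INSERT, UPDATE, DELETE, EXECUTE TO textbox_reader;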

TPS too high in PostgreSQL

Posted: 09 Oct 2013 06:56 AM PDT

I created a table:

CREATE TABLE foo (
    id integer primary key,
    bar integer NOT NULL
);

and inserted one row:

INSERT INTO foo (id, bar) VALUES (1, 1);  

and ran a pgbench of the following simple update statement.

script.txt:

update foo set bar=bar+1 where id=1  

pgbench result:

C:\my-temp>"c:\Program Files\PostgreSQL\9.3\bin\pgbench.exe" -U postgres -c 1 -t 10000 -n -f script.txt testdb
Password:
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 10000
number of transactions actually processed: 10000/10000
tps = 2052.384567 (including connections establishing)
tps = 2058.699883 (excluding connections establishing)

Given that it's serially executing transactions, this tps result seems to be too high for my system:

OS: Windows 8
CPU: Intel Core i3-2100 3.10GHz
RAM: 4GB
Disk: SATA II 7200 rpm
PostgreSQL: 9.3

I haven't changed the default settings, so fsync and synchronous_commit options should be both on.

Here is my postgresql.conf:

# -----------------------------  # PostgreSQL configuration file  # -----------------------------  #  # This file consists of lines of the form:  #  #   name = value  #  # (The "=" is optional.)  Whitespace may be used.  Comments are introduced with  # "#" anywhere on a line.  The complete list of parameter names and allowed  # values can be found in the PostgreSQL documentation.  #  # The commented-out settings shown in this file represent the default values.  # Re-commenting a setting is NOT sufficient to revert it to the default value;  # you need to reload the server.  #  # This file is read on server startup and when the server receives a SIGHUP  # signal.  If you edit the file on a running system, you have to SIGHUP the  # server for the changes to take effect, or use "pg_ctl reload".  Some  # parameters, which are marked below, require a server shutdown and restart to  # take effect.  #  # Any parameter can also be given as a command-line option to the server, e.g.,  # "postgres -c log_connections=on".  Some parameters can be changed at run time  # with the "SET" SQL command.  #  # Memory units:  kB = kilobytes        Time units:  ms  = milliseconds  #                MB = megabytes                     s   = seconds  #                GB = gigabytes                     min = minutes  #                                                   h   = hours  #                                                   d   = days      #------------------------------------------------------------------------------  # FILE LOCATIONS  #------------------------------------------------------------------------------    # The default values of these variables are driven from the -D command-line  # option or PGDATA environment variable, represented here as ConfigDir.    #data_directory = 'ConfigDir'       # use data in another directory                      # (change requires restart)  #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file                      # (change requires restart)  #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file                      # (change requires restart)    # If external_pid_file is not explicitly set, no extra PID file is written.  #external_pid_file = ''         # write an extra PID file                      # (change requires restart)      #------------------------------------------------------------------------------  # CONNECTIONS AND AUTHENTICATION  #------------------------------------------------------------------------------    # - Connection Settings -    listen_addresses = '*'      # what IP address(es) to listen on;                      # comma-separated list of addresses;                      # defaults to 'localhost'; use '*' for all                      # (change requires restart)  port = 5432             # (change requires restart)  max_connections = 100           # (change requires restart)  # Note:  Increasing max_connections costs ~400 bytes of shared memory per  # connection slot, plus lock space (see max_locks_per_transaction).  
#superuser_reserved_connections = 3 # (change requires restart)  #unix_socket_directories = ''   # comma-separated list of directories                      # (change requires restart)  #unix_socket_group = ''         # (change requires restart)  #unix_socket_permissions = 0777     # begin with 0 to use octal notation                      # (change requires restart)  #bonjour = off              # advertise server via Bonjour                      # (change requires restart)  #bonjour_name = ''          # defaults to the computer name                      # (change requires restart)    # - Security and Authentication -    #authentication_timeout = 1min      # 1s-600s  #ssl = off              # (change requires restart)  #ssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH'   # allowed SSL ciphers                      # (change requires restart)  #ssl_renegotiation_limit = 512MB    # amount of data between renegotiations  #ssl_cert_file = 'server.crt'       # (change requires restart)  #ssl_key_file = 'server.key'        # (change requires restart)  #ssl_ca_file = ''           # (change requires restart)  #ssl_crl_file = ''          # (change requires restart)  #password_encryption = on  #db_user_namespace = off    # Kerberos and GSSAPI  #krb_server_keyfile = ''  #krb_srvname = 'postgres'       # (Kerberos only)  #krb_caseins_users = off    # - TCP Keepalives -  # see "man 7 tcp" for details    #tcp_keepalives_idle = 0        # TCP_KEEPIDLE, in seconds;                      # 0 selects the system default  #tcp_keepalives_interval = 0        # TCP_KEEPINTVL, in seconds;                      # 0 selects the system default  #tcp_keepalives_count = 0       # TCP_KEEPCNT;                      # 0 selects the system default      #------------------------------------------------------------------------------  # RESOURCE USAGE (except WAL)  #------------------------------------------------------------------------------    # - Memory -    shared_buffers = 128MB          # min 128kB                      # (change requires restart)  #temp_buffers = 8MB         # min 800kB  #max_prepared_transactions = 0      # zero disables the feature                      # (change requires restart)  # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory  # per transaction slot, plus lock space (see max_locks_per_transaction).  # It is not advisable to set max_prepared_transactions nonzero unless you  # actively intend to use prepared transactions.  
#work_mem = 1MB             # min 64kB  #maintenance_work_mem = 16MB        # min 1MB  #max_stack_depth = 2MB          # min 100kB    # - Disk -    #temp_file_limit = -1           # limits per-session temp file space                      # in kB, or -1 for no limit    # - Kernel Resource Usage -    #max_files_per_process = 1000       # min 25                      # (change requires restart)  #shared_preload_libraries = ''      # (change requires restart)    # - Cost-Based Vacuum Delay -    #vacuum_cost_delay = 0          # 0-100 milliseconds  #vacuum_cost_page_hit = 1       # 0-10000 credits  #vacuum_cost_page_miss = 10     # 0-10000 credits  #vacuum_cost_page_dirty = 20        # 0-10000 credits  #vacuum_cost_limit = 200        # 1-10000 credits    # - Background Writer -    #bgwriter_delay = 200ms         # 10-10000ms between rounds  #bgwriter_lru_maxpages = 100        # 0-1000 max buffers written/round  #bgwriter_lru_multiplier = 2.0      # 0-10.0 multipler on buffers scanned/round    # - Asynchronous Behavior -    #effective_io_concurrency = 1       # 1-1000; 0 disables prefetching      #------------------------------------------------------------------------------  # WRITE AHEAD LOG  #------------------------------------------------------------------------------    # - Settings -    #wal_level = minimal            # minimal, archive, or hot_standby                      # (change requires restart)  #fsync = on             # turns forced synchronization on or off  #synchronous_commit = on        # synchronization level;                      # off, local, remote_write, or on  #wal_sync_method = fsync        # the default is the first option                      # supported by the operating system:                      #   open_datasync                      #   fdatasync (default on Linux)                      #   fsync                      #   fsync_writethrough                      #   open_sync  #full_page_writes = on          # recover from partial page writes  #wal_buffers = -1           # min 32kB, -1 sets based on shared_buffers                      # (change requires restart)  #wal_writer_delay = 200ms       # 1-10000 milliseconds    #commit_delay = 0           # range 0-100000, in microseconds  #commit_siblings = 5            # range 1-1000    # - Checkpoints -    #checkpoint_segments = 3        # in logfile segments, min 1, 16MB each  #checkpoint_timeout = 5min      # range 30s-1h  #checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0  #checkpoint_warning = 30s       # 0 disables    # - Archiving -    #archive_mode = off     # allows archiving to be done                  # (change requires restart)  #archive_command = ''       # command to use to archive a logfile segment                  # placeholders: %p = path of file to archive                  #               %f = file name only                  # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  #archive_timeout = 0        # force a logfile segment switch after this                  # number of seconds; 0 disables      #------------------------------------------------------------------------------  # REPLICATION  #------------------------------------------------------------------------------    # - Sending Server(s) -    # Set these on the master and on any standby that will send replication data.    
#max_wal_senders = 0        # max number of walsender processes                  # (change requires restart)  #wal_keep_segments = 0      # in logfile segments, 16MB each; 0 disables  #wal_sender_timeout = 60s   # in milliseconds; 0 disables    # - Master Server -    # These settings are ignored on a standby server.    #synchronous_standby_names = '' # standby servers that provide sync rep                  # comma-separated list of application_name                  # from standby(s); '*' = all  #vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed    # - Standby Servers -    # These settings are ignored on a master server.    #hot_standby = off          # "on" allows queries during recovery                      # (change requires restart)  #max_standby_archive_delay = 30s    # max delay before canceling queries                      # when reading WAL from archive;                      # -1 allows indefinite delay  #max_standby_streaming_delay = 30s  # max delay before canceling queries                      # when reading streaming WAL;                      # -1 allows indefinite delay  #wal_receiver_status_interval = 10s # send replies at least this often                      # 0 disables  #hot_standby_feedback = off     # send info from standby to prevent                      # query conflicts  #wal_receiver_timeout = 60s     # time that receiver waits for                      # communication from master                      # in milliseconds; 0 disables      #------------------------------------------------------------------------------  # QUERY TUNING  #------------------------------------------------------------------------------    # - Planner Method Configuration -    #enable_bitmapscan = on  #enable_hashagg = on  #enable_hashjoin = on  #enable_indexscan = on  #enable_indexonlyscan = on  #enable_material = on  #enable_mergejoin = on  #enable_nestloop = on  #enable_seqscan = on  #enable_sort = on  #enable_tidscan = on    # - Planner Cost Constants -    #seq_page_cost = 1.0            # measured on an arbitrary scale  #random_page_cost = 4.0         # same scale as above  #cpu_tuple_cost = 0.01          # same scale as above  #cpu_index_tuple_cost = 0.005       # same scale as above  #cpu_operator_cost = 0.0025     # same scale as above  #effective_cache_size = 128MB    # - Genetic Query Optimizer -    #geqo = on  #geqo_threshold = 12  #geqo_effort = 5            # range 1-10  #geqo_pool_size = 0         # selects default based on effort  #geqo_generations = 0           # selects default based on effort  #geqo_selection_bias = 2.0      # range 1.5-2.0  #geqo_seed = 0.0            # range 0.0-1.0    # - Other Planner Options -    #default_statistics_target = 100    # range 1-10000  #constraint_exclusion = partition   # on, off, or partition  #cursor_tuple_fraction = 0.1        # range 0.0-1.0  #from_collapse_limit = 8  #join_collapse_limit = 8        # 1 disables collapsing of explicit                      # JOIN clauses      #------------------------------------------------------------------------------  # ERROR REPORTING AND LOGGING  #------------------------------------------------------------------------------    # - Where to Log -    log_destination = 'stderr'      # Valid values are combinations of                      # stderr, csvlog, syslog, and eventlog,                      # depending on platform.  csvlog                      # requires logging_collector to be on.    
# This is used when logging to stderr:  logging_collector = on      # Enable capturing of stderr and csvlog                      # into log files. Required to be on for                      # csvlogs.                      # (change requires restart)    # These are only used if logging_collector is on:  #log_directory = 'pg_log'       # directory where log files are written,                      # can be absolute or relative to PGDATA  #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'    # log file name pattern,                      # can include strftime() escapes  #log_file_mode = 0600           # creation mode for log files,                      # begin with 0 to use octal notation  #log_truncate_on_rotation = off     # If on, an existing log file with the                      # same name as the new log file will be                      # truncated rather than appended to.                      # But such truncation only occurs on                      # time-driven rotation, not on restarts                      # or size-driven rotation.  Default is                      # off, meaning append to existing files                      # in all cases.  #log_rotation_age = 1d          # Automatic rotation of logfiles will                      # happen after that time.  0 disables.  #log_rotation_size = 10MB       # Automatic rotation of logfiles will                      # happen after that much log output.                      # 0 disables.    # These are relevant when logging to syslog:  #syslog_facility = 'LOCAL0'  #syslog_ident = 'postgres'    # This is only relevant when logging to eventlog (win32):  #event_source = 'PostgreSQL'    # - When to Log -    #client_min_messages = notice       # values in order of decreasing detail:                      #   debug5                      #   debug4                      #   debug3                      #   debug2                      #   debug1                      #   log                      #   notice                      #   warning                      #   error    #log_min_messages = warning     # values in order of decreasing detail:                      #   debug5                      #   debug4                      #   debug3                      #   debug2                      #   debug1                      #   info                      #   notice                      #   warning                      #   error                      #   log                      #   fatal                      #   panic    #log_min_error_statement = error    # values in order of decreasing detail:                      #   debug5                      #   debug4                      #   debug3                      #   debug2                      #   debug1                      #   info                      #   notice                      #   warning                      #   error                      #   log                      #   fatal                      #   panic (effectively off)    #log_min_duration_statement = -1    # -1 is disabled, 0 logs all statements                      # and their durations, > 0 logs only                      # statements running at least this number                      # of milliseconds      # - What to Log -    #debug_print_parse = off  #debug_print_rewritten = off  #debug_print_plan = off  #debug_pretty_print = on  #log_checkpoints = off  #log_connections = off  #log_disconnections = off  #log_duration = off  #log_error_verbosity = default      # terse, default, or verbose messages  #log_hostname = off  log_line_prefix = '%t '  
       # special values:                      #   %a = application name                      #   %u = user name                      #   %d = database name                      #   %r = remote host and port                      #   %h = remote host                      #   %p = process ID                      #   %t = timestamp without milliseconds                      #   %m = timestamp with milliseconds                      #   %i = command tag                      #   %e = SQL state                      #   %c = session ID                      #   %l = session line number                      #   %s = session start timestamp                      #   %v = virtual transaction ID                      #   %x = transaction ID (0 if none)                      #   %q = stop here in non-session                      #        processes                      #   %% = '%'                      # e.g. '<%u%%%d> '  #log_lock_waits = off           # log lock waits >= deadlock_timeout  #log_statement = 'none'         # none, ddl, mod, all  #log_temp_files = -1            # log temporary files equal or larger                      # than the specified size in kilobytes;                      # -1 disables, 0 logs all temp files  log_timezone = 'Asia/Seoul'      #------------------------------------------------------------------------------  # RUNTIME STATISTICS  #------------------------------------------------------------------------------    # - Query/Index Statistics Collector -    #track_activities = on  #track_counts = on  #track_io_timing = off  #track_functions = none         # none, pl, all  #track_activity_query_size = 1024   # (change requires restart)  #update_process_title = on  #stats_temp_directory = 'pg_stat_tmp'      # - Statistics Monitoring -    #log_parser_stats = off  #log_planner_stats = off  #log_executor_stats = off  #log_statement_stats = off      #------------------------------------------------------------------------------  # AUTOVACUUM PARAMETERS  #------------------------------------------------------------------------------    #autovacuum = on            # Enable autovacuum subprocess?  'on'                      # requires track_counts to also be on.  #log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and                      # their durations, > 0 logs only                      # actions running at least this number                      # of milliseconds.  
#autovacuum_max_workers = 3     # max number of autovacuum subprocesses                      # (change requires restart)  #autovacuum_naptime = 1min      # time between autovacuum runs  #autovacuum_vacuum_threshold = 50   # min number of row updates before                      # vacuum  #autovacuum_analyze_threshold = 50  # min number of row updates before                      # analyze  #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum  #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze  #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum                      # (change requires restart)  #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for                      # autovacuum, in milliseconds;                      # -1 means use vacuum_cost_delay  #autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for                      # autovacuum, -1 means use                      # vacuum_cost_limit      #------------------------------------------------------------------------------  # CLIENT CONNECTION DEFAULTS  #------------------------------------------------------------------------------    # - Statement Behavior -    #search_path = '"$user",public'     # schema names  #default_tablespace = ''        # a tablespace name, '' uses the default  #temp_tablespaces = ''          # a list of tablespace names, '' uses                      # only default tablespace  #check_function_bodies = on  #default_transaction_isolation = 'read committed'  #default_transaction_read_only = off  #default_transaction_deferrable = off  #session_replication_role = 'origin'  #statement_timeout = 0          # in milliseconds, 0 is disabled  #lock_timeout = 0           # in milliseconds, 0 is disabled  #vacuum_freeze_min_age = 50000000  #vacuum_freeze_table_age = 150000000  #bytea_output = 'hex'           # hex, escape  #xmlbinary = 'base64'  #xmloption = 'content'    # - Locale and Formatting -    datestyle = 'iso, ymd'  #intervalstyle = 'postgres'  timezone = 'Asia/Seoul'  #timezone_abbreviations = 'Default'     # Select the set of available time zone                      # abbreviations.  Currently, there are                      #   Default                      #   Australia                      #   India                      # You can create your own file in                      # share/timezonesets/.  #extra_float_digits = 0         # min -15, max 3  #client_encoding = sql_ascii        # actually, defaults to database                      # encoding    # These settings are initialized by initdb, but they can be changed.  
lc_messages = 'Korean_Korea.949'            # locale for system error message                      # strings  lc_monetary = 'Korean_Korea.949'            # locale for monetary formatting  lc_numeric = 'Korean_Korea.949'         # locale for number formatting  lc_time = 'Korean_Korea.949'                # locale for time formatting    # default configuration for text search  default_text_search_config = 'pg_catalog.simple'    # - Other Defaults -    #dynamic_library_path = '$libdir'  #local_preload_libraries = ''      #------------------------------------------------------------------------------  # LOCK MANAGEMENT  #------------------------------------------------------------------------------    #deadlock_timeout = 1s  #max_locks_per_transaction = 64     # min 10                      # (change requires restart)  # Note:  Each lock table slot uses ~270 bytes of shared memory, and there are  # max_locks_per_transaction * (max_connections + max_prepared_transactions)  # lock table slots.  #max_pred_locks_per_transaction = 64    # min 10                      # (change requires restart)      #------------------------------------------------------------------------------  # VERSION/PLATFORM COMPATIBILITY  #------------------------------------------------------------------------------    # - Previous PostgreSQL Versions -    #array_nulls = on  #backslash_quote = safe_encoding    # on, off, or safe_encoding  #default_with_oids = off  #escape_string_warning = on  #lo_compat_privileges = off  #quote_all_identifiers = off  #sql_inheritance = on  #standard_conforming_strings = on  #synchronize_seqscans = on    # - Other Platforms and Clients -    #transform_null_equals = off      #------------------------------------------------------------------------------  # ERROR HANDLING  #------------------------------------------------------------------------------    #exit_on_error = off            # terminate session on any error?  #restart_after_crash = on       # reinitialize after backend crash?      #------------------------------------------------------------------------------  # CONFIG FILE INCLUDES  #------------------------------------------------------------------------------    # These options allow settings to be loaded from files other than the  # default postgresql.conf.    #include_dir = 'conf.d'         # include files ending in '.conf' from                      # directory 'conf.d'  #include_if_exists = 'exists.conf'  # include file only if it exists  #include = 'special.conf'       # include file      #------------------------------------------------------------------------------  # CUSTOMIZED OPTIONS  #------------------------------------------------------------------------------    # Add settings for extensions here  

pg_test_fsync result:

C:\temp>"C:\Program Files\PostgreSQL\9.3\bin\pg_test_fsync"
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                   81199.920 ops/sec      12 usecs/op
        fdatasync                                     n/a
        fsync                              45.337 ops/sec   22057 usecs/op
        fsync_writethrough                 46.470 ops/sec   21519 usecs/op
        open_sync                                     n/a

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                   41093.981 ops/sec      24 usecs/op
        fdatasync                                     n/a
        fsync                              38.569 ops/sec   25927 usecs/op
        fsync_writethrough                 36.970 ops/sec   27049 usecs/op
        open_sync                                     n/a

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write                     n/a
         2 *  8kB open_sync writes                    n/a
         4 *  4kB open_sync writes                    n/a
         8 *  2kB open_sync writes                    n/a
        16 *  1kB open_sync writes                    n/a

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close                45.564 ops/sec   21947 usecs/op
        write, close, fsync                33.373 ops/sec   29964 usecs/op

Non-Sync'ed 8kB writes:
        write                             889.800 ops/sec    1124 usecs/op

Doesn't this fsync result mean that maximum write tps cannot exceed 45 tps?

Another question: Shouldn't 7200 rpm disk be able to perform about 120 tps? Why is fsync so slow?
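A hedged note: on Windows the default wal_sync_method is usually open_datasync, and the pg_test_fsync output above shows open_datasync running at ~81k ops/sec, which suggests the drive (or its write cache) is acknowledging writes before they are durable; that alone would explain ~2000 tps on a 7200 rpm disk. A quick sketch to confirm what the server is actually using, assuming the settings really were left at their defaults:

-- Confirm the durability-related settings in effect for this server:
SHOW fsync;
SHOW synchronous_commit;
SHOW wal_sync_method;   -- on Windows this is typically open_datasync by default

If commits should be genuinely flushed through the disk cache on Windows, wal_sync_method = fsync_writethrough is the setting usually suggested; with it, tps would be expected to drop toward the ~45 ops/sec that pg_test_fsync reports for fsync-style methods.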

mysql innodb space x did not exist in memory

Posted: 09 Oct 2013 11:25 AM PDT

In my InnoDB log I got the errors below. How do I fix them, and what do they mean? Some tables are corrupted, but not all.

InnoDB: space id 1753 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute,
InnoDB: space id 1777 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_eav_attribute,
InnoDB: space id 1626 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_int,
InnoDB: space id 1609 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_text,
InnoDB: space id 1610 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_varchar,
InnoDB: space id 1611 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_eav_attribute,
InnoDB: space id 1746 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_eav_attribute_website,
InnoDB: space id 1747 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_form_attribute,
InnoDB: space id 1754 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_label,
InnoDB: space id 1779 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_option,
InnoDB: space id 1780 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_option_value,
InnoDB: space id 1781 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_entity_attribute,
InnoDB: space id 1784 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_form_element,
InnoDB: space id 1792 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/googleshopping_attributes,
InnoDB: space id 1804 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/salesrule_product_attribute,
InnoDB: space id 1516 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/googlebase_attributes,
InnoDB: space id 1798 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_set,
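A hedged first step before anything destructive: take a full mysqldump while the server is still up, then check the tables the log names to see which ones are actually unreadable.

-- Sketch: see whether the tables named in the log are actually readable/corrupt.
CHECK TABLE test_magento.eav_attribute;
CHECK TABLE test_magento.catalog_eav_attribute;
-- ...repeat for the other tables listed in the log.

If CHECK TABLE fails, the usual path is restoring the affected tables from a dump or backup; "space id ... did not exist" messages generally point at a mismatch between the data dictionary in ibdata1 and the per-table .ibd files (for example after files were copied around), which a dump-and-reload of the affected schema cleans up.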

Troubleshooting Mirroring Configuration

Posted: 09 Oct 2013 08:08 AM PDT

The TCP server cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational.

An exception occurred while executing a Transact-SQL statement.

I am configuring database mirroring on SQL Server 2012; after configuration, I get the above error when starting mirroring.
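A hedged checklist sketch for this error: it usually means the partner's mirroring endpoint is not created, not started, or its port is blocked by a firewall. The queries below, run on each partner instance, show whether a mirroring endpoint exists, its state, and its TCP port; the endpoint name in the ALTER is a placeholder for whatever your configuration created.

-- Sketch: inspect the mirroring endpoints on each partner instance.
SELECT name, state_desc, role_desc
FROM sys.database_mirroring_endpoints;

SELECT e.name, t.port
FROM sys.tcp_endpoints AS t
JOIN sys.database_mirroring_endpoints AS e ON e.endpoint_id = t.endpoint_id;

-- If the endpoint exists but is stopped:
-- ALTER ENDPOINT Mirroring STATE = STARTED;   -- endpoint name is a placeholder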

MySQL shutdown unexpectedly

Posted: 09 Oct 2013 08:25 PM PDT

2013-09-09 10:21:44 5776 [Note] Plugin 'FEDERATED' is disabled.
2013-09-09 10:21:44 1624 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
2013-09-09 10:21:44 5776 [Note] InnoDB: The InnoDB memory heap is disabled
2013-09-09 10:21:44 5776 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2013-09-09 10:21:44 5776 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-09-09 10:21:44 5776 [Note] InnoDB: Not using CPU crc32 instructions
2013-09-09 10:21:44 5776 [Note] InnoDB: Initializing buffer pool, size = 16.0M
2013-09-09 10:21:44 5776 [Note] InnoDB: Completed initialization of buffer pool
2013-09-09 10:21:44 5776 [Note] InnoDB: Highest supported file format is Barracuda.
2013-09-09 10:21:44 5776 [Note] InnoDB: The log sequence numbers 0 and 0 in ibdata files do not match the log sequence number 19862295 in the ib_logfiles!
2013-09-09 10:21:44 5776 [Note] InnoDB: Database was not shutdown normally!
2013-09-09 10:21:44 5776 [Note] InnoDB: Starting crash recovery.
2013-09-09 10:21:44 5776 [Note] InnoDB: Reading tablespace information from the .ibd files...
2013-09-09 10:21:44 5776 [ERROR] InnoDB: Attempted to open a previously opened tablespace. Previous tablespace casualme/scraped_user uses space ID: 2 at filepath: .\casualme\scraped_user.ibd. Cannot open tablespace mysql/innodb_index_stats which uses space ID: 2 at filepath: .\mysql\innodb_index_stats.ibd
InnoDB: Error: could not open single-table tablespace file .\mysql\innodb_index_stats.ibd
InnoDB: We do not continue the crash recovery, because the table may become
InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it.
InnoDB: To fix the problem and start mysqld:
InnoDB: 1) If there is a permission problem in the file and mysqld cannot
InnoDB: open the file, you should modify the permissions.
InnoDB: 2) If the table is not needed, or you can restore it from a backup,
InnoDB: then you can remove the .ibd file, and InnoDB will do a normal
InnoDB: crash recovery and ignore that table.
InnoDB: 3) If the file system or the disk is broken, and you cannot remove
InnoDB: the .ibd file, you can set innodb_force_recovery > 0 in my.cnf
InnoDB: and force InnoDB to continue crash recovery here.

Hello guys, I need your help: my phpMyAdmin is not working. How can I make it work again? I have a very large amount of data, and I just want to get my database back.

I have the following files: db.opt, scraped_user.frm, scraped_user.ibd. Is there a way to recover my database from these files? I just want to get the table data.

I want to reinstall XAMPP, but is there a way to back up my database when I can't run it? For example, by copying the 'DBNAME' folder with the files db.opt, scraped_user.frm, and scraped_user.ibd?
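A hedged sketch following option 3 from the log above: start mysqld with innodb_force_recovery just long enough to take a logical dump, then remove the setting and reload. The value 1 is the gentlest level; higher values disable more of InnoDB and should only be raised if the server still won't start. Database and table names below come from the log and the files mentioned.

# my.cnf (my.ini under XAMPP) -- temporary setting, sketch only
[mysqld]
innodb_force_recovery = 1

# Once mysqld starts, dump what you need, then remove the setting and reload:
#   mysqldump -u root -p casualme scraped_user > scraped_user.sql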

Select on several {name,value} pairs

Posted: 09 Oct 2013 12:25 PM PDT

I have a this schema:

CREATE TABLE Pairs(
   id    VARCHAR(8)    NOT NULL,
   name  VARCHAR(40)   NOT NULL,
   value VARCHAR(1000) NOT NULL,
   PRIMARY KEY (id, name)
);

I want to write a query to retrieve the id which is present in several rows with several constraints on name and value like:

  • name like 'forname%' AND value like 'CAMI%'
  • name like 'projet%' AND value like '%gociation'
  • name ... AND value ...
  • ...

The cardinality of the constraints is unknown at compile time, it comes from a GUI.

I wrote this query:

select * from (
        SELECT * FROM Pairs WHERE ( name   =  'projet'   ) AND ( value like '%gociation' )
  union SELECT * FROM Pairs WHERE ( name like 'forname%' ) AND ( value like 'CAMI%' )
) as u order by id

the result is:

id      name      value
AEMIMA  projet    renegociation
AENABE  projet    renegociation
AEREDH  projet    renegociation
AGOGLY  projet    renegociation
AHOGAL  projet    renegociation
AHSPID  projet    renegociation       <<<<<<<<<<<<
AHSPID  fornameE  CAMILLE             <<<<<<<<<<<<
AIOSAP  projet    renegociation
AIPNEU  projet    renegociation

Only the <<<<<<<<<<<< marked lines are good and the id I want is AHSPID.

How can I eliminate the lines which don't match ALL the constraints?
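A hedged sketch of the usual relational-division trick: keep the OR'd per-constraint predicates, but group by id and keep only the ids whose number of distinct matched names equals the number of constraints the GUI produced (2 in this example). It assumes each constraint targets a different name, as in the data shown; the GUI side would generate both the OR list and the final count.

-- Sketch: only ids that satisfy every constraint survive the HAVING clause.
SELECT id
FROM Pairs
WHERE (name =    'projet'   AND value LIKE '%gociation')
   OR (name LIKE 'forname%' AND value LIKE 'CAMI%')
GROUP BY id
HAVING COUNT(DISTINCT name) = 2;   -- 2 = number of constraints built by the GUI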

Restoring of subscriber database failed

Posted: 09 Oct 2013 08:25 AM PDT

We have replicated a database from live to test on SQL Server 2005. Now we need to restore the subscriber database, which is in test, to the same server without any replication settings in the restored database. How can we achieve this?
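A hedged sketch: by default RESTORE does not keep replication settings (preserving them is what the KEEP_REPLICATION option is for), so a plain restore to a new name plus a cleanup call is usually enough. Database names, logical file names, and paths below are placeholders.

-- Sketch: restore a copy without replication metadata; names/paths are placeholders.
RESTORE DATABASE SubscriberCopy
FROM DISK = N'D:\Backups\Subscriber.bak'
WITH MOVE 'Subscriber_Data' TO N'D:\Data\SubscriberCopy.mdf',
     MOVE 'Subscriber_Log'  TO N'D:\Data\SubscriberCopy.ldf',
     RECOVERY;

-- If any replication objects still linger in the restored copy:
EXEC sp_removedbreplication @dbname = N'SubscriberCopy';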

SqlPackage does not pick up variables from profile

Posted: 09 Oct 2013 06:25 PM PDT

I want to upgrade a database using .dacpac and sqlpackage.exe

here is how I run sqlpackage:

SqlPackage.exe      /Action:Publish      /SourceFile:"my.dacpac"      /Profile:"myprofile.publish.xml"  

The error I get is:

* The following SqlCmd variables are not defined in the target scripts: foo.

I have verified that the myprofile.publish.xml file does contain that variable:

<ItemGroup>
  <SqlCmdVariable Include="foo">
    <Value>bc\local</Value>
  </SqlCmdVariable>
</ItemGroup>

I also verified that the project that creates the dacpac publishes successfully from within Visual Studio using myprofile.publish.xml.

What else could I be missing?

(I'm using SQL Server 2012)
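As a hedged workaround sketch (not necessarily the root cause): the variable can also be supplied directly on the command line, which overrides the profile; the value shown is just the one from the profile above:

SqlPackage.exe
    /Action:Publish
    /SourceFile:"my.dacpac"
    /Profile:"myprofile.publish.xml"
    /Variables:foo="bc\local"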

Speeding up mysqldump / reload

Posted: 09 Oct 2013 10:25 AM PDT

I am converting a large schema to file-per-table and will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and changed innodb_flush_log_at_trx_commit to 2 to speed up the load. I am planning to "SET GLOBAL innodb_max_dirty_pages_pct=0;" at some point before the dump. I am curious to know which combination of settings will get me the fastest dump and reload times?

SCHEMA stats:

26 MyISAM tables, 413 InnoDB tables, ~240 GB of data

[--opt (which implies --disable-keys, --extended-insert, --quick, etc.)] plus --no-autocommit??

vs prepending session vars like: "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"

Are the mysqldump options equivalent or not really?

Thanks for your advice!
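Regarding the two approaches above, a sketch of the session-variable route for the reload side, assuming the dump is loaded through the mysql client; this is roughly what mysqldump's --opt / --no-autocommit options emit per table, applied once for the whole load (the file name is illustrative):

SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;          -- only if this load must not be replicated (requires SUPER)

SOURCE /backups/all-databases.sql;   -- hypothetical dump file

COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;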

Repeated values in group_concat

Posted: 09 Oct 2013 04:25 PM PDT

I have two tables; the first is food and the second is Activity:

CREATE TABLE food
    (`id` int, `foodName` varchar(14))   -- implied by the INSERT below
;
INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6')
;
CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME)
;
INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03')
;

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ', ') USING utf8) AS Categories
FROM
    food AS p
JOIN
    ( SELECT activity_type
           , prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type
             , prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC ;

Could you please help me? I need output in the format below:

place  | food_id | Time1   | Time2   | Time3  | Date
-------+---------+---------+---------+--------+----------------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm | 2013-05-01 (MAX timestamp)
place2 | 1       | 10:30am | 12:30am | 4:30am | 2013-05-01 (MAX timestamp)
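A minimal sketch against the Activity table above: it removes the repeated values with GROUP_CONCAT(DISTINCT ...) and takes the latest date per place/food, leaving the split into separate Time1/Time2/Time3 columns to the presentation layer:

SELECT
    place,
    food_id,
    GROUP_CONCAT(DISTINCT timing ORDER BY timing SEPARATOR ' | ') AS timings,
    MAX(date_and_time)                                            AS latest_date
FROM Activity
GROUP BY place, food_id;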

MySQL PDO Cannot assign requested address

Posted: 09 Oct 2013 02:25 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]
    Cannot assign requested address (trying to connect via tcp://****:3306) in /var/www/***
[08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception
    'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow, however it does not solve my problem.
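The usual cause of "[2002] Cannot assign requested address" under a high connection rate is local ephemeral-port exhaustion from opening a fresh TCP connection per request. A minimal PHP sketch (the DSN and credentials are placeholders, not your real values) that reuses connections instead:

<?php
// Persistent connections keep the socket open across requests, so the client
// stops burning a new ephemeral port (and TIME_WAIT entry) for every hit.
$pdo = new PDO(
    'mysql:host=db.example.com;port=3306;dbname=app',  // hypothetical DSN
    $user,
    $password,
    [PDO::ATTR_PERSISTENT => true]
);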

How I prevent deadlock occurrence in my application?

Posted: 09 Oct 2013 07:25 AM PDT

I am developing an LMS application in a PHP framework (CodeIgniter 2.1.0). I am using a MySQL database, and all the tables use the InnoDB engine. I also created indexes on each table. Now I am doing load testing with JMeter 2.9 locally, with 200 concurrent users. During the load test, a specific page action produced a "Deadlock found" error. I changed my original query to a new one, but the same error keeps occurring.

I have written a save_interactions function which takes four parameters (interaction array, module_id, course_id, user_id) and is called many times by an AJAX script. The following code inserts a record if the specific interaction_id is not present in the table; otherwise the update query fires.

public function save_interactions($interaction_array, $modid, $cid, $uid)
{
    foreach ($interaction_array as $key => $interact_value)
    {
        $select_query = $this->db->query("SELECT COUNT(*) AS total FROM `scorm_interactions` WHERE `mod_id`='".$modid."' AND `course_id`='".$cid."' AND `user_id`='".$uid."' AND `interaction_id`='".$interact_value[0]."'");
        $fetchRow = $select_query->row_array();

        if ($fetchRow['total'] == 1)
        {
            $update_data = array(
                "interaction_type" => $interact_value[1],
                "time"             => $interact_value[2],
                "weighting"        => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result"           => $interact_value[6],
                "latency"          => $interact_value[7],
                "objectives"       => $interact_value[8],
                "description"      => $interact_value[9]
            );
            $this->db->where('mod_id', $modid);
            $this->db->where('course_id', $cid);
            $this->db->where('user_id', $uid);
            $this->db->where('interaction_id', $interact_value[0]);
            $this->db->update('scorm_interactions', $update_data);
        }
        else
        {
            $insert_data = array(
                "user_id"          => $uid,
                "course_id"        => $cid,
                "mod_id"           => $modid,
                "interaction_id"   => $interact_value[0],
                "interaction_type" => $interact_value[1],
                "time"             => $interact_value[2],
                "weighting"        => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result"           => $interact_value[6],
                "latency"          => $interact_value[7],
                "objectives"       => $interact_value[8],
                "description"      => $interact_value[9]
            );
            $this->db->insert('scorm_interactions', $insert_data);
        }
    }
}

I got this type of error:

Deadlock found when trying to get lock; try restarting transaction

UPDATE `scorm_interactions` SET      `interaction_type` = 'choice',      `time` = '10:45:31',      `weighting` = '1',      `correct_response` = 'Knees*',      `learner_response` = 'Knees*',      `result` = 'correct',      `latency` = '0000:00:02.11',      `objectives` = 'Question2_1',      `description` = ''  WHERE      `mod_id` =  '4' AND      `course_id` =  '5' AND      `user_id` =  '185' AND      `interaction_id` =  'Question2_1'  ;    Filename: application/models/user/scorm1_2_model.php Line Number: 234  

Can anyone please suggest how to avoid the deadlock?
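One hedged approach (an assumption about the schema, not a confirmed fix): give the lookup columns a unique key and collapse the SELECT-then-INSERT/UPDATE pair into a single upsert, so two concurrent AJAX calls no longer race between the existence check and the write:

-- add once (assumes these four columns uniquely identify a row)
ALTER TABLE scorm_interactions
    ADD UNIQUE KEY uq_interaction (user_id, course_id, mod_id, interaction_id);

-- then per interaction (values shown are taken from the failing statement above)
INSERT INTO scorm_interactions
    (user_id, course_id, mod_id, interaction_id, interaction_type, `time`,
     weighting, correct_response, learner_response, result, latency,
     objectives, description)
VALUES
    (185, 5, 4, 'Question2_1', 'choice', '10:45:31',
     '1', 'Knees*', 'Knees*', 'correct', '0000:00:02.11',
     'Question2_1', '')
ON DUPLICATE KEY UPDATE
    interaction_type = VALUES(interaction_type),
    `time`           = VALUES(`time`),
    learner_response = VALUES(learner_response),
    result           = VALUES(result),
    latency          = VALUES(latency);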

How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB

Posted: 09 Oct 2013 09:25 AM PDT

I am evaluating MySQL Cluster as a possible replacement for an InnoDB schema. So far, I have tested it with 10s of MB of data, and found MySQL Cluster slower than InnoDB; however, I have been told MySQL Cluster scales much better.

How much data does it take to show a performance benefit to MySQL Cluster vs. an InnoDB schema? Or, is there a better way to demonstrate MySQL Cluster's merits?

EDIT

Perhaps an important note: my cluster is currently a heterogeneous cluster of 4 machines. On each machine, I have given an equal amount of Data and Index Memory: 4GB, 2GB, 2GB, and 1GB respectively. The machines run i7s and are connected over a Gigabit LAN. NoOfReplicas is set to 2.

EDIT

This application is a low-usage analytics database, which has roughly 3 tables >= 200M rows and 5 tables <= 10K rows. When we use it, it takes 15 seconds to run our aggregate functions. My boss asked me to research MySQL Cluster, to see if we could increase performance, since we thought aggregate functions could run pretty well in parallel.

Named Pipe Provider Error code 40

Posted: 09 Oct 2013 09:25 PM PDT

I have literally tried everything, from enabling named pipes, to adding port exceptions to the firewall, to everything possible in Surface Area Configuration. I can connect to the SQL instance (using both TCP and named pipes) with SQL Server Management Studio, but sqlcmd throws an error:

Login timeout expired  

Help!

[MS SQL Server] Query taking over 50 mins; Stuff with XML Path

[MS SQL Server] Query taking over 50 mins; Stuff with XML Path


Query taking over 50 mins; Stuff with XML Path

Posted: 09 Oct 2013 12:15 AM PDT

Hi, I am using STUFF with FOR XML PATH to concatenate the data in multiple columns (city, town, etc.) by zip code; i.e., the unique column is Zip, and all the cities are concatenated into one cell, similarly town and so on. The query runs over 1 million rows and the table has 180 columns. I tried adding indexes to the table; it took 50 minutes to run the query. I added a temporary table to get only the 20 required columns and added indexes to the temporary table, which helped reduce the time by about 3 minutes. I checked the execution plan and the issue was the table scan. There are no filters required to generate this report. Can you please suggest what else could be done to improve it? Thanks, Esha

Number of files for database MDF

Posted: 09 Oct 2013 01:43 AM PDT

Hi, I have a system with 1 quad-core CPU and 32GB of RAM. The database files (the MDF only) are on a RAID 10 array; the LDF is on RAID 1 and tempdb is on an SSD. The database is for an ERP application that's quite write- and read-intensive. Is it advisable to create an extra filegroup and add another file (NDF) to the database, moving some tables to that filegroup? Some queries use UNIONs between two large tables. If one table were on the PRIMARY filegroup and the other table on my extra filegroup, would there be any advantage? Thanks, Pedro
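If you do try it, a minimal sketch of the mechanics (the database, filegroup, index, and path names are hypothetical); whether it helps in practice depends on the new file landing on genuinely separate spindles:

ALTER DATABASE ERPDB ADD FILEGROUP FG_SECONDARY;

ALTER DATABASE ERPDB
    ADD FILE (NAME = N'ERPDB_data2',
              FILENAME = N'E:\Data\ERPDB_data2.ndf',
              SIZE = 10GB, FILEGROWTH = 1GB)
    TO FILEGROUP FG_SECONDARY;

-- move a large table by rebuilding its clustered index onto the new filegroup
CREATE UNIQUE CLUSTERED INDEX PK_BigTable ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON) ON FG_SECONDARY;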

SQL Server Top Third Party Monitoring tolls

Posted: 09 Oct 2013 02:58 AM PDT

Can anyone please give me the names and details of the top 4 or 5 SQL Server monitoring tools, i.e. the features and licensing details for each tool? If you can point to a report comparing the tools, that would be appreciated.

[Articles] Letting People Go Securely

[Articles] Letting People Go Securely


Letting People Go Securely

Posted: 08 Oct 2013 11:00 PM PDT

Having employees leave your company is a reality of life. But how do you handle letting your IT workers go and protect your systems? Steve Jones isn't sure there is much you can do to prevent issues, but you can deal with them.

[SQL 2012] creating user databases on ssrs server

[SQL 2012] creating user databases on ssrs server


creating user databases on ssrs server

Posted: 09 Oct 2013 03:31 AM PDT

Hi, can I create a user database to hold the tables that will have data imported from multiple servers using an SSIS package? I have 4 servers; I will get data from all four and am planning to import all of that data into a user database on my reporting server for reporting purposes. Is that recommended? Thanks.

log shipping job failing any idea why?

Posted: 09 Oct 2013 04:09 AM PDT

Hi all, the log shipping job is failing; any idea why? All jobs are failing. Thanks for looking.

query performance vs server to server latency

Posted: 09 Oct 2013 03:48 AM PDT

I've tried running two different queries: one that was based on a much smaller query that didn't scale, and one that I thought would scale better, since it used a CTE to pre-process the data set to test. I've run both separately, and cancelled each after an hour. Both columns are varchar and neither is nullable. Running different portions of each query returns quick results. It's comparing ~56k rows to ~588k rows.

The Thing That Shall Not Scale:

select a1.sam_batch, count(*) as Records
from aww_sample a1, [p42].sample.dbo.wireless_blocks p2
where substring(a1.sam_phonenumber, 4, 9) = p2.match
  and a1.sam_batch between 437 and 449
group by a1.sam_batch

From Query To Eternity:

with cc(sam_batch, phone) as (
    select sam_batch, SUBSTRING(sam_phonenumber, 4, 9)
    from aww_sample
    where sam_batch between 437 and 449
)
select cc.sam_batch, count(*)
from cc, [p42].sample.dbo.wireless_blocks p2
where cc.phone = p2.match
group by cc.sam_batch

So then I re-wrote things a touch and ran from the server with the larger table. It completed in 2 seconds (mostly) flat:

with cc(sam_batch, phone) as (
    select sam_batch, SUBSTRING(sam_phonenumber, 4, 9)
    from [phlsql].capabilities.dbo.aww_sample
    where sam_batch between 437 and 449
)
select cc.sam_batch, count(*) as Records
from cc, sample.dbo.wireless_blocks p2
where cc.phone = p2.match
group by cc.sam_batch

Is there a smart SQL reason for this, or is the network the culprit?

Deriving a Session Number

Posted: 09 Oct 2013 02:57 AM PDT

Hi guys, I could really do with some help trying to derive a field (session_number). I have a staging table which is populated with around 10 million rows daily; each day I need to identify the session number for each row, by user_id. To identify the session_number we need to check whether there is a gap greater than 5 minutes between the previous and current start_time. I have been able to do this on a small subset, but when dealing with the entire 10 million rows the database seems to process for days. Any ideas? e.g.

user_id  start_time  session_number
123      09:00:00    1
123      09:01:00    1
123      09:02:00    1
123      09:03:00    1
123      09:09:00    2
123      09:22:00    3

Thanks, Rich
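A sketch, assuming SQL Server 2012 (this is the SQL 2012 forum) and a staging table named dbo.Staging(user_id, start_time) - the name is hypothetical. It flags a row as a new session when the gap from the previous row exceeds 5 minutes, then turns the flags into session numbers with a running sum, which avoids row-by-row processing:

WITH flagged AS (
    SELECT user_id, start_time,
           CASE WHEN LAG(start_time) OVER (PARTITION BY user_id ORDER BY start_time) IS NULL
                  OR DATEDIFF(MINUTE,
                              LAG(start_time) OVER (PARTITION BY user_id ORDER BY start_time),
                              start_time) > 5
                THEN 1 ELSE 0 END AS new_session
    FROM dbo.Staging
)
SELECT user_id, start_time,
       SUM(new_session) OVER (PARTITION BY user_id
                              ORDER BY start_time
                              ROWS UNBOUNDED PRECEDING) AS session_number
FROM flagged
ORDER BY user_id, start_time;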

Enabling AlwaysOn on an existing HA Shared Storage SQL 2012 Cluster

Posted: 09 Oct 2013 02:44 AM PDT

Hi! I've searched numerous blogs and forums for the answer to this, but can't seem to find any info on it (maybe I'm being blind!). We have an existing Windows Server 2008 R2 / SQL 2012 Enterprise 2-node cluster with shared SAN storage. I'm looking at the possibility of creating a 3rd node, which won't have access to the shared storage. Is it possible to enable AlwaysOn on the existing 2-node cluster and add the 3rd node as part of an AlwaysOn availability group? I'm thinking it won't be possible, but thought I would ask. Many thanks, James

Linked Server error 7357

Posted: 09 Oct 2013 02:41 AM PDT

We're in the process of moving a database from an old SQL Server 2000 instance to SQL Server 2012. Both servers have a linked server which connects to an Oracle 10g database to perform some data retrieval. On 2012, Oracle's ODBC driver is installed, a System DSN is set up, and the linked server (named "ABC" below) tested; everything looks good. Some quick testing using OPENQUERY can select rows as expected.

What isn't working is the execution of an Oracle function from a package. Executed from the 2000 instance it is successful and returns the data expected. Executed from the 2012 instance, the exact same code throws the following:

Msg 7357, Level 16, State 2, Line 1
Cannot process the object "{CALL UB.WRAPADDR.CS2ADDR(1,'4310','E','TRENT','AVE',{resultset 10,ubaddr_ubilacct,ubaddr_buildingnumber, ubaddr_predirection,ubaddr_streetname,ubaddr_streetsuffix})}". The OLE DB provider "MSDASQL" for linked server "ABC" indicates that either the object has no columns or the current user does not have permissions on that object.

Is there a configuration change I need to make on the ODBC DSN, or in the linked server? I don't believe it's a permissions issue - both 2000 and 2012 are using the same Oracle login and I (with sysadmin) get the error.

Partition Creation Time

Posted: 09 Oct 2013 02:37 AM PDT

Hi, I'm using SQL 2012 Enterprise. Does anyone know if there is an easy way to get the creation time of a partition from SQL Server metadata?

IBM iSeries DB2/400 Version 6.1.1 driver to install on SQL Server 2012

Posted: 09 Oct 2013 02:20 AM PDT

Hello all, I would like to know what ODBC driver needs to be installed on a SQL Server 2012 machine to gain access to DB2/400 version 6.1.1. Thanks, Victor S.

Copy Only - Security Policy

Posted: 08 Oct 2013 11:42 PM PDT

Hi all, I'm looking to create a policy that will stop any manual backups being performed unless they are copy-only. I haven't had much success so far. Does anybody know if this is possible? Is there a facet for this? Cheers in advance.

Silent Install Of 2 SSRS Instances With Same Service Account

Posted: 09 Oct 2013 01:00 AM PDT

Hello all, I am working on an automated silent install of SSRS and SQL Server. The silent install is meant to be generic so it can be deployed quickly, so I use built-in accounts: for Report Server I used RSSVCACCOUNT="NT Service\ReportServer" and for the Agent AGTSVCACCOUNT="NT Service\SQLSERVERAGENT". I know it's not best practice, but the goal is to get it installed automatically.

In my testing I am doing a silent install using a config file. The initial install works fine. When I try to do a second install (instance 2) it doesn't work; the only things I change are the instance name and instance ID. I see errors related to invalid accounts, even though they are the same accounts that worked for the first install. One error, for example, is: "The credentials you provided for the Reporting Services service are invalid. To continue, provide a valid account and password for the Reporting Services Service."

My question is: can you use the same service accounts for multiple instances? Any other ideas why this would fail when using the same config file but changing only the instance details? Any help would be greatly appreciated. Thanks in advance.

SQL 2012 with Windows Server 2008 or Windows Server 2012?

Posted: 08 Oct 2013 08:15 AM PDT

We're looking at migrating some of our SQL Servers to SQL 2012 (mainly for the BI improvements), but I'm wondering if it's worth upgrading the OS at the same time from Windows Server 2008 to Windows Server 2012. I've been trying to find articles describing what benefits SQL 2012 gets from running on Windows Server 2012 compared to Windows Server 2008, but have been unable to find anything. Does anyone know of anything?

AlwaysOn Availability Groups

Posted: 08 Oct 2013 11:10 AM PDT

I'm planning to set up AlwaysOn Availability Groups. Questions:
1. Is one cluster node enough to configure AlwaysOn Availability Groups?
2. Must the node hosting the secondary replica be part of the same cluster as the primary?
Thanks,

Getting a semaphore timeout error when executing a job

Posted: 08 Oct 2013 06:54 AM PDT

I'm copying our SQL jobs from the old SQL 2005 server to our new SQL 2012 server. Every one that I've copied and run so far generates the following error:

"A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.) (Microsoft SQL Server, Error: 121)"

This error occurs just trying to get the SQL job started; it doesn't even make it to step 1. I've looked this error up and see that there are a lot of potential causes for it, so yes, I get that it won't be easy to resolve, but I've got to start somewhere. Especially now that someone else is managing our server, I need to know where to point them to get it resolved.

Using Power View in SQL Server Enterprise 2012

Posted: 08 Oct 2013 06:50 AM PDT

Hi all, I understand that Power View forms part of the Enterprise edition of SQL Server 2012, which I just installed on my local machine. As I am very interested in testing the BI (reporting) capabilities of SQL Server 2012, I was looking for Power View but can't seem to find it. Isn't it installed by default in SQL Server 2012 Enterprise? Also, do I need to have SharePoint 2010 installed to be able to use Power View? Thanks for your help!

How to recover a job that i deleted from job activity monitor

Posted: 08 Oct 2013 04:12 AM PDT

Hi all, I accidentally deleted the jobs that were created on the primary and secondary databases when I configured log shipping: the backup job on the primary, and the copy and restore jobs on the secondary. Is there a way to recover these jobs? This is on SQL 2012.

Setting out Database Mail when Office 365 is involved

Posted: 08 Oct 2013 04:19 AM PDT

I'm trying to set up Database Mail in SQL Server 2012, but have come across a problem I don't know how to resolve. We've got a departmental email address which I use for many of the SQL jobs I've created and am migrating to this new instance of SQL 2012. At one point the setup asks for the mail server. We're now using Office 365 for our email, so I have no idea what server I should specify in Database Mail. Help please.
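A sketch, assuming the departmental mailbox authenticates against Office 365's standard SMTP endpoint (smtp.office365.com, port 587, TLS); the account, address, and password below are placeholders:

EXEC msdb.dbo.sysmail_add_account_sp
     @account_name    = N'Dept Mail',
     @email_address   = N'dept@yourdomain.com',
     @mailserver_name = N'smtp.office365.com',
     @port            = 587,
     @enable_ssl      = 1,
     @username        = N'dept@yourdomain.com',
     @password        = N'********';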

[T-SQL] splitting a string into columns

[T-SQL] splitting a string into columns


splitting a string into columns

Posted: 08 Oct 2013 02:53 AM PDT

Hi all, I have a string and I want the output in 3 separate columns:

'A-111:B-2222:C-33333'  ->  A: 111,  B: 2222,  C: 33333
'A-111:B-2222'          ->  A: 111,  B: 2222,  C: -
'A-111'                 ->  A: 111,  B: -,     C: -
'B-2222'                ->  A: -,    B: 2222,  C: -
'B-2222:C-33333'        ->  A: -,    B: 2222,  C: 33333

How can I best achieve this using STUFF or by using a split function in SQL Server? Thanks, VD
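A sketch, assuming SQL Server 2008+ and that the input contains no XML-special characters; it splits on ':' via the XML trick, then pivots the labelled segments into the A/B/C columns:

DECLARE @s varchar(100) = 'B-2222:C-33333';

SELECT
    A = ISNULL(MAX(CASE WHEN v LIKE 'A-%' THEN SUBSTRING(v, 3, 100) END), '-'),
    B = ISNULL(MAX(CASE WHEN v LIKE 'B-%' THEN SUBSTRING(v, 3, 100) END), '-'),
    C = ISNULL(MAX(CASE WHEN v LIKE 'C-%' THEN SUBSTRING(v, 3, 100) END), '-')
FROM (
    SELECT v = x.n.value('.', 'varchar(100)')
    FROM (SELECT CAST('<i>' + REPLACE(@s, ':', '</i><i>') + '</i>' AS xml) AS doc) AS d
    CROSS APPLY d.doc.nodes('/i') AS x(n)
) AS parts;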

BCP text out that includes quotation marks

Posted: 01 Oct 2013 04:45 AM PDT

I'm trying to export a text string that includes " to a text file using BCP. This code works as expected, exporting the word blah into a text file on my C drive:

-- Turn on cmdshell
EXEC sp_configure 'xp_cmdshell', 1
reconfigure
go
DECLARE @cmd varchar(1000)
SET @cmd = 'bcp "SELECT ''blah''" queryout "C:\bcpout.txt" -w -T'
EXEC master..xp_cmdshell @cmd

However, if I change my text string from 'blah' to include quotation marks so it reads 'blah"blah', it fails to do anything:

DECLARE @cmd varchar(1000)
SET @cmd = 'bcp "SELECT ''blah"blah''" queryout "C:\bcpout.txt" -w -T'
EXEC master..xp_cmdshell @cmd

Is there a way I can get the quotation marks exported to my text file using BCP?
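One hedged workaround sketch: build the quote with CHAR(34) inside the query, so no literal " appears inside the double-quoted query string that bcp receives on the command line:

DECLARE @cmd varchar(1000);
SET @cmd = 'bcp "SELECT ''blah'' + CHAR(34) + ''blah''" queryout "C:\bcpout.txt" -w -T';
EXEC master..xp_cmdshell @cmd;
-- bcp executes: SELECT 'blah' + CHAR(34) + 'blah'  -->  blah"blah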

Bulk insert with format file - handling quotation mark text qualifiers in header row

Posted: 08 Oct 2013 10:57 PM PDT

I'm trying to BULK INSERT a flat file. The file has a header row. In addition to the delimiter character, which is a comma (,), the header row contains text qualifiers: double quotes (").

The text file looks like this when opened in Notepad:

"Column1Name","Column2Name","Column3Name"
"Row2Column1Data","Row2Column2Data","Row2Column3Data"

I would like to use the BULK INSERT command with the format file option to import the data into a table in my database. The table in my database looks like this:

CREATE TABLE [dbo].[BulkInsertedData](
    [Column1Name] [nvarchar](4000) NULL,
    [Column2Name] [nvarchar](4000) NULL,
    [Column3Name] [nvarchar](4000) NULL
) ON [PRIMARY]

If I try to import from a flat file that is identical in all respects except that it does not contain the qualifiers, I can import without any problems. Without qualifiers, the format file looks like this:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="Column1Name" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="4000"/>
    <FIELD ID="Column2Name" xsi:type="CharTerm" TERMINATOR="," MAX_LENGTH="4000"/>
    <FIELD ID="Column3Name" xsi:type="CharTerm" TERMINATOR="\r" MAX_LENGTH="4000"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="Column1Name" NAME="Column1Name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="Column2Name" NAME="Column2Name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="Column3Name" NAME="Column3Name" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>

The bulk insert command I use to get the data in looks like this:

BULK INSERT BulkInsertedData
FROM 'E:\1.txt'
WITH (
    FORMATFILE = 'E:\FormatFile.txt',
    FIRSTROW = 2,  -- first row has column headings
    KEEPIDENTITY
)

My problem is that I can't figure out how to change the format file so that the bulk insert works when the data is surrounded by text qualifiers as per the text file described above. I'm guessing the problem is escaping certain characters, but after hacking at it for a few hours, I haven't been able to figure out where to put the escape characters. Has anyone else successfully done this, and if so, how did your XML format file differ from mine?
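A sketch of the usual dummy-field approach for quoted files (not tested against this exact file): add a throwaway field that consumes the leading quote, and make the real terminators include the quotes. The &quot; entity is just the XML escape for ":

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="LeadingQuote" xsi:type="CharTerm" TERMINATOR="&quot;"        MAX_LENGTH="2"/>
    <FIELD ID="Column1Name"  xsi:type="CharTerm" TERMINATOR="&quot;,&quot;" MAX_LENGTH="4000"/>
    <FIELD ID="Column2Name"  xsi:type="CharTerm" TERMINATOR="&quot;,&quot;" MAX_LENGTH="4000"/>
    <FIELD ID="Column3Name"  xsi:type="CharTerm" TERMINATOR="&quot;\r"      MAX_LENGTH="4000"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="Column1Name" NAME="Column1Name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="Column2Name" NAME="Column2Name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="Column3Name" NAME="Column3Name" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>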

Convert comma to single quotes

Posted: 16 Aug 2012 09:15 PM PDT

Hi experts, this is my table:

Condition                        Columnname
akraft,crunckel                  TN
AL,AZ                            State
Atlanta,Austin-San Marcos        MAC
aney,aventura                    Area
Alexandria, VA,Arlington, TX     Market
Alpharetta,Alexandria            City
001,002                          Store
SameStore,HeitmanI               Store Type
InLast6Months,CS-CC              Space Types

I want to convert the column like this:

Condition                                Columnname
'akraft','crunckel'                      TN
'AL','AZ'                                State
'Atlanta','Austin-San Marcos'            MAC
'aney','aventura'                        Area
'Alexandria', 'VA','Arlington', 'TX'     Market
'Alpharetta','Alexandria'                City
'001','002'                              Stores
'SameStore','HeitmanI'                   StoreType
'InLast6Months','CS-CC'                  SpaceTypes

I want to add a single quotation mark at the start and end of each value. Can anyone help me achieve this? Thanks, FAJ
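A minimal sketch (the table name is hypothetical): wrap the whole list in quotes and replace each comma with quote-comma-quote:

SELECT Columnname,
       '''' + REPLACE(Condition, ',', ''',''') + '''' AS QuotedCondition
FROM dbo.MyConditions;   -- hypothetical table name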

updating a column based on a min(value) in the where clause

Posted: 08 Oct 2013 08:48 AM PDT

I would like to update a column in a table from a field in another table, but based on the minimum value of a column. Example table carinfo (id, car, color, rank):

1, vw, grey, 1
1, vw, white, 2
1, vw, red, 3
1, vw, blue, 4
2, audi, black, 1
2, audi, white, 2

The query I have so far is:

update table cars set color = color from carinfo where cars.id = carinfo.id and [min (rank)?]

I can't seem to figure it out; any help is appreciated.
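A sketch, assuming SQL Server 2005+ and a target table named cars(id, color) as implied by the attempt above: number the carinfo rows per id by rank and join on the first one.

;WITH best AS (
    SELECT id, color,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY [rank]) AS rn
    FROM carinfo
)
UPDATE c
SET    c.color = b.color
FROM   cars AS c
JOIN   best AS b ON b.id = c.id AND b.rn = 1;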

[SQL Server 2008 issues] Catch invalid object and rollback transaction

[SQL Server 2008 issues] Catch invalid object and rollback transaction


Catch invalid object and rollback transaction

Posted: 08 Oct 2013 02:22 PM PDT

I have some code with transaction logic in it. When an update fails, I want the transaction to roll back. The problem I'm having is that if my transaction fails because of an invalid object name, it doesn't capture an error number and leaves my transaction open. Is there a better way to capture this update error and roll back the transaction?

BEGIN TRANSACTION

print 'error1'
print @@Error
-- @@Error is 0

UPDATE a SET b = 1 FROM a, c WHERE a.t = c.t
-- table c does not exist, so the update statement fails

print 'error2'
print @@Error
-- nothing prints

IF @@ERROR <> 0
BEGIN
    GOTO ErrorTrans
END

ErrorTrans:
print 'in errortrans'
ROLLBACK TRANSACTION
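A sketch of one possible shape for this: "invalid object name" is raised when the statement is compiled, so a same-level @@ERROR check never runs. SET XACT_ABORT ON guarantees the open transaction is rolled back when the batch aborts, and pushing the statement down a level with EXEC lets TRY/CATCH see the error too:

SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRANSACTION;
    EXEC('UPDATE a SET b = 1 FROM a JOIN c ON a.t = c.t');  -- c may not exist
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    PRINT ERROR_MESSAGE();
END CATCH;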

user permissions to only for view

Posted: 08 Oct 2013 04:27 PM PDT

I need to give user 'dhii' SELECT permission on a view only, not on all tables. The view has columns from two tables in the same database (ABC). 'dhii' should only see the columns in this view, not any underlying tables. What are the steps to follow?
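A minimal sketch, assuming the view is dbo.MyView (a hypothetical name) and the user already exists in database ABC. Granting SELECT only on the view is enough: as long as the view and tables share an owner and no table permissions are granted, ownership chaining lets the view read the tables while the user cannot query them directly.

USE ABC;
GRANT SELECT ON OBJECT::dbo.MyView TO dhii;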

Sql Server Help Online - ????

Posted: 08 Oct 2013 12:40 PM PDT

I find it difficult to navigate through the Help Viewer home (SQL Server Help online - local). I looked up data types, and it listed a bunch of types, but geospatial data types weren't even listed. Only when I searched for geospatial data did the type and help display. When it did display, it didn't even provide any sort of "path" to tell me where, or in what section of the Help 'book', that info is categorized. If I could figure out the big picture of where things are located, I might be better able to locate the resources for categories of info, instead of just looking everything up word by word. It just seems like it's organized in pieces all over the place; the contents view of Help itself doesn't even list data types anywhere that I can see. Does anyone have any helpful suggestions? Thanks, f

Date range query problem

Posted: 08 Oct 2013 04:43 AM PDT

Does anyone know what is wrong with this query? I'm trying to make a date range from 7 days before 'today' to 21 days after 'today':

"SELECT tbl_pac2k_remarks.[Change Request] as ChangeRequest, tbl_pac2k_remarks.[Vetting Status] as VettingStatus, tbl_pac2k_remarks.[Clarification Remark] as ClarificationRemark,
        tbl_pac2k.[CHANGE REQUEST], tbl_pac2k.STATUS, tbl_pac2k.[Installation Start Date], tbl_pac2k.[Installation End Date], tbl_pac2k.[REQUESTER NAME], tbl_pac2k.DESCRIPTION,
        tbl_pac2k.[OUTAGE REQUIRED], tbl_pac2k.[BUSINESS IMPACT], tbl_pac2k.[Approval Type], tbl_pac2k.[Approval Group], tbl_pac2k.[OWNER GROUP], tbl_pac2k.[OWNER TECHNOLOGY GROUP], tbl_pac2k.CRL
 FROM tbl_pac2k_remarks
 INNER JOIN tbl_pac2k ON tbl_pac2k_remarks.[Change Request] = tbl_pac2k.[CHANGE REQUEST]
 WHERE (tbl_pac2k.[Installation Start Date] > DATE_SUB([Installation Start Date], INTERVAL 7 DAY))
   AND WHERE (tbl_pac2k.[Installation Start Date] < DATE_ADD([Installation Start Date], INTERVAL 21 DAY))
 ORDER BY tbl_pac2k.[OWNER GROUP], tbl_pac2k.[Installation Start Date]"
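Two things stand out: the second WHERE keyword is invalid, and DATE_SUB/DATE_ADD are MySQL functions while the range is anchored to the column itself rather than 'today'. A sketch of the predicate alone, assuming SQL Server (this is the SQL Server 2008 forum) and today as the anchor:

WHERE tbl_pac2k.[Installation Start Date] >= DATEADD(DAY, -7, CAST(GETDATE() AS date))
  AND tbl_pac2k.[Installation Start Date] <= DATEADD(DAY, 21, CAST(GETDATE() AS date))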

ERROR IN SSIS

Posted: 08 Oct 2013 05:30 AM PDT

I have a package I am running, and about once a week I get the errors below. I am inserting data through a lookup (lookup match output) into a SQL 2008 database.

Error: Directing the row to the error output failed with error code 0x80070057.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Insert 1" (3223) failed with error code 0xC0209022 while processing input "OLE DB Destination Input" (3236). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Merge Join 3 1" (3039) failed with error code 0xC0047020 while processing input "Merge Join Left Input" (3044). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

Any ideas? Shaun

Slowdown After Moving to New Virtual Machine

Posted: 08 Oct 2013 05:23 AM PDT

Hi there! I'm having a bit of a conundrum with one of our servers currently. Last night, we moved our SQL box from its previous virtual machine to a new one, and provisioned some extra resources for it (more CPU, slightly faster disks). However, for some reason, everything's moving really sloooooowly on the server now. The SQL instance has grabbed up all the resources it usually does (28GB of RAM), and the CPU is spinning at about 25-40% usage, with occasional spikes up to 100% for a few seconds if things get heavy. All settings and so on are as they were previously, and these readings match what I usually see. However, one of our heavier procedures that usually takes about 5 minutes has now been running for around 40 minutes without finishing.

Because of this, I'm not really sure what's slowing the server down; I'm not seeing any memory pressure from DMV queries, and I've checked to make sure the power-saving CPU limiter is off, so I don't have much of an idea as to what else I could tune. Is there something special that needs to be done when moving to a new virtual machine? I wasn't involved with the move, so I'm not entirely certain what transpired during it, but I don't think it should've shredded our performance quite this badly. Thank you for any assistance you can provide!

query to concatenate/pivot? base on group by

Posted: 08 Oct 2013 04:35 AM PDT

Hi, is it possible to achieve the result illustrated below in a nice single query? I started doing a PIVOT, but I'm still missing the concatenation; maybe I should go with XML PATH? A bit lost here. Thanks to all.

;with t as (
    select * from (
        select 21 id1, 200 id2, 'prod1' prodID union
        select 21 id1, 200 id2, 'prod22' prodID union
        select 21 id1, 200 id2, 'prod333' prodID union
        select 11 id1, 400 id2, 'prod11' prodID union
        select 11 id1, 400 id2, 'prod22' prodID union
        select 11 id1, 900 id2, 'prod01' prodID union
        select 33 id1, 300 id2, 'prod3' prodID
    ) a
)
-- select * From t

Desired result, grouped by id1, id2:

id1 | id2 | ProdIDs
21  | 200 | prod1, prod22, prod33
11  | 400 | prod11, prod22
11  | 900 | prod01
33  | 300 | prod3
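A sketch of the FOR XML PATH concatenation pattern, written to follow directly after the existing ";with t as (...)" definition above (STUFF strips the leading separator):

select t1.id1, t1.id2,
       STUFF((select ', ' + t2.prodID
              from t as t2
              where t2.id1 = t1.id1 and t2.id2 = t1.id2
              order by t2.prodID
              for xml path('')), 1, 2, '') as ProdIDs
from t as t1
group by t1.id1, t1.id2;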

Replication issue - Never seen this before

Posted: 08 Oct 2013 04:06 AM PDT

When I tried to locate the Xact Seqno for the article causing the issue, I got this error; can anyone help me understand what could be the cause?

EXEC sp_browsereplcmds
    @xact_seqno_start = '0x00077C630003F4150005',
    @xact_seqno_end   = '0x00077C630003F4150005'

(0 row(s) affected)
Message: Invalid attempt to read when no data is present.
Call Stack:
   at System.Data.SqlClient.SqlDataReaderSmi.EnsureOnRow(String operationName)
   at System.Data.SqlClient.SqlDataReaderSmi.GetBytes(Int32 ordinal, Int64 fieldOffset, Byte[] buffer, Int32 bufferOffset, Int32 length)
   at Microsoft.SqlServer.Replication.ReplCmdsReader.convert_command(Boolean& fFirstChuck_in_cmd, Int32& iParamMarker, ParamToken& tokenReader, Object[]& rgobj, SqlDataReader& reader)
   at Microsoft.SqlServer.Replication.ReplCmdsReader.sp_printstatement(String strCmd)
Debug dump: cbTSQL: 1702127982 wParam: 0 iParam: 0 oDatum: 0 oNextToken: 0
<Reset />

sp_generate_inserts

Posted: 07 Oct 2013 11:45 PM PDT

I am still using sp_generate_inserts:

(Copyright © 2002 Narayana Vyas Kondreddi. All rights reserved.)
Purpose: To generate INSERT statements from existing data. These INSERTs can be executed to regenerate the data at some other location. This procedure is also useful to create a database setup, where you can script your data along with your table definitions.
Written by: Narayana Vyas Kondreddi http://vyaskn.tripod.com

This code has not changed for a number of years and is still functioning, but is there an alternative for it within the SQL Server product? I am aware of the 'Generate Scripts' database task. Thanks for your time and attention, Ben

DecryptByKey - returns me only 2

Posted: 07 Oct 2013 09:05 PM PDT

Hi all, I have a SQL Server that encrypts/decrypts sensitive data from an nvarchar column into a binary one using an INSTEAD OF trigger. When I try to decode the data with the same key, the sensitive data comes back looking like '2'. Any ideas what is happening? Best regards, Krastio Kostov
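A hedged sketch (key, certificate, table, and column names are hypothetical): DecryptByKey returns varbinary, and casting it back to a different character type than was encrypted typically yields a single garbled character, so decrypt with a cast that matches the original nvarchar:

OPEN SYMMETRIC KEY MyKey DECRYPTION BY CERTIFICATE MyCert;

SELECT CAST(DecryptByKey(SensitiveCol) AS nvarchar(200)) AS PlainText
FROM dbo.MyTable;

CLOSE SYMMETRIC KEY MyKey;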

Issues with certain domain users accessing a SQL server

Posted: 08 Oct 2013 01:52 AM PDT

I can no longer connect via SSMS to a specific server using my domain account. Here are the facts and what I have checked. This is a SQL 2008 R2 RTM machine; SP2 is to be installed next month.

--I can remote into the server and connect using my account to SSMS
--TCP enabled on 1433
--NP disabled
--I can telnet to the IP on 1433
--Other domain users can connect just fine - there is one other user getting the same thing
--Ran xp_logininfo and my login is part of a group with sysadmin rights
--I can log in from my laptop to SSMS using a SQL user
--Made sure SQL Browser was active
--I have the maxtokensize set
--It is not just my laptop - I remoted into a 3rd server (not the SQL one) and I wasn't able to connect from that server to the other SQL Server either

Errors I encounter:

SSMS connection error - A connection was established, but then an error occurred during the login process (provider: TCP provider, error: 0 - The specified network name is no longer available.) SQL error 64
ODBC connection - error 64, specified network name is no longer available
SQL error log message - The login packet used to open the connection is structurally invalid; the connection has been closed. Please contact the vendor of the client library. Error 17832, severity 20, state 8.

Again, others can access just fine.

Update Based on Select with Multiple Returns

Posted: 08 Oct 2013 12:07 AM PDT

Good morning! I'm trying to update a column based on a WHERE clause that uses a SELECT statement as the qualifier. My goal is to find duplicate records in my table, then change a value in one of the columns so that users can search for and view these duplicate records and decide how they'd like to deal with them.

Here's the SELECT statement that does a nice job of finding my duplicates:

select IDX_Invoice_Number, IDX_Vendor_Number, status
from _obj_3
group by IDX_Invoice_Number, IDX_Vendor_number, status
having COUNT(*) > 1

Here's what I'd use to accomplish my goal, if it were possible:

update _obj_3
set status = '7'
where idx_invoice_number in
    (select IDX_Invoice_Number, IDX_Vendor_Number, status
     from _obj_3
     group by IDX_Invoice_Number, IDX_Vendor_number, status
     having COUNT(*) > 1)

But I get an error because the SELECT statement yields multiple returns:

'Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.'

Which it will, because they're duplicates that it's finding. I've tried using an INNER JOIN back to itself with no luck, and just created a VIEW using my SELECT statement, hoping to be able to update the VIEW. No luck there, as the VIEW won't allow me to update it. Well, thanks for taking a look, and I can't wait to see what comes back. Thanks!
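A sketch of the join-back approach the post is reaching for: keep the duplicate-finding query as a derived table and join on all three columns instead of a single-column IN.

UPDATE o
SET    o.status = '7'
FROM   _obj_3 AS o
JOIN  (SELECT IDX_Invoice_Number, IDX_Vendor_Number, status
       FROM   _obj_3
       GROUP  BY IDX_Invoice_Number, IDX_Vendor_Number, status
       HAVING COUNT(*) > 1) AS d
  ON  d.IDX_Invoice_Number = o.IDX_Invoice_Number
  AND d.IDX_Vendor_Number  = o.IDX_Vendor_Number
  AND d.status             = o.status;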

.NET Framework 3.5 install fails when installing SQL 2008R2 on Windows Server 2003

Posted: 08 Oct 2013 12:52 AM PDT

Hello, I am having issues installing SQL 2008 R2. After launching the install file, setup tries to install .NET Framework 3.5. After a few prompts it fails with the following error: "Microsoft .NET Framework 3.5 SP1 has encountered a problem during setup. Setup did not complete correctly." I have seen plenty of blogs with the same issue but nothing seems to work. I have a high-priority project due and I cannot seem to get SQL 2008 installed. Please help. Here is what came out of my logs:

[10/07/13,09:52:30] Microsoft .NET Framework 2.0a: [2] Error: Installation failed for component Microsoft .NET Framework 2.0a. MSI returned error code 1624
[10/07/13,09:52:52] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a is not installed.

Export a table to a flat file. Can't get the quotes correctly.

Posted: 07 Oct 2013 10:35 PM PDT

While exporting a table using the export wizard, I tried using double quotes as a text qualifier and got "" Column value "" as output. How do I get a single set of quotes around the value?

Display columns as rows

Posted: 07 Oct 2013 09:02 PM PDT

Hi, I have one row in a table like below:

c1  c2  c3  c4
2   4   6   5

Now I want the result like below:

column1  column2
c1       2
c2       4
c3       6
c4       5

Thank you in advance.
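A minimal UNPIVOT sketch, assuming SQL Server and a table named dbo.T with the four integer columns shown (the table name is hypothetical):

SELECT u.column1, u.column2
FROM dbo.T
UNPIVOT (column2 FOR column1 IN (c1, c2, c3, c4)) AS u;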

Tuesday, October 8, 2013

[SQL Server] migrating of data from 2008r2 to 2012

[SQL Server] migrating of data from 2008r2 to 2012


migrating of data from 2008r2 to 2012

Posted: 07 Oct 2013 09:07 PM PDT

Hi all, we are using SQL Server 2008 R2 for our application. We want to integrate with another application which is using SQL Server 2012, so we need to send a few tables. Can I send data between the applications? Is that possible?
