Wednesday, October 9, 2013

[how to] calculate days and equally distribute remaining

calculate days and equally distribute remaining

Posted: 09 Oct 2013 01:37 PM PDT

I'm having difficulty calculating the remaining days when hours are distributed over a date range within a given time period. For this example, the time period would be 2013-11-01 through 2014-05-01 (it will always be a 6 month period).

The calendar table holds a 30-year set of dates.1 A value of 1 for weekday indicates Monday - Friday.

Setup

CREATE TABLE #calendar (
    [d] date PRIMARY KEY,
    [weekday] tinyint DEFAULT 0
);

INSERT INTO #calendar ([d], [weekday]) VALUES
('2006-01-01', 0),
...
('2013-01-01', 1),
('2013-01-02', 1),
('2013-01-03', 1),
...
('2014-12-28', 0),
('2014-12-29', 1),
('2014-12-30', 1),
('2014-12-31', 1),
...
('2035-12-31', 1);

CREATE TABLE #tasks (
    [task] int PRIMARY KEY,
    [begin] date,
    [end] date,
    [hours] decimal(6,3)
);

INSERT INTO #tasks ([task], [begin], [end], [hours]) VALUES
(1, '2013-11-05', '2013-11-05', 5.0),
(2, '2013-10-01', '2013-11-15', 2.0),
(3, '2013-11-15', '2013-12-31', 80.0),
(4, '2013-05-01', '2013-12-04', 2.0),
(5, '2013-07-01', '2013-11-27', 15),
(6, '2013-11-01', '2013-12-10', 40),
(7, '2013-07-01', '2013-08-01', 50.0),
(8, '2014-06-01', '2014-07-01', 10.0);

Part 1

Getting the total weekdays for the tasks:

DECLARE @periodStart date, @periodEnd date;

SET @periodStart = '2013-11-01';
SET @periodEnd = '2014-05-01';

SELECT a.[task], a.[begin], a.[end], a.[hours],
    (SELECT SUM([weekday])
     FROM #calendar
     WHERE [d] BETWEEN a.[begin] AND a.[end]) AS weekdays
FROM #tasks a
WHERE (a.[begin] < @periodStart AND a.[end] > @periodEnd)
    OR (a.[begin] >= @periodStart AND a.[end] < @periodEnd)
    OR (a.[begin] < @periodEnd AND a.[end] > @periodStart)

Result

task        begin      end        hours    weekdays
----------- ---------- ---------- -------  --------
1           2013-11-05 2013-11-05 5.000    1
2           2013-10-01 2013-11-15 2.000    34
3           2013-11-15 2013-12-31 80.000   33
4           2013-05-01 2013-12-04 2.000    156
5           2013-07-01 2013-11-27 15.000   108
6           2013-11-01 2013-12-10 40.000   28

Part 2

Getting the workdays for the time period:

DECLARE @periodStart date, @periodEnd date;

SET @periodStart = '2013-11-01';
SET @periodEnd = '2014-05-01';

SELECT a.[task], a.[begin], a.[end], a.[hours],
    (SELECT SUM([weekday])
     FROM #calendar
     WHERE [d] BETWEEN a.[begin] AND a.[end]) AS weekdays,
    m1.weekdays AS m1wd,
    m2.weekdays AS m2wd,
    m3.weekdays AS m3wd,
    m4.weekdays AS m4wd,
    m5.weekdays AS m5wd,
    m6.weekdays AS m6wd
FROM #tasks a
CROSS JOIN
    -- Month 1 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+1, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+2, -1)) m1
CROSS JOIN
    -- Month 2 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+2, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+3, -1)) m2
CROSS JOIN
    -- Month 3 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+3, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+4, -1)) m3
CROSS JOIN
    -- Month 4 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+4, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+5, -1)) m4
CROSS JOIN
    -- Month 5 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+5, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+6, -1)) m5
CROSS JOIN
    -- Month 6 Weekdays
    (SELECT CAST(SUM([weekday]) AS decimal(3,1)) AS weekdays FROM #calendar
     WHERE [d] BETWEEN DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+6, 0)
         AND DATEADD(MONTH, DATEDIFF(MONTH, 0, @periodStart)+7, -1)) m6
WHERE (a.[begin] < @periodStart AND a.[end] > @periodEnd)
    OR (a.[begin] >= @periodStart AND a.[end] < @periodEnd)
    OR (a.[begin] < @periodEnd AND a.[end] > @periodStart)

Result

task   begin      end        hours   weekdays    m1wd  m2wd  m3wd  m4wd  m5wd  m6wd
------ ---------- ---------- ------- ----------- ----- ----- ----- ----- ----- -----
1      2013-11-05 2013-11-05 5.000   1           22.0  23.0  20.0  21.0  22.0  22.0
2      2013-10-01 2013-11-15 2.000   34          22.0  23.0  20.0  21.0  22.0  22.0
3      2013-11-15 2013-12-31 80.000  33          22.0  23.0  20.0  21.0  22.0  22.0
4      2013-05-01 2013-12-04 2.000   156         22.0  23.0  20.0  21.0  22.0  22.0
5      2013-07-01 2013-11-27 15.000  108         22.0  23.0  20.0  21.0  22.0  22.0
6      2013-11-01 2013-12-10 40.000  28          22.0  23.0  20.0  21.0  22.0  22.0

Problem

Since each task can start/end at different times, how do I calculate and distribute the hours over the remaining days in the months? In part 2, would it be easier to just return a table between the period dates with the workdays and then pivot it?

Desired Result

task   begin      end        hours   weekdays    m1h       m2h       ...   m6h
------ ---------- ---------- ------- ----------- --------  --------  ----- ---
1      2013-11-05 2013-11-05 5.000   1           5
2      2013-10-01 2013-11-15 2.000   34          0.647059
3      2013-11-15 2013-12-31 80.000  33          26.66667  53.33333
4      2013-05-01 2013-12-04 2.000   156         0.269231  0.038462
5      2013-07-01 2013-11-27 15.000  108         2.638889
6      2013-11-01 2013-12-10 40.000  28          30        10

1 Generate a set or sequence without loops
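One way to approach this (a sketch against the setup above, using the same temp tables and variables; month numbering assumes @periodStart is the first day of month 1) is to work per day rather than per month: join each task to the weekday calendar rows that fall inside both the task range and the reporting period, spread the task's hours evenly over all of its weekdays, then sum per month and pivot with CASE:

DECLARE @periodStart date, @periodEnd date;

SET @periodStart = '2013-11-01';
SET @periodEnd = '2014-05-01';

-- Per-day sketch: each task's hours are spread evenly over ALL weekdays of the
-- task, so days outside the reporting window simply drop out of the monthly sums.
-- (A task with zero weekdays would need guarding against division by zero.)
;WITH perday AS (
    SELECT t.[task],
           DATEDIFF(MONTH, @periodStart, c.[d]) + 1 AS period_month,   -- 1..6
           t.[hours]
             / (SELECT SUM([weekday]) FROM #calendar
                WHERE [d] BETWEEN t.[begin] AND t.[end]) AS hours_per_day
    FROM #tasks t
    JOIN #calendar c
      ON c.[d] BETWEEN t.[begin] AND t.[end]
     AND c.[d] >= @periodStart AND c.[d] < @periodEnd
     AND c.[weekday] = 1
)
SELECT [task],
       SUM(CASE WHEN period_month = 1 THEN hours_per_day END) AS m1h,
       SUM(CASE WHEN period_month = 2 THEN hours_per_day END) AS m2h,
       SUM(CASE WHEN period_month = 3 THEN hours_per_day END) AS m3h,
       SUM(CASE WHEN period_month = 4 THEN hours_per_day END) AS m4h,
       SUM(CASE WHEN period_month = 5 THEN hours_per_day END) AS m5h,
       SUM(CASE WHEN period_month = 6 THEN hours_per_day END) AS m6h
FROM perday
GROUP BY [task]
ORDER BY [task];

Going per-day sidesteps the question of how much of each month a task overlaps, and it also covers Part 2: the per-period workdays are just the calendar rows between @periodStart and @periodEnd grouped by month, which can be pivoted the same way.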

The specified @job_id does not exist

Posted: 09 Oct 2013 12:52 PM PDT

After deleting a SQL Server Agent job I continue to get notifications that the job attempted to run and failed. I am receiving the following error message.

[000] Unable to retrieve steps for job my_job
[298] SQLServer Error: 14262, The specified @job_id ('CC65FEFB-0033-442E-B55E-6209E25C5039') does not exist. [SQLSTATE 42000]

I have checked the following tables in the msdb database and none of them have the specified job_id except the dbo.sysjobhistory table.

  • dbo.sysjobs
  • dbo.sysjobschedules
  • dbo.sysjobsteps
  • dbo.sysjobstepslogs
  • dbo.sysjobservers
  • dbo.sysjobactivity
  • dbo.sysjobhistory

Is there anywhere else I can look?
Also, today I have stopped and restarted the SQL Server Agent and will update this post tomorrow with the result.

Thank you in advance!
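One thing that might help narrow it down (a sketch, SQL Server 2008+ syntax; the GUID is the one from the error message): search every msdb table that exposes a job_id column for the orphaned value, since maintenance-plan and schedule tables can keep references to deleted jobs.

DECLARE @job_id uniqueidentifier = 'CC65FEFB-0033-442E-B55E-6209E25C5039';
DECLARE @sql nvarchar(max) = N'';

-- Build one UNION ALL query over every msdb table that has a job_id column.
SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT ''' + t.name + N''' AS table_name, COUNT(*) AS rows_found'
    + N' FROM msdb.dbo.' + QUOTENAME(t.name)
    + N' WHERE job_id = @job_id'
FROM msdb.sys.tables AS t
WHERE EXISTS (SELECT 1 FROM msdb.sys.columns AS c
              WHERE c.object_id = t.object_id AND c.name = 'job_id');

EXEC sp_executesql @sql, N'@job_id uniqueidentifier', @job_id = @job_id;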

mysql is locked; can't kill query

Posted: 09 Oct 2013 08:52 PM PDT

I have a mediawiki running on my local machine and just started running mysqldump last night:

$ mysqldump -h localhost -u wikiuser --default-character-set=UTF8 wikidb --add-locks --skip-lock-table> ~/Dropbox/admin/wikimedia_backup/backup.sql

But now mediawiki protests

Warning: The database has been locked for maintenance, so you will not be able to save your edits right now. You may wish to cut-n-paste the text into a text file and save it for later. The administrator who locked it offered this explanation: Dumping Database, Access will be restored shortly

This sent me looking for locks in the wikidb database. I ran

UNLOCK TABLES;  Query OK, 0 rows affected (0.00 sec)  

but the problem persists.

My mysqldump statement runs with the --skip-lock-tables option (I think I should add --add-locks, or replace it with that?)

I reran the command with both options.

I also tried to kill the process outright:

mysql> SHOW FULL PROCESSLIST \G;
*************************** 1. row ***************************
     Id: 77
   User: wikiuser
   Host: localhost
     db: wikidb
Command: Query
   Time: 0
  State: NULL
   Info: SHOW FULL PROCESSLIST
1 row in set (0.00 sec)

ERROR:
No query specified

mysql> kill query 77;
ERROR 1317 (70100): Query execution was interrupted

So, two questions:

  1. how do I lift the existing lock;
  2. what options should I use in the mysqldump call to avoid the problem in the future? (see the sketch below)
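For question 2, assuming the wiki tables are InnoDB, the usual shape is something like the following (a sketch): --single-transaction takes a consistent snapshot without holding table locks, and --skip-lock-tables stops mysqldump from issuing LOCK TABLES itself. The maintenance banner itself reads like MediaWiki's own $wgReadOnly message (often set in LocalSettings.php during backups) rather than a MySQL-level lock, so that setting may be what still needs clearing.

mysqldump -h localhost -u wikiuser --default-character-set=UTF8 \
    --single-transaction --skip-lock-tables \
    wikidb > ~/Dropbox/admin/wikimedia_backup/backup.sql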

Multi-master and multi slave

Posted: 09 Oct 2013 01:05 PM PDT

I have been having some issues creating a redundant multi master setup. Here's the proposed layout:

[diagram: dbsetup]

The problem I'm having is that when something is written to master1 it replicates to slave1 and master2, but not to slave2. And when something is written to master2 it replicates to slave2 and master1, but not to slave1.

Is there any way I can get this to work where both slaves are updated no matter which master is written to?
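The usual cause (a sketch of the fix, assuming standard MySQL replication with each slave replicating only from its own master): a slave forwards events it receives from its master to its own slaves only if it writes them to its own binary log, which requires log_slave_updates. Enabling it on both masters should let writes from either master reach both slaves.

# my.cnf on master1 and master2 (values are placeholders)
[mysqld]
server-id         = 1          # must be unique on every server
log-bin           = mysql-bin
log_slave_updates = 1          # re-log events received from the other master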

Is there a way to determine automatic storage table spaces?

Posted: 09 Oct 2013 12:23 PM PDT

First off, we are on both Windows and AIX and are using DB2 ESE 9.7 FP 4.

I am trying to determine dynamically whether or not table spaces are set up via automatic storage or not (so we can dynamically script how to lower the high water mark when necessary). I can see whether they are defined SMS or DMS "under the covers" via statements like db2 list tablespaces or db2 list tablespaces show detail.

I have also issued queries against the system catalog tables and views (ie, select * from syscat.tablespaces and select * from sysibm.systablespaces).

All I can tell is that the tablespace is defined as SMS or DMS. How can I tell whether the tablespace is configured for automatic storage? I know there must be a way to do it dynamically, because if I use IBM Data Studio, visit the tablespaces "view" and click on properties, I can see whether or not the table space is automatic storage. I am assuming there must be a configuration setting or a query behind that.

Just wondering, how do we figure this out?
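One candidate query, as a sketch only: on 9.7 the MON_GET_TABLESPACE table function (and, I believe, the SNAPTBSP administrative view) reports a TBSP_USING_AUTO_STORAGE flag; the exact column names are worth verifying against your fix pack before scripting around them.

-- Assumption: DB2 9.7 MON_GET_TABLESPACE; 1 = automatic storage, 0 = not
SELECT VARCHAR(tbsp_name, 30)  AS tbsp_name,
       tbsp_type,                  -- SMS or DMS "under the covers"
       tbsp_using_auto_storage
FROM TABLE(MON_GET_TABLESPACE('', -2)) AS t
ORDER BY tbsp_name;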

How to move partitions between filesystems

Posted: 09 Oct 2013 11:13 AM PDT

We have a ~9TB MySQL database consisting of a few tables which are partitioned by day. The sum of daily information is ~20GB. The database is practically write-only. There are at most tens of queries against the data per day.

What I would like to achieve is to use fast disks for current day partitions while there is data load activity. And somehow move those partitions to slow and cheap disks when day changes and new partition is started.

How can I do that with minimal or no downtime? Data load activity is 24/7 but can be delayed for a few minutes.
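One direction worth testing (a sketch, assuming MySQL 5.6+, innodb_file_per_table, RANGE partitioning by day, and placeholder names throughout): swap the finished day's partition into a plain archive table whose tablespace is created on the slow disks via DATA DIRECTORY. EXCHANGE PARTITION is a metadata swap, so it is near-instant, but whether the swapped data then really lives under the archive table's DATA DIRECTORY is exactly the part to verify on a test copy first.

-- 1. Empty archive table with the same columns/indexes as the partitioned
--    table, but not partitioned, created on the slow storage.
CREATE TABLE events_2013_10_08 (
    id         BIGINT NOT NULL,
    created_at DATETIME NOT NULL,
    payload    TEXT,
    PRIMARY KEY (id, created_at)
) ENGINE=InnoDB
  DATA DIRECTORY = '/slow_disks/mysql_archive';

-- 2. Swap yesterday's partition with the empty table (MySQL 5.6+).
ALTER TABLE events EXCHANGE PARTITION p20131008 WITH TABLE events_2013_10_08;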

Is it possible to optimize a range query so an explain statement would state "using index" instead of "using index condition"?

Posted: 09 Oct 2013 11:13 AM PDT

I have a very large table where specific queries made by my application are timing out. The table looks something like this:

LargeTable
------------
Id [BIGINT(20)]
UtcTimestamp [BIGINT(20)]
Pivot [BIGINT(20)]
Details [Text]

I have an index over 1) Pivot 2) UtcTimestamp.

Now, the query I'm running looks something like this:

SELECT Id, Pivot, UtcTimestamp, Details
FROM LargeTable
WHERE UtcTimestamp >= 1380067200
  AND UtcTimestamp <= 1380153599
  AND Pivot = 2;

Since there is an index over the pivot and timestamp, this query should be blazing fast, but it doesn't appear to be. I ran an explain on this query expecting "using index", but instead I get "using index condition". Here is the explain:

+----+-------------+------------+-------+------------------------+------------------------+---------+------+------+-----------------------+
| id | select_type | table      | type  | possible_keys          | key                    | key_len | ref  | rows | Extra                 |
+----+-------------+------------+-------+------------------------+------------------------+---------+------+------+-----------------------+
|  1 | SIMPLE      | LargeTable | range | pivot_and_timestamp_id | pivot_and_timestamp_id | 18      | NULL |  963 | Using index condition |
+----+-------------+------------+-------+------------------------+------------------------+---------+------+------+-----------------------+

It's definitely selecting the right index, but why is this query not as fast as it should be?

The MySQL server version is 5.6.14.
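"Using index" only appears when the query is covered by the index, i.e. every selected column can be read from the index alone; "Using index condition" just means index condition pushdown is being applied to the range scan. Details is a TEXT column, so it can never be covered. Dropping it from the select list should flip the plan (a sketch, relying on the fact that InnoDB secondary indexes implicitly include the primary key):

EXPLAIN
SELECT Id, Pivot, UtcTimestamp      -- all available inside the composite index + PK
FROM LargeTable
WHERE Pivot = 2
  AND UtcTimestamp >= 1380067200
  AND UtcTimestamp <= 1380153599;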

Issue In INSERT INTO SELECT Statement [duplicate]

Posted: 09 Oct 2013 10:31 AM PDT

This question already has an answer here:

Can anyone please advise how to rectify the code below?

INSERT INTO table1 values(10, SELECT col1 FROM table2, null, null);

--- Here table1 has only 4 columns (in the actual table there are 100 columns and I don't want to mention each column and then enter values)

Is it possible to use a SELECT statement inside VALUES during a table insert?
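If table2 can return more than one row, VALUES won't work, since each VALUES row needs scalar expressions. The usual fix is to drive the whole insert from a SELECT that mixes constants and columns (a sketch, assuming table1 really has four columns in this order and you want one inserted row per row of table2):

INSERT INTO table1
SELECT 10, col1, NULL, NULL
FROM table2;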

Many to many link tables with history?

Posted: 09 Oct 2013 09:55 AM PDT

I'm new here.

While creating a PostgreSQL database of customer subscribed services with DNS domains, I want to:

  • enable customers to subscribe to multiple services
  • each service can have multiple domain names per customer
  • a domain name must be unique per customer per timestamp

Example customers:

1 | Sun Microsystems (USA) Inc.
2 | Sun Microsystems (UK) Ltd.

Example domains:

1 | sun.com
2 | sun.co.nz
3 | sun.co.uk

Say Sun USA subscribes with the .com & .nz domains in January, then decides in April to have the offshore UK office manage these services instead, along with the .uk domain. The customer ID therefore changes at a given timestamp, for billing in a different currency.

A domain name must be live (have no deleted_at timestamp) for only one customer at a time. A customer may delete a domain and re-add it later, or another customer may add it later.

For billing, I need to know which customer had what domains active on what dates.

How do I manage the link between domains & subscriptions, and maintain a history?

CREATE TABLE subscription
(
  id serial NOT NULL,
  customer integer NOT NULL,
  service integer NOT NULL,
  created timestamp with time zone NOT NULL DEFAULT now(),
  suspended timestamp with time zone,
  ceased timestamp with time zone,
  CONSTRAINT subscription_pk PRIMARY KEY (id),
  CONSTRAINT subscription_customer_fk FOREIGN KEY (customer)
      REFERENCES customer (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT subscription_service_fk FOREIGN KEY (service)
      REFERENCES service (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT
);

CREATE TABLE dns_domain
(
  id serial NOT NULL,
  name character varying(256) NOT NULL,
  added_at timestamp with time zone NOT NULL DEFAULT now(),
  CONSTRAINT dns_domain_pk PRIMARY KEY (id),
  CONSTRAINT dns_domain_uk UNIQUE (name)
);

This simple link table works, but without maintaining any history:

CREATE TABLE subscribed_dns_domain
(
  subscription integer NOT NULL,
  dns_domain integer NOT NULL,
  CONSTRAINT subscribed_dns_domain_pk PRIMARY KEY (subscription, dns_domain),
  CONSTRAINT subscribed_dns_domain_dns_domain_fk FOREIGN KEY (dns_domain)
      REFERENCES dns_domain (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT subscribed_dns_domain_subscription_fk FOREIGN KEY (subscription)
      REFERENCES subscription (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT
);

If I want to allow multiple instances of the same subscribed domain on different dates (only 1 having a NULL deleted_at timestamp), how do I do that? This doesn't work:

DROP TABLE subscribed_dns_domain;

CREATE TABLE subscribed_dns_domain
(
  subscription integer NOT NULL,
  dns_domain integer NOT NULL,
  added_at timestamp with time zone NOT NULL DEFAULT now(),
  deleted_at timestamp with time zone,
  customer_add_reference character varying(40),
  customer_delete_reference character varying(40),
  CONSTRAINT subscribed_dns_domain_pk PRIMARY KEY (subscription, dns_domain, added_at),
  CONSTRAINT subscribed_dns_domain_dns_domain_fk FOREIGN KEY (dns_domain)
      REFERENCES dns_domain (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT subscribed_dns_domain_subscription_fk FOREIGN KEY (subscription)
      REFERENCES subscription (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT
);

Should I revert to the first subscribed_dns_domain table, and also have a subscribed_dns_domain_history table, with timestamps populated by triggers on the simple subscribed_dns_domain table? Not sure how to populate customer reference/ticket numbers in there too...

How about partial unique indexes or EXCLUDE USING gist () on the above table?
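A partial unique index is one way to express the "only one live row per domain" rule on the history-keeping table above (a sketch against the second subscribed_dns_domain definition):

-- At most one row per domain may still be live (deleted_at IS NULL);
-- any number of historical rows for the same domain remain allowed.
CREATE UNIQUE INDEX subscribed_dns_domain_live_uk
    ON subscribed_dns_domain (dns_domain)
    WHERE deleted_at IS NULL;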

Note: The preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT. The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly.

From http://www.postgresql.org/docs/9.2/static/indexes-unique.html

Thoughts/tutorial websites?

Oracle 11g Enterprise downgrade to Standard One

Posted: 09 Oct 2013 06:05 AM PDT

I'm checking the possibility of downgrading Oracle 11g from Enterprise Edition to Standard Edition One. I've cross-checked the edition comparison with

select name from dba_feature_usage_statistics where currently_used='TRUE'  

Query result from dba_feature_usage_statistics:

Job Scheduler
Oracle Java Virtual Machine (user)
Oracle Java Virtual Machine (system)
Automatic Memory Tuning
Oracle Utility Metadata API
Segment Advisor
Automatic SQL Tuning Advisor
Object
Materialized Views (User)
Audit Options
Automatic Maintenance - Optimizer Statistics Gathering
Automatic Maintenance - Space Advisor
Automatic Maintenance - SQL Tuning Advisor
Automatic Segment Space Management (system)
Automatic Segment Space Management (user)
Automatic Undo Management
Client Identifier
Character Set
Locally Managed Tablespaces (system)
Locally Managed Tablespaces (user)
Parallel SQL Query Execution
Partitioning (system)
Recovery Area
Recovery Manager (RMAN)
RMAN - Disk Backup
Backup BASIC Compression
Logfile Multiplexing
Server Parameter File
Shared Server
Streams (user)
LOB
SecureFiles (system)

I think Parallel SQL Query Execution and Partitioning (system) are Enterprise Edition-only features, but I'm not sure about the rest, especially:

Automatic Memory Tuning
Segment Advisor
Automatic SQL Tuning Advisor
Automatic Maintenance - SQL Tuning Advisor
Recovery Area

Do you guys know which edition of Oracle is needed to use these features?

Get unmatched rows for foreign key

Posted: 09 Oct 2013 01:31 PM PDT

Consider two tables:

CREATE TABLE Users(
  id INT PRIMARY KEY,
  name VARCHAR(128)
) Engine=InnoDB;

CREATE TABLE Tags(
  id INT PRIMARY KEY,
  user INT REFERENCES Users(id),
  tag VARCHAR(128)
) Engine=InnoDB;

I would like to know which Users do not have any Tags assigned to them. I could do a LEFT JOIN and then add a WHERE clause for no Tags, but as there are tens of thousands of Users and each could have a few tens of Tags, I think that would be impractical. Is there a better way?

This is running on MySQL 5.5 and 5.6 on Ubuntu Server 12.04 running in Amazon EC2.
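An anti-join is the usual shape for this; with an index on Tags(user), either of the standard forms tends to be fine even at this size (a sketch):

SELECT u.id, u.name
FROM Users AS u
WHERE NOT EXISTS (SELECT 1 FROM Tags AS t WHERE t.user = u.id);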

Can't see SQL Server Agent in SSMS [duplicate]

Posted: 09 Oct 2013 06:29 AM PDT

This question already has an answer here:

I have SSMS 2012 installed on my laptop and SQL Server 2012 on my hosting, but when I try to schedule a SQL job I can't find "SQL Server Agent" in the list on the left side.

How to store schema-less data efficiently in a relational database?

Posted: 09 Oct 2013 02:07 PM PDT

I'm trying to get a nice balance of searchability and speed. I'm experimenting with different methods.

Option 1: Serialize the array as JSON/PHP serialized array and store it in a 'meta' column.

id  |  name   | meta
1      Bob       {"city":"GoTown","birthdate":"1980\/8\/14"}
2      Alice     {"city":"Streamville","birthdate":"1986\/6\/6"}

Option 2: Store keys and values together in a stack.

user_id  |    key      |   value
1         name          Bob
1         city          GoTown
1         birthdate     1980/8/14
2         name          Alice
2         city          Streamville
2         birthdate     1986/6/6

Option 3: 3 tables: entities, keys, values. Store each key name only once.

user_id  |   name
1         Bob
2         Alice

key_id   |   keyname
1         city
2         birthdate

user_id   |   key_id   |   value
1          1            GoTown
1          2            1980/8/14
2          1            Streamville
2          2            1986/6/6

Are there any pitfalls with using any of these strategies? I'd like to eliminate some of them if they have insurmountable disadvantages.
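One pitfall worth weighing for option 2: every attribute you filter on costs a self-join, and every value is compared as a string. A sketch of what a two-attribute search looks like (the table name user_meta is hypothetical, standing in for the option 2 stack):

SELECT u1.user_id
FROM user_meta AS u1
JOIN user_meta AS u2
  ON u2.user_id = u1.user_id
WHERE u1.`key` = 'city'      AND u1.value = 'Streamville'
  AND u2.`key` = 'birthdate' AND u2.value = '1986/6/6';

Option 1 pushes that kind of filtering into the application (or into LIKE matching on the JSON text), while option 3 behaves like option 2 with one extra join to resolve key_id.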

How to restrict users to select only query through application textbox field?

Posted: 09 Oct 2013 06:40 AM PDT

Say that I have a web application that lets users type in a SQL query (SQL Server 2008+). I want users to be able to run any SELECT query and read whatever is in the database, but I do not want them to be able to insert/update/drop/truncate/exec etc. (in short: only select). Please note that no data, nor even the database metadata, is considered confidential anywhere in the database.

The rules currently in place:

  1. No ";" character. If it is there, it must be exactly one and at the end of the trimmed query string.
  2. Query must begin with select (case ignored, white-spaces are part of regex).
  3. All newlines are removed from the query.

Is this enough? Is there any way to do anything else than just select?

EDIT: Thank you Thomas Stringer for the note about SELECT ... INSERT. This is exactly the information I was after. I will remove all the newlines from the query string, which should help solve this problem. Is there any other way to get around this and be able to modify the data?

NOTE: CTEs are not relevant for this case. Let's assume that the query has to begin with SELECT.
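String filtering is hard to make watertight; the usual complementary measure is to run the user-supplied text under a connection whose login can only read. A sketch (names and password are placeholders; the ALTER ROLE form is SQL Server 2012+, sp_addrolemember works on 2008):

CREATE LOGIN report_reader WITH PASSWORD = 'placeholder-strong-password';
CREATE USER report_reader FOR LOGIN report_reader;
ALTER ROLE db_datareader ADD MEMBER report_reader;          -- 2012+
-- EXEC sp_addrolemember 'db_datareader', 'report_reader';  -- 2008 / 2008 R2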

TPS too high in PostgreSQL

Posted: 09 Oct 2013 06:56 AM PDT

I created a table:

CREATE TABLE foo (
    id integer primary key,
    bar integer NOT NULL
);

and inserted one row:

INSERT INTO foo (id, bar) VALUES (1, 1);  

and ran a pgbench of the following simple update statement.

script.txt:

update foo set bar=bar+1 where id=1  

pgbench result:

C:\my-temp>"c:\Program Files\PostgreSQL\9.3\bin\pgbench.exe" -U postgres -c 1 -t 10000 -n -f script.txt testdb
Password:
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 10000
number of transactions actually processed: 10000/10000
tps = 2052.384567 (including connections establishing)
tps = 2058.699883 (excluding connections establishing)

Given that it's serially executing transactions, this tps result seems to be too high for my system:

OS: Windows 8
CPU: Intel Core i3-2100 3.10GHz
RAM: 4GB
Disk: SATA II 7200 rpm
PostgreSQL: 9.3

I haven't changed the default settings, so fsync and synchronous_commit options should be both on.
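A quick way to confirm what the session actually sees (a sketch):

SELECT name, setting
FROM pg_settings
WHERE name IN ('fsync', 'synchronous_commit', 'wal_sync_method');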

Here is my postgresql.conf:

# -----------------------------  # PostgreSQL configuration file  # -----------------------------  #  # This file consists of lines of the form:  #  #   name = value  #  # (The "=" is optional.)  Whitespace may be used.  Comments are introduced with  # "#" anywhere on a line.  The complete list of parameter names and allowed  # values can be found in the PostgreSQL documentation.  #  # The commented-out settings shown in this file represent the default values.  # Re-commenting a setting is NOT sufficient to revert it to the default value;  # you need to reload the server.  #  # This file is read on server startup and when the server receives a SIGHUP  # signal.  If you edit the file on a running system, you have to SIGHUP the  # server for the changes to take effect, or use "pg_ctl reload".  Some  # parameters, which are marked below, require a server shutdown and restart to  # take effect.  #  # Any parameter can also be given as a command-line option to the server, e.g.,  # "postgres -c log_connections=on".  Some parameters can be changed at run time  # with the "SET" SQL command.  #  # Memory units:  kB = kilobytes        Time units:  ms  = milliseconds  #                MB = megabytes                     s   = seconds  #                GB = gigabytes                     min = minutes  #                                                   h   = hours  #                                                   d   = days      #------------------------------------------------------------------------------  # FILE LOCATIONS  #------------------------------------------------------------------------------    # The default values of these variables are driven from the -D command-line  # option or PGDATA environment variable, represented here as ConfigDir.    #data_directory = 'ConfigDir'       # use data in another directory                      # (change requires restart)  #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file                      # (change requires restart)  #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file                      # (change requires restart)    # If external_pid_file is not explicitly set, no extra PID file is written.  #external_pid_file = ''         # write an extra PID file                      # (change requires restart)      #------------------------------------------------------------------------------  # CONNECTIONS AND AUTHENTICATION  #------------------------------------------------------------------------------    # - Connection Settings -    listen_addresses = '*'      # what IP address(es) to listen on;                      # comma-separated list of addresses;                      # defaults to 'localhost'; use '*' for all                      # (change requires restart)  port = 5432             # (change requires restart)  max_connections = 100           # (change requires restart)  # Note:  Increasing max_connections costs ~400 bytes of shared memory per  # connection slot, plus lock space (see max_locks_per_transaction).  
#superuser_reserved_connections = 3 # (change requires restart)  #unix_socket_directories = ''   # comma-separated list of directories                      # (change requires restart)  #unix_socket_group = ''         # (change requires restart)  #unix_socket_permissions = 0777     # begin with 0 to use octal notation                      # (change requires restart)  #bonjour = off              # advertise server via Bonjour                      # (change requires restart)  #bonjour_name = ''          # defaults to the computer name                      # (change requires restart)    # - Security and Authentication -    #authentication_timeout = 1min      # 1s-600s  #ssl = off              # (change requires restart)  #ssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH'   # allowed SSL ciphers                      # (change requires restart)  #ssl_renegotiation_limit = 512MB    # amount of data between renegotiations  #ssl_cert_file = 'server.crt'       # (change requires restart)  #ssl_key_file = 'server.key'        # (change requires restart)  #ssl_ca_file = ''           # (change requires restart)  #ssl_crl_file = ''          # (change requires restart)  #password_encryption = on  #db_user_namespace = off    # Kerberos and GSSAPI  #krb_server_keyfile = ''  #krb_srvname = 'postgres'       # (Kerberos only)  #krb_caseins_users = off    # - TCP Keepalives -  # see "man 7 tcp" for details    #tcp_keepalives_idle = 0        # TCP_KEEPIDLE, in seconds;                      # 0 selects the system default  #tcp_keepalives_interval = 0        # TCP_KEEPINTVL, in seconds;                      # 0 selects the system default  #tcp_keepalives_count = 0       # TCP_KEEPCNT;                      # 0 selects the system default      #------------------------------------------------------------------------------  # RESOURCE USAGE (except WAL)  #------------------------------------------------------------------------------    # - Memory -    shared_buffers = 128MB          # min 128kB                      # (change requires restart)  #temp_buffers = 8MB         # min 800kB  #max_prepared_transactions = 0      # zero disables the feature                      # (change requires restart)  # Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory  # per transaction slot, plus lock space (see max_locks_per_transaction).  # It is not advisable to set max_prepared_transactions nonzero unless you  # actively intend to use prepared transactions.  
#work_mem = 1MB             # min 64kB  #maintenance_work_mem = 16MB        # min 1MB  #max_stack_depth = 2MB          # min 100kB    # - Disk -    #temp_file_limit = -1           # limits per-session temp file space                      # in kB, or -1 for no limit    # - Kernel Resource Usage -    #max_files_per_process = 1000       # min 25                      # (change requires restart)  #shared_preload_libraries = ''      # (change requires restart)    # - Cost-Based Vacuum Delay -    #vacuum_cost_delay = 0          # 0-100 milliseconds  #vacuum_cost_page_hit = 1       # 0-10000 credits  #vacuum_cost_page_miss = 10     # 0-10000 credits  #vacuum_cost_page_dirty = 20        # 0-10000 credits  #vacuum_cost_limit = 200        # 1-10000 credits    # - Background Writer -    #bgwriter_delay = 200ms         # 10-10000ms between rounds  #bgwriter_lru_maxpages = 100        # 0-1000 max buffers written/round  #bgwriter_lru_multiplier = 2.0      # 0-10.0 multipler on buffers scanned/round    # - Asynchronous Behavior -    #effective_io_concurrency = 1       # 1-1000; 0 disables prefetching      #------------------------------------------------------------------------------  # WRITE AHEAD LOG  #------------------------------------------------------------------------------    # - Settings -    #wal_level = minimal            # minimal, archive, or hot_standby                      # (change requires restart)  #fsync = on             # turns forced synchronization on or off  #synchronous_commit = on        # synchronization level;                      # off, local, remote_write, or on  #wal_sync_method = fsync        # the default is the first option                      # supported by the operating system:                      #   open_datasync                      #   fdatasync (default on Linux)                      #   fsync                      #   fsync_writethrough                      #   open_sync  #full_page_writes = on          # recover from partial page writes  #wal_buffers = -1           # min 32kB, -1 sets based on shared_buffers                      # (change requires restart)  #wal_writer_delay = 200ms       # 1-10000 milliseconds    #commit_delay = 0           # range 0-100000, in microseconds  #commit_siblings = 5            # range 1-1000    # - Checkpoints -    #checkpoint_segments = 3        # in logfile segments, min 1, 16MB each  #checkpoint_timeout = 5min      # range 30s-1h  #checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0  #checkpoint_warning = 30s       # 0 disables    # - Archiving -    #archive_mode = off     # allows archiving to be done                  # (change requires restart)  #archive_command = ''       # command to use to archive a logfile segment                  # placeholders: %p = path of file to archive                  #               %f = file name only                  # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  #archive_timeout = 0        # force a logfile segment switch after this                  # number of seconds; 0 disables      #------------------------------------------------------------------------------  # REPLICATION  #------------------------------------------------------------------------------    # - Sending Server(s) -    # Set these on the master and on any standby that will send replication data.    
#max_wal_senders = 0        # max number of walsender processes                  # (change requires restart)  #wal_keep_segments = 0      # in logfile segments, 16MB each; 0 disables  #wal_sender_timeout = 60s   # in milliseconds; 0 disables    # - Master Server -    # These settings are ignored on a standby server.    #synchronous_standby_names = '' # standby servers that provide sync rep                  # comma-separated list of application_name                  # from standby(s); '*' = all  #vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed    # - Standby Servers -    # These settings are ignored on a master server.    #hot_standby = off          # "on" allows queries during recovery                      # (change requires restart)  #max_standby_archive_delay = 30s    # max delay before canceling queries                      # when reading WAL from archive;                      # -1 allows indefinite delay  #max_standby_streaming_delay = 30s  # max delay before canceling queries                      # when reading streaming WAL;                      # -1 allows indefinite delay  #wal_receiver_status_interval = 10s # send replies at least this often                      # 0 disables  #hot_standby_feedback = off     # send info from standby to prevent                      # query conflicts  #wal_receiver_timeout = 60s     # time that receiver waits for                      # communication from master                      # in milliseconds; 0 disables      #------------------------------------------------------------------------------  # QUERY TUNING  #------------------------------------------------------------------------------    # - Planner Method Configuration -    #enable_bitmapscan = on  #enable_hashagg = on  #enable_hashjoin = on  #enable_indexscan = on  #enable_indexonlyscan = on  #enable_material = on  #enable_mergejoin = on  #enable_nestloop = on  #enable_seqscan = on  #enable_sort = on  #enable_tidscan = on    # - Planner Cost Constants -    #seq_page_cost = 1.0            # measured on an arbitrary scale  #random_page_cost = 4.0         # same scale as above  #cpu_tuple_cost = 0.01          # same scale as above  #cpu_index_tuple_cost = 0.005       # same scale as above  #cpu_operator_cost = 0.0025     # same scale as above  #effective_cache_size = 128MB    # - Genetic Query Optimizer -    #geqo = on  #geqo_threshold = 12  #geqo_effort = 5            # range 1-10  #geqo_pool_size = 0         # selects default based on effort  #geqo_generations = 0           # selects default based on effort  #geqo_selection_bias = 2.0      # range 1.5-2.0  #geqo_seed = 0.0            # range 0.0-1.0    # - Other Planner Options -    #default_statistics_target = 100    # range 1-10000  #constraint_exclusion = partition   # on, off, or partition  #cursor_tuple_fraction = 0.1        # range 0.0-1.0  #from_collapse_limit = 8  #join_collapse_limit = 8        # 1 disables collapsing of explicit                      # JOIN clauses      #------------------------------------------------------------------------------  # ERROR REPORTING AND LOGGING  #------------------------------------------------------------------------------    # - Where to Log -    log_destination = 'stderr'      # Valid values are combinations of                      # stderr, csvlog, syslog, and eventlog,                      # depending on platform.  csvlog                      # requires logging_collector to be on.    
# This is used when logging to stderr:  logging_collector = on      # Enable capturing of stderr and csvlog                      # into log files. Required to be on for                      # csvlogs.                      # (change requires restart)    # These are only used if logging_collector is on:  #log_directory = 'pg_log'       # directory where log files are written,                      # can be absolute or relative to PGDATA  #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'    # log file name pattern,                      # can include strftime() escapes  #log_file_mode = 0600           # creation mode for log files,                      # begin with 0 to use octal notation  #log_truncate_on_rotation = off     # If on, an existing log file with the                      # same name as the new log file will be                      # truncated rather than appended to.                      # But such truncation only occurs on                      # time-driven rotation, not on restarts                      # or size-driven rotation.  Default is                      # off, meaning append to existing files                      # in all cases.  #log_rotation_age = 1d          # Automatic rotation of logfiles will                      # happen after that time.  0 disables.  #log_rotation_size = 10MB       # Automatic rotation of logfiles will                      # happen after that much log output.                      # 0 disables.    # These are relevant when logging to syslog:  #syslog_facility = 'LOCAL0'  #syslog_ident = 'postgres'    # This is only relevant when logging to eventlog (win32):  #event_source = 'PostgreSQL'    # - When to Log -    #client_min_messages = notice       # values in order of decreasing detail:                      #   debug5                      #   debug4                      #   debug3                      #   debug2                      #   debug1                      #   log                      #   notice                      #   warning                      #   error    #log_min_messages = warning     # values in order of decreasing detail:                      #   debug5                      #   debug4                      #   debug3                      #   debug2                      #   debug1                      #   info                      #   notice                      #   warning                      #   error                      #   log                      #   fatal                      #   panic    #log_min_error_statement = error    # values in order of decreasing detail:                      #   debug5                      #   debug4                      #   debug3                      #   debug2                      #   debug1                      #   info                      #   notice                      #   warning                      #   error                      #   log                      #   fatal                      #   panic (effectively off)    #log_min_duration_statement = -1    # -1 is disabled, 0 logs all statements                      # and their durations, > 0 logs only                      # statements running at least this number                      # of milliseconds      # - What to Log -    #debug_print_parse = off  #debug_print_rewritten = off  #debug_print_plan = off  #debug_pretty_print = on  #log_checkpoints = off  #log_connections = off  #log_disconnections = off  #log_duration = off  #log_error_verbosity = default      # terse, default, or verbose messages  #log_hostname = off  log_line_prefix = '%t '  
       # special values:                      #   %a = application name                      #   %u = user name                      #   %d = database name                      #   %r = remote host and port                      #   %h = remote host                      #   %p = process ID                      #   %t = timestamp without milliseconds                      #   %m = timestamp with milliseconds                      #   %i = command tag                      #   %e = SQL state                      #   %c = session ID                      #   %l = session line number                      #   %s = session start timestamp                      #   %v = virtual transaction ID                      #   %x = transaction ID (0 if none)                      #   %q = stop here in non-session                      #        processes                      #   %% = '%'                      # e.g. '<%u%%%d> '  #log_lock_waits = off           # log lock waits >= deadlock_timeout  #log_statement = 'none'         # none, ddl, mod, all  #log_temp_files = -1            # log temporary files equal or larger                      # than the specified size in kilobytes;                      # -1 disables, 0 logs all temp files  log_timezone = 'Asia/Seoul'      #------------------------------------------------------------------------------  # RUNTIME STATISTICS  #------------------------------------------------------------------------------    # - Query/Index Statistics Collector -    #track_activities = on  #track_counts = on  #track_io_timing = off  #track_functions = none         # none, pl, all  #track_activity_query_size = 1024   # (change requires restart)  #update_process_title = on  #stats_temp_directory = 'pg_stat_tmp'      # - Statistics Monitoring -    #log_parser_stats = off  #log_planner_stats = off  #log_executor_stats = off  #log_statement_stats = off      #------------------------------------------------------------------------------  # AUTOVACUUM PARAMETERS  #------------------------------------------------------------------------------    #autovacuum = on            # Enable autovacuum subprocess?  'on'                      # requires track_counts to also be on.  #log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and                      # their durations, > 0 logs only                      # actions running at least this number                      # of milliseconds.  
#autovacuum_max_workers = 3     # max number of autovacuum subprocesses                      # (change requires restart)  #autovacuum_naptime = 1min      # time between autovacuum runs  #autovacuum_vacuum_threshold = 50   # min number of row updates before                      # vacuum  #autovacuum_analyze_threshold = 50  # min number of row updates before                      # analyze  #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum  #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze  #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum                      # (change requires restart)  #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for                      # autovacuum, in milliseconds;                      # -1 means use vacuum_cost_delay  #autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for                      # autovacuum, -1 means use                      # vacuum_cost_limit      #------------------------------------------------------------------------------  # CLIENT CONNECTION DEFAULTS  #------------------------------------------------------------------------------    # - Statement Behavior -    #search_path = '"$user",public'     # schema names  #default_tablespace = ''        # a tablespace name, '' uses the default  #temp_tablespaces = ''          # a list of tablespace names, '' uses                      # only default tablespace  #check_function_bodies = on  #default_transaction_isolation = 'read committed'  #default_transaction_read_only = off  #default_transaction_deferrable = off  #session_replication_role = 'origin'  #statement_timeout = 0          # in milliseconds, 0 is disabled  #lock_timeout = 0           # in milliseconds, 0 is disabled  #vacuum_freeze_min_age = 50000000  #vacuum_freeze_table_age = 150000000  #bytea_output = 'hex'           # hex, escape  #xmlbinary = 'base64'  #xmloption = 'content'    # - Locale and Formatting -    datestyle = 'iso, ymd'  #intervalstyle = 'postgres'  timezone = 'Asia/Seoul'  #timezone_abbreviations = 'Default'     # Select the set of available time zone                      # abbreviations.  Currently, there are                      #   Default                      #   Australia                      #   India                      # You can create your own file in                      # share/timezonesets/.  #extra_float_digits = 0         # min -15, max 3  #client_encoding = sql_ascii        # actually, defaults to database                      # encoding    # These settings are initialized by initdb, but they can be changed.  
lc_messages = 'Korean_Korea.949'            # locale for system error message                      # strings  lc_monetary = 'Korean_Korea.949'            # locale for monetary formatting  lc_numeric = 'Korean_Korea.949'         # locale for number formatting  lc_time = 'Korean_Korea.949'                # locale for time formatting    # default configuration for text search  default_text_search_config = 'pg_catalog.simple'    # - Other Defaults -    #dynamic_library_path = '$libdir'  #local_preload_libraries = ''      #------------------------------------------------------------------------------  # LOCK MANAGEMENT  #------------------------------------------------------------------------------    #deadlock_timeout = 1s  #max_locks_per_transaction = 64     # min 10                      # (change requires restart)  # Note:  Each lock table slot uses ~270 bytes of shared memory, and there are  # max_locks_per_transaction * (max_connections + max_prepared_transactions)  # lock table slots.  #max_pred_locks_per_transaction = 64    # min 10                      # (change requires restart)      #------------------------------------------------------------------------------  # VERSION/PLATFORM COMPATIBILITY  #------------------------------------------------------------------------------    # - Previous PostgreSQL Versions -    #array_nulls = on  #backslash_quote = safe_encoding    # on, off, or safe_encoding  #default_with_oids = off  #escape_string_warning = on  #lo_compat_privileges = off  #quote_all_identifiers = off  #sql_inheritance = on  #standard_conforming_strings = on  #synchronize_seqscans = on    # - Other Platforms and Clients -    #transform_null_equals = off      #------------------------------------------------------------------------------  # ERROR HANDLING  #------------------------------------------------------------------------------    #exit_on_error = off            # terminate session on any error?  #restart_after_crash = on       # reinitialize after backend crash?      #------------------------------------------------------------------------------  # CONFIG FILE INCLUDES  #------------------------------------------------------------------------------    # These options allow settings to be loaded from files other than the  # default postgresql.conf.    #include_dir = 'conf.d'         # include files ending in '.conf' from                      # directory 'conf.d'  #include_if_exists = 'exists.conf'  # include file only if it exists  #include = 'special.conf'       # include file      #------------------------------------------------------------------------------  # CUSTOMIZED OPTIONS  #------------------------------------------------------------------------------    # Add settings for extensions here  

pg_test_fsync result:

C:\temp>"C:\Program Files\PostgreSQL\9.3\bin\pg_test_fsync"
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                   81199.920 ops/sec      12 usecs/op
        fdatasync                                     n/a
        fsync                              45.337 ops/sec   22057 usecs/op
        fsync_writethrough                 46.470 ops/sec   21519 usecs/op
        open_sync                                     n/a

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
        open_datasync                   41093.981 ops/sec      24 usecs/op
        fdatasync                                     n/a
        fsync                              38.569 ops/sec   25927 usecs/op
        fsync_writethrough                 36.970 ops/sec   27049 usecs/op
        open_sync                                     n/a

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
         1 * 16kB open_sync write                     n/a
         2 *  8kB open_sync writes                    n/a
         4 *  4kB open_sync writes                    n/a
         8 *  2kB open_sync writes                    n/a
        16 *  1kB open_sync writes                    n/a

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
        write, fsync, close                45.564 ops/sec   21947 usecs/op
        write, close, fsync                33.373 ops/sec   29964 usecs/op

Non-Sync'ed 8kB writes:
        write                             889.800 ops/sec    1124 usecs/op

Doesn't this fsync result mean that maximum write tps cannot exceed 45 tps?

Another question: shouldn't a 7200 rpm disk be able to perform about 120 tps? Why is fsync so slow?

mysql innodb space x did not exist in memory

Posted: 09 Oct 2013 11:25 AM PDT

In my InnoDB log I get the errors below. How do I fix this, and what does it mean? Some tables are corrupted, but not all.

InnoDB: space id 1753 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute,
InnoDB: space id 1777 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_eav_attribute,
InnoDB: space id 1626 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_int,
InnoDB: space id 1609 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_text,
InnoDB: space id 1610 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_varchar,
InnoDB: space id 1611 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_eav_attribute,
InnoDB: space id 1746 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_eav_attribute_website,
InnoDB: space id 1747 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_form_attribute,
InnoDB: space id 1754 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_label,
InnoDB: space id 1779 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_option,
InnoDB: space id 1780 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_option_value,
InnoDB: space id 1781 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_entity_attribute,
InnoDB: space id 1784 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_form_element,
InnoDB: space id 1792 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/googleshopping_attributes,
InnoDB: space id 1804 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/salesrule_product_attribute,
InnoDB: space id 1516 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/googlebase_attributes,
InnoDB: space id 1798 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_set,

Troubleshooting Mirroring Configuration

Posted: 09 Oct 2013 08:08 AM PDT

The TCP server network address cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational.

An exception occurred while executing a Transact-SQL statement.

I am configuring database mirroring on SQL Server 2012; after configuration I get the above error when starting mirroring.
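A couple of checks that often narrow this down (a sketch to run on both partners; the endpoint name in the ALTER is a placeholder):

-- Is there a mirroring endpoint, is it STARTED, and which port is it listening on?
SELECT e.name, e.state_desc, te.port
FROM sys.database_mirroring_endpoints AS e
JOIN sys.tcp_endpoints AS te ON te.endpoint_id = e.endpoint_id;

-- If it exists but is stopped:
-- ALTER ENDPOINT Mirroring_Endpoint STATE = STARTED;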

MySQL shutdown unexpectedly

Posted: 09 Oct 2013 08:25 PM PDT

2013-09-09 10:21:44 5776 [Note] Plugin 'FEDERATED' is disabled.
2013-09-09 10:21:44 1624 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
2013-09-09 10:21:44 5776 [Note] InnoDB: The InnoDB memory heap is disabled
2013-09-09 10:21:44 5776 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2013-09-09 10:21:44 5776 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-09-09 10:21:44 5776 [Note] InnoDB: Not using CPU crc32 instructions
2013-09-09 10:21:44 5776 [Note] InnoDB: Initializing buffer pool, size = 16.0M
2013-09-09 10:21:44 5776 [Note] InnoDB: Completed initialization of buffer pool
2013-09-09 10:21:44 5776 [Note] InnoDB: Highest supported file format is Barracuda.
2013-09-09 10:21:44 5776 [Note] InnoDB: The log sequence numbers 0 and 0 in ibdata files do not match the log sequence number 19862295 in the ib_logfiles!
2013-09-09 10:21:44 5776 [Note] InnoDB: Database was not shutdown normally!
2013-09-09 10:21:44 5776 [Note] InnoDB: Starting crash recovery.
2013-09-09 10:21:44 5776 [Note] InnoDB: Reading tablespace information from the .ibd files...
2013-09-09 10:21:44 5776 [ERROR] InnoDB: Attempted to open a previously opened tablespace. Previous tablespace casualme/scraped_user uses space ID: 2 at filepath: .\casualme\scraped_user.ibd. Cannot open tablespace mysql/innodb_index_stats which uses space ID: 2 at filepath: .\mysql\innodb_index_stats.ibd
InnoDB: Error: could not open single-table tablespace file .\mysql\innodb_index_stats.ibd
InnoDB: We do not continue the crash recovery, because the table may become
InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it.
InnoDB: To fix the problem and start mysqld:
InnoDB: 1) If there is a permission problem in the file and mysqld cannot
InnoDB: open the file, you should modify the permissions.
InnoDB: 2) If the table is not needed, or you can restore it from a backup,
InnoDB: then you can remove the .ibd file, and InnoDB will do a normal
InnoDB: crash recovery and ignore that table.
InnoDB: 3) If the file system or the disk is broken, and you cannot remove
InnoDB: the .ibd file, you can set innodb_force_recovery > 0 in my.cnf
InnoDB: and force InnoDB to continue crash recovery here.

Hello guys, I need your help: my phpMyAdmin is not working. How can I make it work again? I have a very large amount of data; please help me fix this. I just want to get my database back.

I have the following files: db.opt, scraped_user.frm, scraped_user.ibd. Is there a way to recover my database from these files? I just want to get the table data.

I want to reinstall my XAMPP, but is there a way to back up my DB when I can't run it, e.g. by copying the 'DBNAME' folder with the files db.opt, scraped_user.frm, scraped_user.ibd?

Select on several {name,value} pairs

Posted: 09 Oct 2013 12:25 PM PDT

I have this schema:

CREATE TABLE Pairs(
   id    VARCHAR(8)    NOT NULL
  ,name  VARCHAR(40)   NOT NULL
  ,value VARCHAR(1000) NOT NULL
  ,PRIMARY KEY (id, name));

I want to write a query to retrieve the id which is present in several rows with several constraints on name and value like:

  • name like 'forname%' AND value like 'CAMI%'
  • name like 'projet%' AND value like '%gociation'
  • name ... AND value ...
  • ...

The cardinality of the constraints is unknown at compile time; it comes from a GUI.

I wrote this query:

select * from (
      SELECT * FROM Pairs WHERE ( name   =  'projet'   ) AND ( value like '%gociation' )
union SELECT * FROM Pairs WHERE ( name like 'forname%' ) AND ( value like 'CAMI%' )
) as u order by id

the result is:

id      name      value
AEMIMA  projet    renegociation
AENABE  projet    renegociation
AEREDH  projet    renegociation
AGOGLY  projet    renegociation
AHOGAL  projet    renegociation
AHSPID  projet    renegociation       <<<<<<<<<<<<
AHSPID  fornameE  CAMILLE             <<<<<<<<<<<<
AIOSAP  projet    renegociation
AIPNEU  projet    renegociation

Only the <<<<<<<<<<<< marked lines are good and the id I want is AHSPID.

How can I eliminate the lines which don't match ALL the constraints?
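One shape that handles an arbitrary number of constraints (a sketch; the GUI would emit one HAVING term per constraint) is to group by id and demand that every constraint is matched at least once:

SELECT id
FROM Pairs
GROUP BY id
HAVING SUM(CASE WHEN name LIKE 'projet%'  AND value LIKE '%gociation' THEN 1 ELSE 0 END) > 0
   AND SUM(CASE WHEN name LIKE 'forname%' AND value LIKE 'CAMI%'      THEN 1 ELSE 0 END) > 0
ORDER BY id;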

Restoring of subscriber database failed

Posted: 09 Oct 2013 08:25 AM PDT

We have replicated a database from live to test in SQL Server 2005. Now we need to restore the subscriber database (which is on test) to the same server, without the replication settings in the restored database. How can we achieve this?
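A sketch of one way to do it (all names and paths are placeholders): restore the backup under a test name without KEEP_REPLICATION, then strip whatever replication metadata came along in the backup with sp_removedbreplication.

RESTORE DATABASE MyDb_Test
FROM DISK = N'C:\backups\MyDb.bak'
WITH MOVE 'MyDb'     TO N'C:\data\MyDb_Test.mdf',
     MOVE 'MyDb_log' TO N'C:\data\MyDb_Test_log.ldf',
     RECOVERY;

EXEC sp_removedbreplication N'MyDb_Test';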

SqlPackage does not pick up variables from profile

Posted: 09 Oct 2013 06:25 PM PDT

I want to upgrade a database using a .dacpac and SqlPackage.exe.

Here is how I run SqlPackage:

SqlPackage.exe
    /Action:Publish
    /SourceFile:"my.dacpac"
    /Profile:"myprofile.publish.xml"

The error I get is:

* The following SqlCmd variables are not defined in the target scripts: foo.

I have verified that the myprofile.publish.xml file does contain that variable:

<ItemGroup>
  <SqlCmdVariable Include="foo">
    <Value>bc\local</Value>
  </SqlCmdVariable>
</ItemGroup>

I also verified that the project that creates the dacpac publishes successfully from within Visual Studio using myprofile.publish.xml.

What else could I be missing?

(I'm using SQL Server 2012)
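As a cross-check (not necessarily the fix for the profile itself), SqlPackage also accepts SqlCmd variable values directly on the command line via /Variables: (short form /v:), which at least confirms whether the rest of the publish works when the variable is supplied explicitly:

SqlPackage.exe
    /Action:Publish
    /SourceFile:"my.dacpac"
    /Profile:"myprofile.publish.xml"
    /v:foo="bc\local"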

Speeding up mysqldump / reload

Posted: 09 Oct 2013 10:25 AM PDT

I am converting a large schema to file-per-table and will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and set innodb_flush_log_at_trx_commit=2 to speed up the load. I am planning to run SET GLOBAL innodb_max_dirty_pages_pct=0; at some point before the dump. Which combination of settings will get me the fastest dump and reload times?

SCHEMA stats:

26 MyISAM tables, 413 InnoDB tables, ~240 GB of data

[--opt (which already implies --disable-keys, --extended-insert, --quick, etc.)] plus --no-autocommit?

vs. prepending session variables to the reload, like: SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;

Are the mysqldump options and the session variables equivalent, or not really?

Thanks for your advice!
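For reference, a sketch of the two variants being compared (--opt is enabled by default in mysqldump and already implies --quick, --extended-insert, and --disable-keys; --no-autocommit wraps each table's inserts in SET autocommit=0 ... COMMIT inside the dump file; file name and connection details are placeholders):

# Dump with the --opt defaults plus per-table transactions written into the dump
mysqldump --all-databases --no-autocommit > full.sql

# Reload, prepending the session variables by hand instead of relying on the dump
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;";
  cat full.sql;
  echo "COMMIT;" ) | mysql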

Repeated values in group_concat

Posted: 09 Oct 2013 04:25 PM PDT

I have two tables; the first is food and the second is Activity:

-- food table definition inferred from the INSERT below
CREATE TABLE food
    (`id` int, `foodName` varchar(14))
;
INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6')
;
CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME)
;
INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03')
;

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ',  ') USING utf8) AS Categories
FROM
    food AS p
  JOIN
    ( SELECT activity_type
           , prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type
             , prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC ;

Could you please help me? I need output in the format below:

place  | food_id | Timings                    | Date
       |         | Time1   | Time2   | Time3  |
-------+---------+---------+---------+--------+---------------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm | 2013-05-01 (MAX timestamp)
place2 | 1       | 10:30am | 12:30am | 4:30am | 2013-05-01 (MAX timestamp)
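A sketch of one way to get close to that shape (assuming the timings of each place/food_id group should be listed side by side and the Date column is the latest date_and_time in the group; splitting the list into fixed Time1/Time2/Time3 columns would need extra work or application-side formatting):

SELECT
    place,
    food_id,
    GROUP_CONCAT(timing ORDER BY date_and_time, id SEPARATOR ' | ') AS timings,
    MAX(date_and_time) AS max_date
FROM Activity
GROUP BY place, food_id
ORDER BY place, food_id;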

MySQL PDO Cannot assign requested address

Posted: 09 Oct 2013 02:25 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]
    Cannot assign requested address (trying to connect via tcp://****:3306) in
    /var/www/***
[08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception
    'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested
    address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow, but it does not solve my problem.
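For what it's worth, a [2002] "Cannot assign requested address" under heavy connection churn usually means the client host has run out of ephemeral ports (many sockets stuck in TIME_WAIT) rather than a MySQL-side failure. One common mitigation is to reuse connections instead of opening a new TCP connection per request; a sketch with placeholder host and credentials:

<?php
// Sketch: persistent PDO connection, reused across requests instead of
// opening a fresh TCP socket every time (host/credentials are placeholders).
$pdo = new PDO(
    'mysql:host=db.example.com;port=3306;dbname=app;charset=utf8',
    'app_user',
    'secret',
    array(PDO::ATTR_PERSISTENT => true)
);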

How I prevent deadlock occurrence in my application?

Posted: 09 Oct 2013 07:25 AM PDT

I am developing an LMS application in a PHP framework (CodeIgniter 2.1.0) with a MySQL database. All tables use the InnoDB engine, and I have created indexes on each table. I am now load testing locally with JMeter 2.9 at 200 concurrent users. During the load test, a specific page action raises a "Deadlock found" error. I changed my original query to a new one, but the same error still occurs.

I have written a save_interactions function which takes four parameters (an interaction array, module_id, course_id, user_id) and is called many times by the AJAX script. The following code inserts a record if the specific interaction_id is not present in the table; otherwise it runs an update.

public function save_interactions($interaction_array, $modid, $cid, $uid)
{
    foreach ($interaction_array as $key => $interact_value)
    {
        $select_query = $this->db->query("SELECT COUNT(*) AS total FROM `scorm_interactions` WHERE `mod_id`='".$modid."' AND `course_id`='".$cid."' AND `user_id`='".$uid."' AND `interaction_id`='".$interact_value[0]."'");
        $fetchRow = $select_query->row_array();

        if ($fetchRow['total'] == 1)
        {
            $update_data = array(
                "interaction_type" => $interact_value[1],
                "time" => $interact_value[2],
                "weighting" => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result" => $interact_value[6],
                "latency" => $interact_value[7],
                "objectives" => $interact_value[8],
                "description" => $interact_value[9]
            );
            $this->db->where('mod_id', $modid);
            $this->db->where('course_id', $cid);
            $this->db->where('user_id', $uid);
            $this->db->where('interaction_id', $interact_value[0]);
            $this->db->update('scorm_interactions', $update_data);
        }
        else
        {
            $insert_data = array(
                "user_id" => $uid,
                "course_id" => $cid,
                "mod_id" => $modid,
                "interaction_id" => $interact_value[0],
                "interaction_type" => $interact_value[1],
                "time" => $interact_value[2],
                "weighting" => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result" => $interact_value[6],
                "latency" => $interact_value[7],
                "objectives" => $interact_value[8],
                "description" => $interact_value[9]
            );
            $this->db->insert('scorm_interactions', $insert_data);
        }
    }
}

I got this type of error:

Deadlock found when trying to get lock; try restarting transaction

UPDATE `scorm_interactions` SET
    `interaction_type` = 'choice',
    `time` = '10:45:31',
    `weighting` = '1',
    `correct_response` = 'Knees*',
    `learner_response` = 'Knees*',
    `result` = 'correct',
    `latency` = '0000:00:02.11',
    `objectives` = 'Question2_1',
    `description` = ''
WHERE
    `mod_id` = '4' AND
    `course_id` = '5' AND
    `user_id` = '185' AND
    `interaction_id` = 'Question2_1'
;

Filename: application/models/user/scorm1_2_model.php Line Number: 234

Can anyone suggest how I can avoid the deadlock?
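One approach that often helps with this read-then-write pattern (a sketch, assuming a UNIQUE key on (user_id, course_id, mod_id, interaction_id) exists or can be added): collapse the SELECT / INSERT / UPDATE sequence into a single atomic upsert, so concurrent AJAX requests no longer interleave their locks between the existence check and the write:

-- Assumed unique key (add it if it does not already exist):
-- ALTER TABLE scorm_interactions
--     ADD UNIQUE KEY uq_interaction (user_id, course_id, mod_id, interaction_id);

INSERT INTO scorm_interactions
    (user_id, course_id, mod_id, interaction_id,
     interaction_type, `time`, weighting, correct_response,
     learner_response, result, latency, objectives, description)
VALUES
    (185, 5, 4, 'Question2_1',
     'choice', '10:45:31', '1', 'Knees*',
     'Knees*', 'correct', '0000:00:02.11', 'Question2_1', '')
ON DUPLICATE KEY UPDATE
    interaction_type = VALUES(interaction_type),
    `time`           = VALUES(`time`),
    weighting        = VALUES(weighting),
    correct_response = VALUES(correct_response),
    learner_response = VALUES(learner_response),
    result           = VALUES(result),
    latency          = VALUES(latency),
    objectives       = VALUES(objectives),
    description      = VALUES(description);

In CodeIgniter this could be issued through $this->db->query() with bound parameters, which would also remove the string concatenation in the current code.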

How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB

Posted: 09 Oct 2013 09:25 AM PDT

I am evaluating MySQL Cluster as a possible replacement for an InnoDB schema. So far, I have tested it with 10s of MB of data, and found MySQL Cluster slower than InnoDB; however, I have been told MySQL Cluster scales much better.

How much data does it take to show a performance benefit to MySQL Cluster vs. an InnoDB schema? Or, is there a better way to demonstrate MySQL Cluster's merits?

EDIT

Perhaps an important note: my cluster is currently a heterogeneous cluster of 4 machines. On each machine I have given DataMemory and IndexMemory equal amounts: 4 GB, 2 GB, 2 GB, and 1 GB respectively. The machines run i7 CPUs and are connected over a Gigabit LAN. NoOfReplicas is set to 2.

EDIT

This application is a low-usage analytics database with roughly 3 tables of >= 200M rows and 5 tables of <= 10K rows. When we use it, our aggregate functions take about 15 seconds to run. My boss asked me to research MySQL Cluster to see if we could increase performance, since we thought the aggregate functions could run well in parallel.

Named Pipe Provider Error code 40

Posted: 09 Oct 2013 09:25 PM PDT

I have tried everything, from enabling named pipes to adding firewall exceptions for the ports, to everything possible in the Surface Area Configuration. I can connect to the SQL instance (using both TCP and named pipes) with SQL Server Management Studio, but sqlcmd throws an error:

Login timeout expired  

Help!
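Since SSMS connects over both protocols, one thing to try is forcing sqlcmd onto an explicit protocol and server name so you can see which path actually times out (a sketch; server, port, and pipe names are placeholders):

:: Force TCP, with an explicit port
sqlcmd -S tcp:MYSERVER,1433 -E

:: Force named pipes (default instance pipe shown)
sqlcmd -S np:\\MYSERVER\pipe\sql\query -E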
