Wednesday, May 15, 2013

[how to] SQL Server 2008: The login is from an untrusted domain - domain setup with computer-level login

SQL Server 2008: The login is from an untrusted domain - domain setup with computer-level login

Posted: 15 May 2013 08:55 PM PDT

I have a SQL Server 2008 R2 database I'm attempting to connect to from ADO.NET using integrated security in the connection string.

The database resides on a 64 bit Windows Server 2003 machine. The client is a 64 bit Windows 7 Enterprise machine.

The specific error I receive is: SQL Server 2008: The login is from an untrusted domain and cannot be used with Windows authentication

The computer is added as a Windows login on the SQL Server as DOMAIN.edu\ComputerName$ and has been granted full permissions on the target database.

Both the server and client are part of the same active directory domain. The computers can communicate with each other over remote desktop. The client machine can see network shares on the server machine, so I don't think network connectivity is a problem.

I didn't have a problem with this setup when the SQL Server 2008 R2 database resided on a 64-bit Windows XP Professional machine and the client was the same machine.

What else can I verify or modify to make my current setup work for machine level authentication?
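
For reference, a hedged sanity check on the server side (the account name is taken from the question; TargetDatabase is a placeholder):

-- Confirm the machine account exists as a login on the instance:
SELECT name, type_desc
FROM   sys.server_principals
WHERE  name = N'DOMAIN.edu\ComputerName$';

-- If it is missing, create it and map it into the target database:
CREATE LOGIN [DOMAIN.edu\ComputerName$] FROM WINDOWS;
USE TargetDatabase;
CREATE USER [DOMAIN.edu\ComputerName$] FOR LOGIN [DOMAIN.edu\ComputerName$];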

How to output query to file using SQLite3

Posted: 15 May 2013 06:57 PM PDT

Is there an equivalent to Oracle's spool for sqlite3?
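
The closest equivalent is the sqlite3 shell's .output (or .once) dot-command; a minimal example, with file and table names as placeholders:

-- In the sqlite3 interactive shell (these are dot-commands, not SQL):
.mode csv
.output results.csv
SELECT * FROM mytable;
.output stdout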

When is it worth it to use Triggers and stored procedures/functions?

Posted: 15 May 2013 05:50 PM PDT

As an example, let's say I have a forum application, and for every topic that is created, a trigger fires a database function that creates a post (assume we have a topics table and a posts table).
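
For concreteness, a minimal sketch of such a trigger, assuming MySQL and hypothetical topics/posts column names:

DELIMITER //
CREATE TRIGGER topics_after_insert
AFTER INSERT ON topics
FOR EACH ROW
BEGIN
    -- Create the opening post for the new topic; column names are assumptions.
    INSERT INTO posts (topic_id, author_id, body, created_at)
    VALUES (NEW.id, NEW.author_id, NEW.body, NOW());
END//
DELIMITER ;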

I know these things can be quite circumstantial, but generally, do simple tasks like this belong in triggers/functions, or should they just be done in the application code?

As for updating views and replies, I was thinking of offloading them to a queue, since the system's functionality doesn't really depend on them.

Table partitioning filegroup

Posted: 15 May 2013 07:58 PM PDT

Can I use the same filegroup for multiple tables when partitioning? I partitioned an items table by year, which created five filegroups. Can I use the same filegroups for an orders table, or will I need to create separate ones?
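
For illustration, a hedged sketch of a partition scheme that maps every partition to one shared filegroup, which any number of tables can then use (all names are placeholders):

-- One partition per year:
CREATE PARTITION FUNCTION pfYearly (date)
AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01', '2012-01-01', '2013-01-01');

-- Map all partitions to a single existing filegroup:
CREATE PARTITION SCHEME psYearly
AS PARTITION pfYearly ALL TO ([FG_Partitioned]);

-- Both items and orders could then be created ON psYearly(<date column>),
-- sharing the same filegroup.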

mysqlnd: same mysql server and php version, can't connect from Windows

Posted: 15 May 2013 03:39 PM PDT

(This question is also in StackOverflow http://stackoverflow.com/questions/16575994/mysqlnd-same-mysql-server-and-php-version-cant-connect-from-windows)

I'm migrating from the mysql functions to mysqli. As I'm using PHP 5.4+, mysqlnd is used by default (it is bundled and enabled with PHP).

I have two environments, dev and production. Both use the same database (MySQL 5.5.24-log). old_passwords is set to OFF.

  • Production environment is shared hosting: Linux, PHP 5.4.10. It connects to the database fine.

  • Development environment is Win7, PHP 5.4.15, Apache 2.2. It does not connect. The error is:

The server requested authentication method unknown to the client [mysql_old_password]

I have no admin privileges for the database.

I've read a lot, and all the workarounds point to updating the user's password so that the 41-character hash format is used, but old_passwords was already OFF. I tried changing the user's password from the hosting panel (in case the original was created with old_passwords set to ON), but nothing changed.
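
For reference, a hedged way to check the stored hash format and re-hash it, assuming you can reach the mysql.user table at all (often not possible on shared hosting; the user name is a placeholder):

-- A 16-character hash means the account still uses the pre-4.1 format:
SELECT user, host, LENGTH(Password) AS hash_len
FROM   mysql.user
WHERE  user = 'your_user';

-- Re-hash in the 41-character format:
SET SESSION old_passwords = 0;
SET PASSWORD FOR 'your_user'@'%' = PASSWORD('your_password');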

Some others suggest downgrading PHP, but I'm trying not to. It's dumb :-/

The weird thing is that it works on Linux but not on Windows. The fact that old_passwords was already set to OFF makes me think the problem may be related to some misconfiguration of mysqlnd.

Any clues?

Optimizing "Removing duplicates" thread state

Posted: 15 May 2013 03:00 PM PDT

I'm trying to hunt down MySQL variables/settings and metrics relevant to the "Removing duplicates" thread state.

Here's a snippet from show full processlist.

           Id: 123456
         User: user
         Host: 123.123.132.123:12345
           db: database
      Command: Query
         Time: 12601
        State: Removing duplicates
         Info: INSERT INTO table_a ... select distinct from table_b ... group by field_a, field_b, field_c ...
    Rows_sent: 0
Rows_examined: 0
    Rows_read: 0

Everything but Time has been redacted in the output above. You can see that it's been running for quite some time (12,601 seconds). Usually the query runs within minutes; every so often, however, the application comes across a set of IDs (from the SELECT) that have a larger result set (>100,000 rows).

The MySQL documentation says ...

  • Removing duplicates

The query was using SELECT DISTINCT in such a way that MySQL could not optimize away the distinct operation at an early stage. Because of this, MySQL requires an extra stage to remove all duplicated rows before sending the result to the client.

My question is: what is this "extra stage", and what variables and/or metrics would be applicable for improving performance?
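
The "extra stage" is typically an internal temporary table (plus a possible filesort) used to deduplicate rows, so the temporary-table knobs are the first things to look at. A hedged starting point:

-- Is the dedup stage spilling to on-disk temporary tables?
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- Experiment for the session running the INSERT ... SELECT DISTINCT:
SET SESSION tmp_table_size      = 256 * 1024 * 1024;
SET SESSION max_heap_table_size = 256 * 1024 * 1024;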

What's the most effective way to back-up / save data on Oracle 11g large data sets?

Posted: 15 May 2013 06:05 PM PDT

We have a database (actually several databases, split by primary access path, modulo on a key) with millions of records (about 400 million). Right now I need to save a specific table, userData (which holds data such as username, address, city, state, zipcode, and country), remotely (on a different server). I'm wondering which would be the best solution given the following requirements:

  • The data transfer between the host server (database server) and the remote server must be as small as possible.
  • The operation should not be CPU / resource intensive.
  • The operation must be fast.

I was thinking about doing some compression on the fly (e.g. using 7z), and about what kind of query to use to keep the operation "silent", so it does not disturb the network/CPU too much. The 11g database runs on Solaris. On the remote server I'm planning to use Red Hat, though that can change, of course.
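
If a database link to the remote server is an option, a plain-SQL sketch could look like the following; the link name, credentials, and target table are all assumptions, and for one-off bulk copies Data Pump (expdp/impdp, which has a COMPRESSION option) is the more usual route:

-- Run on the source (Solaris) database:
CREATE DATABASE LINK remote_srv
  CONNECT TO backup_user IDENTIFIED BY "secret"
  USING 'REMOTE_TNS_ALIAS';

INSERT INTO userData_copy@remote_srv
SELECT username, address, city, state, zipcode, country
FROM   userData;
COMMIT;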

File Maker Script Relative Cell Value

Posted: 15 May 2013 03:32 PM PDT

I'm new to FileMaker as well as Database administration in general.

In my table, every record contains a link to an Adobe Illustrator file and two Photoshop files. The layout specifies a button for each of those files for each record. The button launches a script that should, in theory, open the Illustrator/Photoshop file it corresponds to.

Here's the script I'm using for Illustrator:

Send Event ["aevt"; "odoc"; test::Illustrator]

where test is the name of the table and Illustrator is the name of the column containing the Illustrator file. (That column is of type Container.)

So here's the problem I'm running into. Most of the time, the button doesn't work. It gives me an error that says, "The file "blah blah.ai" could not be found and is required to complete this operation."

When it does work, it is immediately after reassigning the file in the Illustrator cell. In that case, the specific record that I most recently assigned works and all the others are broken. This leads me to wonder if the script is resolving Illustrator relative to the current record. Do I need to specify that explicitly? Does anybody know what I'm missing?

EDIT: It seems that the problem actually arises when all the referenced files are not in the same folder.

Can I use the database and query it while full-text indexing?

Posted: 15 May 2013 01:02 PM PDT

I need to index my database, but I also need to query it while it is indexing, since indexing may take a long time. Is that possible?
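
The question doesn't name the DBMS, but in SQL Server, for example, full-text population runs in the background and the base table stays fully queryable; you can monitor the population while it runs. A sketch, with the catalog name as a placeholder:

-- 0 = idle, 1 = full population in progress (see the PopulateStatus docs):
SELECT FULLTEXTCATALOGPROPERTY('MyCatalog', 'PopulateStatus') AS populate_status,
       FULLTEXTCATALOGPROPERTY('MyCatalog', 'ItemCount')      AS item_count;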

Configuring PostgreSQL for read performance

Posted: 15 May 2013 11:32 AM PDT

Our system writes a lot of data (it's a kind of Big Data system). Write performance is good enough for our needs, but read performance is really too slow.

The primary key (constraint) structure is similar for all our tables: timestamp(Timestamp) ; index(smallint) ; key(integer)

A table can have millions of rows, even billions, and a read request is usually for a specific period (timestamp/index) and tag. It's common for a query to return around 50,000 rows. Currently we can read about 1,000 rows per second, but we need to be 10 times faster. Is this possible, and if so, how?
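
One hedged direction, assuming a table shaped like the key described above (all names are placeholders): give the planner an index that matches the (index, timestamp) access pattern, and physically order the table to match it:

-- Composite index matching the usual predicate (tag + period):
CREATE INDEX samples_tag_ts_idx ON samples ("index", "timestamp");

-- Physically reorder the table by that index (takes an exclusive lock):
CLUSTER samples USING samples_tag_ts_idx;

-- Also worth reviewing: shared_buffers and effective_cache_size in postgresql.conf.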

Thanks a lot for help!!

How much RAM for a large database on a dedicated MySQL InnoDB server?

Posted: 15 May 2013 12:12 PM PDT

Here is my hardware config:

CPU:   1x Intel Xeon X3470 quad core @ 2.93 GHz
RAM:   4x 8 GB DDR3
Disks: 2x SATA 320 GB for the system & 4x SAS 300 GB for the DB

I've upgraded from 12 GB to 32 GB of RAM and moved the Java application server to another machine in order to have a dedicated MySQL server.

Here is my MySQL config:

MySQL 5.5.28 with the InnoDB plugin on Red Hat 6. Database size is 130 GB.

max_allowed_packet = 50M
max_connections = 100
back_log = 2048
skip-external-locking
tmp_table_size = 300M
max_heap_table_size = 300M
sort_buffer_size = 40M
read_buffer_size = 10M
binlog_cache_size = 1M
key_buffer_size = 10M
table_open_cache = 4096
table_definition_cache = 500
open_files_limit = 65535
thread_cache_size = 100
query_cache_size = 500M
innodb_additional_mem_pool_size = 50M
innodb_buffer_pool_size = 28000M
innodb_buffer_pool_instances = 5
innodb_commit_concurrency = 0
innodb_concurrency_tickets = 0
innodb_doublewrite
innodb_fast_shutdown = 0
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_lock_wait_timeout = 50
innodb_log_buffer_size = 128M
innodb_log_file_size = 256M
innodb_thread_concurrency = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
optimizer_switch = 'index_merge=off'
innodb_stats_on_metadata = 0
performance_schema = 0

Since the RAM upgrade, I've raised innodb_buffer_pool_size from 8000M to 28000M. Since this change, the response times for my SQL requests have been a disaster (7x slower).

What do you think? Are there other options to update or add?
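
A few hedged things to check: query_cache_size = 500M is a well-known mutex-contention point on busy servers, and it's worth confirming the 28 GB buffer pool isn't pushing the box into swap. For example:

-- Query cache activity and buffer pool disk reads:
SHOW GLOBAL STATUS LIKE 'Qcache%';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Experiment: disable the query cache at runtime and compare response times.
SET GLOBAL query_cache_size = 0;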

Many thanks in advance,

Bob

Running the following query within a scheduled job

Posted: 15 May 2013 12:05 PM PDT

I am looking at running the following query within a scheduled job weekly:

SELECT TOP 10
       s.database_name,
       m.physical_device_name,
       CAST(DATEDIFF(second, s.backup_start_date, s.backup_finish_date) AS VARCHAR(100)) + ' Seconds' AS TimeTaken,
       s.backup_start_date,
       CASE s.[type]
           WHEN 'D' THEN 'Full'
           WHEN 'I' THEN 'Differential'
           WHEN 'L' THEN 'Transaction Log'
       END AS BackupType,
       s.server_name,
       s.recovery_model
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
ORDER BY backup_start_date DESC, backup_finish_date
GO

I'm not sure what I am doing wrong, but I keep getting an error. I have tried running it from within the @query parameter of the sendmail option.
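
One hedged guess: the trailing GO is a client-tool batch separator and is not valid inside sp_send_dbmail's @query parameter; the profile and recipient below are placeholders:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name           = 'DBA Mail Profile',
    @recipients             = 'dba@example.com',
    @subject                = 'Last 10 backups',
    @execute_query_database = 'msdb',
    @query = N'
SELECT TOP 10
       s.database_name,
       m.physical_device_name,
       CAST(DATEDIFF(second, s.backup_start_date, s.backup_finish_date) AS VARCHAR(100)) + '' Seconds'' AS TimeTaken,
       s.backup_start_date,
       CASE s.[type]
           WHEN ''D'' THEN ''Full''
           WHEN ''I'' THEN ''Differential''
           WHEN ''L'' THEN ''Transaction Log''
       END AS BackupType,
       s.server_name,
       s.recovery_model
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
ORDER BY s.backup_start_date DESC;';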

Thanks

Why does copied MySQL database have different data than source?

Posted: 15 May 2013 06:19 PM PDT

I am trying to migrate database A to database B. I have tried various methods for this, including...

  1. Using Navicat, do Tools > Data Transfer, copying structure and data from A to B
  2. Using Navicat, delete all tables from B, then drag and drop all tables (structure and data) from A to B
  3. Using Sequel Pro, export A to SQL file, then import SQL file into B

Option 1 always fails, and I'm not sure why. The error message just says "failed".

Options 2 and 3 seem to work fine, but then some tables have slightly different data, which makes absolutely no sense to me. For example, here is a row of data from database A...

node    book    0   438 2255    und 0   foo NULL    full_html  

...and here is the same row in database B...

node    book    0   438 2255    und 0   bar NULL    full_html  

The "foo" and "bar" are HTML content for web page #438 at revision #2255. I have no idea how the HTML content for the same exact revision can be different, when database B should be a straight copy of A.

Mysteriously, "foo" represents the most recent version of page #438, and "bar" actually reflects the prior version of page #438. How is this even possible?

Also, if any of you DBAs happen to be familiar with Drupal, note that this is for a Drupal site. Incidentally, when I try to restore a backup on this site (a ZIP or SQL file made with the Backup and Migrate module), I get an unhelpful "site encountered an unexpected error" message and nothing else in the error log. The same ZIP and SQL files restore just fine in other test environments, so something about this one site just seems wonky.

Missing values in INNER JOIN MS-Access?

Posted: 15 May 2013 11:24 AM PDT

When I run the following query in MS-Access:

SELECT a1.ConStateNumber, a2.ConStateNumber
FROM qryW2_yr_sum_slash_qryW2_q4_sum_result a1
INNER JOIN qryW2_q3_sum a2
        ON a1.ConStateNumber = a2.ConStateNumber

I end up with values in the a2.ConStateNumber column, but only empty values in the a1.ConStateNumber column. This query returns 372 records.


Now, I'm pretty sure the keys are matching up and the join is merely failing to display the a1 value, because when I run the following query I get more results than I do with the previous query...

SELECT a1.ConStateNumber, a2.ConStateNumber
FROM qryW2_yr_sum_slash_qryW2_q4_sum_result a1
RIGHT JOIN qryW2_q3_sum a2
        ON a1.ConStateNumber = a2.ConStateNumber

...resulting in 402 records, but still no a1.ConStateNumber values.

And further still, when I modify the query to display only the rows that have NULL values, the originals that appeared in the inner join do not appear:

SELECT a1.ConStateNumber, a2.ConStateNumber
FROM qryW2_yr_sum_slash_qryW2_q4_sum_result a1
RIGHT JOIN qryW2_q3_sum a2
        ON a1.ConStateNumber = a2.ConStateNumber
WHERE a1.ConStateNumber IS NULL

...and this returns 30 rows.

But I still do not understand why the values are not being displayed. There are a bunch of other queries/VIEWs that this query pulls from, so could that in some way affect the query's ability to display the a1.ConStateNumber values?

It doesn't matter that much provided the next query will be able to use the values to key off of, but I just wondered what was going on.
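
One hedged diagnostic, since Access joins can silently mismatch on data type or stray whitespace: check whether the a1 key column carries trailing spaces that keep otherwise-equal values from matching:

SELECT a1.ConStateNumber,
       Len(a1.ConStateNumber) AS KeyLength
FROM   qryW2_yr_sum_slash_qryW2_q4_sum_result AS a1
WHERE  a1.ConStateNumber <> Trim(a1.ConStateNumber);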

How to rollback the identity seed after deadlock

Posted: 15 May 2013 11:43 AM PDT

Here's an approximate sequence of the operations I'm performing:

SET IDENTITY_INSERT <table-name> ON;
-- INSERT something into <table-name> with an explicitly specified id
DECLARE @oldID bigint
SELECT @oldID = <some simple logic here>
DBCC CHECKIDENT ('<table-name>', RESEED, @oldID) WITH NO_INFOMSGS;
SET IDENTITY_INSERT <table-name> OFF;

So, as you can see, everything is simple: basically, I insert a record with an explicit ID, then I want to return the identity seed to its previous value (because the insert will obviously shift it).

I need this kind of logic (strange at first glance) because I can have IDs in a large range of values (billions of rows) where the first digit has a special meaning.

For instance, I have current identity seed 1000007777. Then I want to insert a record with ID 2000007777 (this will shift identity to 2000007778). But I want to turn it back to 1000007777, because I believe that I will never reach the point when an identity collision will occur (I just don't operate such volumes of data). As I said, the first digit serves special purposes.

Now, the problem is that if I roll back the transaction after the insert, the identity will not be rolled back (as I understand it, that is by design in SQL Server, and I believe in many other DBMSs).

I can handle this manually if I am the one who consciously performs the rollback. But what if there is a deadlock (or some error) right before the reseed?

How can I handle that (except handling the exception and performing unconditional reseed)? Any ideas?

Streaming replication in postgresql 9.1 -- sender and receiver processes aren't running

Posted: 15 May 2013 02:25 PM PDT

I've followed this guide, and cross-referenced the primary postgresql wiki in order to set up replication. I've configured postgresql.conf and pg_hba.conf as directed, and then restarted the master and slave postgresql servers. However, when I run the commands ps -ef | grep sender or ps -ef | grep receiver to test whether the sender and receiver processes are running on their respective servers, neither of them are running. Any ideas on what I might be missing?
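
Some hedged checks, run in psql on each side (the setting names shown are the 9.1 ones):

-- On the master: is anything streaming?
SELECT * FROM pg_stat_replication;

-- On the master: settings replication depends on (set in postgresql.conf):
SHOW wal_level;        -- should be 'hot_standby' on 9.1
SHOW max_wal_senders;  -- must be greater than 0

-- On the slave, recovery.conf must exist with standby_mode = 'on'
-- and a primary_conninfo string pointing at the master.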

Thanks a lot for any help you might be able to offer!

CREATE DATABASE permission denied in database 'master'

Posted: 15 May 2013 07:40 PM PDT

I've developed a desktop application using WPF and Entity Framework code-first.

In my application the user can create multiple databases at run-time.

But Entity Framework can't create database if the user isn't admin on his machine.

I need to find a solution that could be done with coding. Management Studio is not installed on the user's machine.

What do you suggest?
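
If the blocker is SQL Server permissions rather than Windows rights, a one-time grant by an administrator would let the application's login create databases. A sketch with a placeholder login name:

-- Run once by an administrator:
EXEC sp_addsrvrolemember @loginame = 'app_login', @rolename = 'dbcreator';
-- or, granting only the specific permission (app_login must map to a user in master):
USE master;
GRANT CREATE DATABASE TO app_login;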

Oracle RAC 11g R2 DB silent install: parameter needed for system class

Posted: 15 May 2013 07:52 PM PDT

I'm installing Oracle RAC DB 11gR2 using a response file and silent mode. The installation fails with the error message:

[INS-35421] This options installs a single instance database only
You have chosen to perform a Desktop class install on a cluster. This options will not install oracle RAC

Which parameter is needed for a server-class install? I tried changing the value of oracle.install.db.config.starterdb.control from DB_CONTROL to GRID_CONTROL, but I still get the same warning.

How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2?

Posted: 15 May 2013 12:30 PM PDT

I am trying to setup DBMS_SCHEDULER in Oracle 11g to run a remote database job.

I have a remote Oracle 11g R2 database on unix and a local one on Windows.

I read that you can install the Oracle Scheduler agent from the 11g client install for machines that don't have Oracle installed, but that this is not needed for running remote jobs if Oracle is present on both machines. With the remote agent installation, you run schagent with parameters to register the agent with the remote machine, but I can't find any instructions on the web about registering remote agents when both machines have Oracle installed, or about what to use as the agent name in that case.

I have added an entry to tnsnames.ora for the remote DB and can tnsping, etc.

If I run the dbms_scheduler.create_database_destination procedure, it requires an agent name, but where can I find this for the database, and how can I check that the agent is running on Unix or Windows?
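
As a hedged pointer: in 11gR2, agents that have registered with your database via schagent should be visible in the dba_scheduler_external_dests dictionary view, and a name from there is what create_database_destination expects. A sketch with placeholder names:

-- Agents registered with this database:
SELECT * FROM dba_scheduler_external_dests;

-- 'REMOTE_AGENT' must match a registered agent from the query above;
-- 'REMOTE_DB' is the tnsnames.ora alias already configured:
BEGIN
  DBMS_SCHEDULER.CREATE_DATABASE_DESTINATION(
    destination_name => 'REMOTE_DB_DEST',
    agent            => 'REMOTE_AGENT',
    tns_name         => 'REMOTE_DB');
END;
/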

Getting "BadImageFormatException" when attempting to connect through Visual Studio to Oracle

Posted: 15 May 2013 11:28 AM PDT

I get the following message: "Attempt to load Oracle client libraries threw BadImageFormatException. This problem will occur when running in 64 bit mode with the 32 bit Oracle client components installed." when I attempt to create a connection from both Visual Studio 2008 and 2012 to Oracle from my Windows 8 64bit workstation.

I'm not running a web app when this message happens; I'm only trying to establish an Oracle connection from Visual Studio. The solutions posted on the internet that have resolved this issue for others don't seem to apply to my circumstances.

  • I did not have this issue when I was using Windows 7 64-bit.
  • I have set the Active Solution Platform to both Any CPU and x86 when I have a solution open, but I still get the same message with either setting.
  • I have the latest Oracle 32-bit client, since the 64-bit Oracle client won't work with 32-bit Visual Studio.
  • At this point, I am only trying to connect to Oracle from Visual Studio, not run a web application, so IIS settings have nothing to do with this issue.

My environment is

  • Windows 8 64-bit
  • Visual Studio 2012 and 2008
  • Oracle 32-bit client (I also tried the 64-bit client).
  • Oracle 11.2.0.3 database.
  • Active Solution Platform was tried with both Any CPU and x86.

Any suggestions would be appreciated.

SHOW TABLE STATUS very slow on InnoDB

Posted: 15 May 2013 12:07 PM PDT

Recently we've been migrating from MyISAM to InnoDB, and I understand that MyISAM stores meta information for each table to track things like row counts. However, the database is about 1.8 GB, with 1+ million records across 400 or so tables.

The problem comes when software like phpMyAdmin runs SHOW TABLE STATUS FROM dbase;, which can take up to 50 seconds to execute. Is there a way to optimise this? (MySQL or MariaDB)
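
A common culprit is InnoDB re-sampling index statistics every time table metadata is requested. A hedged first step:

-- Stop statistics re-sampling on metadata queries such as SHOW TABLE STATUS:
SET GLOBAL innodb_stats_on_metadata = 0;

-- Persist it in my.cnf:
-- [mysqld]
-- innodb_stats_on_metadata = 0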

Thanks!

Find all the SP's where a particular column is not updated

Posted: 15 May 2013 06:29 PM PDT

I want to find stored procedures where a particular column is not updated.

As an example

SP1:

BEGIN
    UPDATE YourTable
    SET    Foo = @Foo,
           Bar = @Bar
    WHERE  Id = @Id
END

SP2:

BEGIN
    UPDATE YourTable
    SET    Foo = @Foo
    WHERE  Id = @Id
END

So I want to get all the SPs where Bar is not updated; in the example above, it should return SP2.

Note: one SP can have multiple UPDATE statements. Is there any possibility of getting the table name for each UPDATE statement as well?
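
A hedged starting point is a crude text scan of the module definitions; it can miss dynamic SQL and produce false positives, but it narrows the candidate list (YourTable and Bar are the example names from above):

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS procedure_name
FROM   sys.sql_modules AS m
JOIN   sys.procedures  AS p ON p.object_id = m.object_id
WHERE  m.definition LIKE '%UPDATE%'
  AND  m.definition LIKE '%YourTable%'
  AND  m.definition NOT LIKE '%Bar%';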

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 15 May 2013 01:07 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecast is usually done at a higher level in the hierarchy (the Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. The user specifies a percentage distribution for end products. Say there's a forecast of 1000 for Work in Progress, and the user says: I want 40% for End Product 1 and 60% for End Product 2 in bucket 10. Then for the 10th week (Sunday to Saturday) from now, the forecast value for End Product 1 would be 400 and for End Product 2 would be 600.
  2. The user says: just disaggregate according to the orders placed against end products in bucket 5. If the orders in bucket 5 for End Products 1 and 2 are 200 and 800 respectively, then the forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of the forecast for Work in Progress.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |  name              |  parent_id
-------------------------------------
1   |  raw material      |  (null)
2   |  work in progress  |  1
3   |  end product 1     |  2
4   |  end product 2     |  2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key referencing the id of the PRODUCT_HIERARCHY table.

How should the forecast be stored? What would be a good basic schema for such a requirement?
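
A minimal sketch of a forecast table matching the requested output format; every name here is an assumption:

CREATE TABLE demand_forecast (
    id               NUMBER     PRIMARY KEY,       -- populate from a sequence
    prod_id          NUMBER     NOT NULL REFERENCES product_hierarchy (id),
    bucket_no        NUMBER(2)  NOT NULL,          -- 1..26
    week_start_date  DATE       NOT NULL,          -- Sunday
    week_end_date    DATE       NOT NULL,          -- Saturday
    forecast_value   NUMBER     NOT NULL,
    created_on       DATE       DEFAULT SYSDATE NOT NULL
);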


My idea to select orders for 26 weekly buckets is:

SELECT COUNT(*) AS total_orders,
       WIDTH_BUCKET(
           delivery_date,
           SYSDATE,
           ADD_MONTHS(SYSDATE, 6),
           FLOOR((ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7)  -- number of weekly buckets
       ) AS bucket_no
FROM   orders_table
WHERE  delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP  BY WIDTH_BUCKET(
           delivery_date,
           SYSDATE,
           ADD_MONTHS(SYSDATE, 6),
           FLOOR((ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7));

But this will give weekly buckets starting from today, irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle?
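
One hedged approach: TRUNC(d, 'IW') snaps a date to the Monday of its ISO week, so shifting by one day yields Sunday-to-Saturday buckets:

SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start,  -- Sunday
       TRUNC(delivery_date + 1, 'IW') + 5 AS week_end,    -- Saturday
       COUNT(*)                           AS total_orders
FROM   orders_table
WHERE  delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP  BY TRUNC(delivery_date + 1, 'IW');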

Please help designing this database structure.

(will be using Oracle 11g)

SQL Server BPA 2008R2

Posted: 15 May 2013 11:01 AM PDT

I have been using SQL Server BPA to get good information from Microsoft. I was using the 2005 version, which let me export results in CSV format, but recently I got two new servers with SQL Server 2008 R2 installed. Since I know I can't run BPA 2005 against these, I chose the R2 version, but it doesn't have an option to save the report in CSV format, only XML. I have tried Excel to convert it to CSV; even though it displays the details, I can't narrow down the results. Any idea how to convert the results to CSV format?

Setting up DRBD on an active MySQL server

Posted: 15 May 2013 07:07 PM PDT

When it comes to setting up DRBD and MySQL, is the following possible?

  • Set up DRBD on an active MySQL server
  • Set up DRBD with no downtime allowed

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 15 May 2013 02:07 PM PDT

I have a transactions table with 600,000 records, and I need to list counts for the dashboard on a financial-year basis. The table is MyISAM. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, which, together with the filesort, takes more time. Is there any way to optimize the query to improve its execution time?

SELECT COUNT(*) AS cnt,
       CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS financial_year
FROM `transactions1`
WHERE tran_date >= '2010-06-01'
GROUP BY financial_year

-- Showing rows 0 - 4 (5 total, Query took 1.2095 sec)
id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort
Keyname       Type   Unique  Packed  Field          Cardinality  Collation
PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                     tran_ID        617162       A
coupon_No     BTREE  No      No      coupon_No      617162       A
account_type  BTREE  No      No      account_type   3            A
prodCode      BTREE  No      No      prodCode       430          A
                                     tran_date      308581       A
tran_date     BTREE  No      No      tran_date      205720       A
cust_ID       BTREE  No      No      cust_ID        3265         A
                                     tran_date      308581       A
                                     account_type   308581       A
                                     points_earned  617162       A

Update:

I tried adding partitioning, which did not help much compared with the non-partitioned table. Would replication help for reading this table? There will be more grouping based on dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is:

SELECT SUM(count)
FROM (
    SELECT COUNT(*) AS count,
           CASE WHEN MONTH(tran_date) >= 3
                THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
                ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
           END AS format_date
    FROM transactions1
    GROUP BY tran_date
) AS s
GROUP BY format_date

-- Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

SELECT COUNT(*) AS count,
       CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS format_date
FROM transactions1
GROUP BY tran_date

-- Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

it takes much less time without the SUM(count) over the derived table. Is there any other way to get the sum without using a subquery in MySQL, or can the subquery be optimized to use the index?

How can an identity primary key index become fragmented?

Posted: 15 May 2013 12:14 PM PDT

From what I understand about index fragmentation, this should not be possible. The cases I have found in my databases are non-clustered.

Example:

ALTER TABLE [dbo].[ClaimLineInstitutional] ADD CONSTRAINT [PK_ClaimLineInsitutional]
    PRIMARY KEY NONCLUSTERED
    (
        [ClaimLineInstitutionalID] ASC
    )
    WITH (PAD_INDEX = ON, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
          IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON,
          FILLFACTOR = 100) ON [PRIMARY]

Update:

I am querying dm_db_index_physical_stats.avg_fragmentation_in_percent, so I believe it is physical fragmentation I am seeing.
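
For reference, a hedged example of the kind of check being described, with the object name taken from the example above:

SELECT i.name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM   sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.ClaimLineInstitutional'), NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
  ON   i.object_id = ips.object_id AND i.index_id = ips.index_id;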
