Monday, July 8, 2013

[how to] Mysql datatype for username of maximum 15 characters



Mysql datatype for username of maximum 15 characters

Posted: 08 Jul 2013 09:20 PM PDT

I currently have a database with the "user" table having username as one of the columns.

I allow a maximum of 15 characters for the username in my system... Is it fine to have the username column as a varchar(15) datatype?
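
For reference, a minimal sketch of such a column (assuming MySQL; table and index names are hypothetical). varchar(15) stores up to 15 characters regardless of character set, so it matches the rule, with the limit also enforced in the application:

CREATE TABLE app_user (
    user_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    username VARCHAR(15) NOT NULL,      -- at most 15 characters, per the system rule
    PRIMARY KEY (user_id),
    UNIQUE KEY uq_username (username)   -- usernames are usually unique
) ENGINE=InnoDB DEFAULT CHARSET=utf8;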

How does Berkeley DB manage its files?

Posted: 08 Jul 2013 08:02 PM PDT

I'm using Berkeley DB (BDB) as a persistent store for a JMS queue. When I consume an entry from the queue the underlying BDB files do not immediately shrink, but do eventually. I'm running into issues with the BDB files taking up lots of space on the file system while retrieval performance degrades.

My entry size varies considerably but it is not uncommon to have 400,000 messages of around 32kb each in the persistent queue.

I'd like to understand how BDB manages the files so that I can throttle the number of entries for file size/retrieval performance. Or so I can rule out BDB as my persistent store mechanism.

I am probably searching for the wrong terms but have not found what I'm looking for in the Oracle documentation or The Berkeley DB Book. I would not be surprised if BDB doesn't want me messing with its internals but I would be surprised if (at least) an overview of how it does handle its internals is not available.

Please forgive the misleading tag -- I didn't find ones for Berkeley, Sleepycat or BDB and I don't have the reputation to submit a new tag.

Lock hints on preconditions

Posted: 08 Jul 2013 07:59 PM PDT

Assuming for a moment that FK references are not available, what lock hints should be placed on the preconditions of the following example to prevent deadlocks and inconsistent data?

(This is a contrived example, of course, to illustrate a scenario in which a delete/update can only be performed dependent upon the state of some other table)

-- Schema
CREATE TABLE EntityTypes (
    EntityTypeID int NOT NULL,
    Name nvarchar(50) NOT NULL,
    CONSTRAINT PK_EntityTypes PRIMARY KEY CLUSTERED (EntityTypeID)
);

CREATE TABLE Entities (
    EntityID int IDENTITY(1,1) NOT NULL,
    EntityTypeID int NOT NULL,
    EntityName nvarchar(50) NOT NULL,
    CONSTRAINT PK_Entities PRIMARY KEY CLUSTERED (EntityID)
);

-- Test data
INSERT INTO EntityTypes (EntityTypeID, Name) VALUES (1, 'Test 1');

-- Updater 1
SET XACT_ABORT ON;
BEGIN TRAN;

IF EXISTS (SELECT 1 FROM Entities WHERE EntityTypeID = 1)
BEGIN
    RAISERROR('Cannot delete entity type which is referenced', 16, 1);
    RETURN;
END

DELETE FROM EntityTypes WHERE EntityTypeID = 1;

COMMIT TRAN;

-- Updater 2
INSERT INTO Entities (EntityTypeID, EntityName) VALUES (1, 'Test Entity')

Environment: SQL Server 2012
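
One commonly suggested approach (a sketch, not a verified fix for this exact schema) is to take UPDLOCK and HOLDLOCK on whatever each transaction reads as a precondition, so the existence check keeps its range locked until commit; an index on Entities.EntityTypeID would keep the range lock narrow rather than escalating:

-- Updater 1: range-lock the child rows being checked, so a concurrent
-- insert of a matching Entities row blocks until this transaction ends
IF EXISTS (SELECT 1 FROM Entities WITH (UPDLOCK, HOLDLOCK) WHERE EntityTypeID = 1)
BEGIN
    RAISERROR('Cannot delete entity type which is referenced', 16, 1);
    RETURN;
END

-- Updater 2: lock the parent row before inserting the child
BEGIN TRAN;
IF EXISTS (SELECT 1 FROM EntityTypes WITH (UPDLOCK, HOLDLOCK) WHERE EntityTypeID = 1)
    INSERT INTO Entities (EntityTypeID, EntityName) VALUES (1, 'Test Entity');
COMMIT TRAN;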

Sql Server 2012 Extended Events for Selective Xml Indexing not showing results

Posted: 08 Jul 2013 02:09 PM PDT

I created an Extended Events session to monitor and tune a Selective XML Index:

CREATE EVENT SESSION [SelectiveXmlIndexing] ON SERVER
ADD EVENT sqlserver.selective_xml_index_no_compatible_sql_type,
ADD EVENT sqlserver.selective_xml_index_no_compatible_xsd_types,
ADD EVENT sqlserver.selective_xml_index_path_not_indexed,
ADD EVENT sqlserver.selective_xml_index_path_not_supported
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB,
      MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO

In SSMS when I view live data and then execute the xml queries and views, no results are being returned.

All I see is an empty live data view; the screenshot showed no captured events.

Is there another step I need to accomplish before seeing the results?

I followed:

http://www.mssqltips.com/sqlservertip/2731/managing-sql-server-extended-events-in-management-studio/

http://www.sqlskills.com/blogs/bobb/choosing-paths-for-selective-xml-index-part-2-using-the-sxi-xevents/

http://www.sqlskills.com/blogs/jonathan/sql-server-2012-extended-events-update-3-viewing-target-data/
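
One thing worth checking (an observation based on the DDL above, not a confirmed diagnosis): the session is created with STARTUP_STATE=OFF and the script never starts it, so unless it was started elsewhere it captures nothing. Starting it is a one-liner:

ALTER EVENT SESSION [SelectiveXmlIndexing] ON SERVER STATE = START;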

Can SQL Server 2008R2 SSIS packages access SQL Server 2012 databases with driver updates only?

Posted: 08 Jul 2013 02:21 PM PDT

We have two products running on SQL Server that we will be upgrading to SQL Server 2012 on a staggered schedule. Product B uses SSIS packages (using ODBC drivers) to extract data from Product A's database.

The scenario would be to move Product A to SQL Server 2012 first, with product B to follow in the coming months. Product B would continue to run in a 2008R2 environment while Product A would be moved to a new SQL Server 2012 environment.

The hope is that there would not be a need for package changes and that installing the new ODBC Driver (Microsoft ODBC Driver 11) on the 2008R2 SSIS environment would allow the packages to execute against Product A.

EDIT - The package would remain in the SQL Server 2008 format and would use the SQL Server 2012 database as its source.

Does anyone know if this is in fact viable or are there known issues that would require package changes?

Thank you for any help you can provide.

Matt

Indexes and Optimizations for Logging System

Posted: 08 Jul 2013 06:42 PM PDT

I'm designing tables for a new log system and am having trouble figuring out what indexes I'll need (I'm a developer, not a DBA), and am also open to any suggestions/criticisms/etc. for making the tables as "fast" as possible (from my app's point of view).

Background info: the DB itself is a MySQL instance hosted on AWS. It's a shared server and I don't have tuning privileges beyond basic table setup, indexes, key constraints, etc. This is a logging system, so very write-intensive. Although I will write scripts to dump certain log records (errors, etc.), there won't be much reading going on.

The logging system will consist of two tables, log_entry which is the log record itself, and log_level which is a "static" lookup/reference table used by log_entry. Below, where I specify Type as just varchar, I haven't figured out what size to make the field (varchar(50), varchar(100), etc.). My proposed schema for the two tables is as follows:

log_levels:

Column  |  Type         |  Special Info                 |  Nullable?  |  Example
=================================================================================
id      |  int          |  Primary Key, Auto-Increment  |  No         |  3
name    |  varchar(20)  |                               |  No         |  "Info"
tag     |  varchar(20)  |                               |  No         |  "INFO"

log_entries:

Column           |  Type     |  Special Info                 |  Nullable?  |  Example
=======================================================================================
id               |  int      |  Primary Key, Auto-Increment  |  No         |  19348
app_name         |  varchar  |                               |  No         |  "My App"
timestamp        |  long     |                               |  No         |  1373316599 (UNIX Epoch time)
log_level_id     |  int      |  Foreign Key (log_levels)     |  No         |  3
origin_class     |  varchar  |                               |  No         |  "com.me.myapp.SomeClass"
message          |  text     |                               |  No         |  "Something log-worthy happened."
throwable_class  |  varchar  |                               |  Yes        |  "java.lang.RuntimeException"

The problem child here is log_entries.message. This field will store a lot of information, and for reasons outside the scope of this question, I can't break this information out into better-normalized fields/tables. It's just a big text blob that will store lots of different kinds of data, and again, there's nothing I can do about that.

When I do perform reads (for the purposes of analytics/debugging), these are the queries I plan on using the most (and hence perhaps a basis for creating indexes from):

-- Give me all the ERROR log entries.
SELECT * FROM log_entries le
INNER JOIN log_levels ll ON le.log_level_id = ll.id
WHERE ll.tag = 'ERROR'

-- All ERROR log entries for a particular user.
SELECT * FROM log_entries le
INNER JOIN log_levels ll ON le.log_level_id = ll.id
WHERE le.message LIKE '%user: john.doe@example.com%'
  AND ll.tag = 'ERROR'

-- All log entries for a particular user.
SELECT * FROM log_entries WHERE message LIKE '%user: john.doe@example.com%'

-- All unique throwables in the system.
SELECT DISTINCT(throwable_class) FROM log_entries

-- All messages in a given time range, or after a certain datetime.
SELECT * FROM log_entries WHERE timestamp >= 1373315311 AND timestamp <= 1439065711

-- All log entries for a particular user within a time range.
SELECT * FROM log_entries
WHERE message LIKE '%user: john.doe@example.com%'
  AND timestamp >= 1373315311 AND timestamp <= 1439065711

So I ask:

  • What indexes should I be creating here, and is there any sort of management/housekeeping I should be performing on the indexes (re-indexing, vacuuming, etc.)?
  • Any other optimizations I can be making, in general (again, nothing I can do about log_entries.message)?
    • For instance, I've heard that with some DB servers (not necessarily MySQL), the order of the columns when you create the table can impact performance as well?!

Thanks in advance!
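
As a starting point, a hedged sketch of indexes matching the queries above (MySQL; index names are illustrative). The equality and range filters can use B-tree indexes, but the LIKE '%...%' predicates on message have a leading wildcard and cannot, so those queries will scan regardless of indexing:

CREATE INDEX idx_entries_level_ts  ON log_entries (log_level_id, `timestamp`);  -- level filter, optionally with a time range
CREATE INDEX idx_entries_ts        ON log_entries (`timestamp`);                -- pure time-range queries
CREATE INDEX idx_entries_throwable ON log_entries (throwable_class);            -- SELECT DISTINCT throwable_class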

Can't send PHP query to MySQL [migrated]

Posted: 08 Jul 2013 04:42 PM PDT

When I try to send a query from PHP 5.4.6 (TS,VC9) to MySQL 5.5 (NTS,VC9) I get the message: Fatal error: Call to undefined function mysql_connect(). After restarting Apache 2.2 (on Windows XP) it says:

PHP Startup: mysql: Unable to initialize module
Module compiled with build ID=API20100525,NTS,VC9
PHP compiled with build ID=API20100525,TS,VC9
These options need to match

How can I solve this issue?

SUSPECT in mdf files

Posted: 08 Jul 2013 01:42 PM PDT

When I run:

ALTER DATABASE EMS102013 SET SINGLE_USER;

DBCC CHECKDB (EMS102013) WITH NO_INFOMSGS, ALL_ERRORMSGS

it shows this message:

Msg 8921, Level 16, State 1, Line 1 Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.

How can I fix this problem?
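
Msg 8921 often points at tempdb running out of space during the CHECKDB rather than corruption in the user database. A hedged first step is to ask CHECKDB how much tempdb space it expects to need, then compare that with the free space on tempdb's drive:

DBCC CHECKDB (EMS102013) WITH ESTIMATEONLY;  -- prints estimated tempdb space required; runs no checks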

MySQL - Randomly long DESCRIBE queries

Posted: 08 Jul 2013 01:07 PM PDT

I have several queries like this (running on MySQL 5.5):

DESCRIBE `TABLE`;  

And it randomly takes 5-10 seconds on my production servers. We have an ORM that caches this data once per day (per web server) and I'm wondering why I see these slow queries popping up from time to time. I'm thinking that I could just change the web servers to cache until their processes are restarted but it'd be nice to know if there's an easy fix or something I could look into that may be causing this.

RTF to Plain-Text? [on hold]

Posted: 08 Jul 2013 12:34 PM PDT

Did you ever get the library from this post? We have a need to convert whole, large tables' worth of RTF information into something usable and would rather not reinvent the wheel if possible.

Converting RTF in a text column to plain text in bulk

Partitioning vs Indexes

Posted: 08 Jul 2013 04:04 PM PDT

I'm very new to databases, so I apologize if my question is very basic...

Anyway, I'm creating a table with what seems like a lot of data (500 million rows right now, and potentially twice as much in the future). Now, I need a way to access the data in this table quickly, so I'm researching partitions and indexes. However, I'm getting confused about when I'd want to create a partition vs. an index. I have three columns that seem like reasonable candidates for partitioning or indexing:

  • Time (day or week, data spans a 4 month period)
  • Customer "Token": The last two digits of a user's ID
  • Client ID (typically will have 5-20 clients in the dataset)

When running future selects against this table, it's likely that I'll be filtering on client id as well as wanting to do some sampling (which I would like to do by the "Token" variable). I may occasionally be filtering by the time variable as well.

So, my question is this: how should I organize my table? Should I partition by Client and Token, and then create an index on time? Or just partition on client and create indices on time and token? And, more importantly, what is the logic behind the strategy you recommend?

Also, after I've created the table, will the indexes break if I add more data into it (particularly new clients for the same date/token range)? Is recreating an index relatively simple?

Thanks so much for your help, and please let me know if you need any more information from me.
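
As a hedged illustration of the index-first approach (generic SQL; table and column names are hypothetical): a single composite index covering the common filter columns, leading with the most frequently used equality filter, often postpones the need for partitioning at this scale, and it can be dropped and recreated after bulk loads if it fragments.

CREATE INDEX idx_client_token_time
    ON big_table (client_id, token, event_time);
-- serves: WHERE client_id = ?                                          (leading column)
--         WHERE client_id = ? AND token = ?                            (first two columns)
--         WHERE client_id = ? AND token = ? AND event_time BETWEEN ? AND ?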

TempDB Version Store cleanup

Posted: 08 Jul 2013 04:17 PM PDT

From what I can find, the version store will only clean up versions that are older than the oldest active transaction. Question: Is the oldest transaction database-specific, or will SQL Server keep all versions, regardless of database, if there is an older transaction still active anywhere?

Backstory - SQL Server 2005 SP4 Enterprise hosting around 40 databases. TempDB is currently 102 GB; the version store is around 98 GB. One of the applications hosted on the server maintains a connection until it is forced to exit (restart etc...), and the oldest transaction is 40 days old (last restart, based on sys.dm...database_transactions). Two separate large databases had extremely heavy use over the last month, and we saw consistent TempDB growth coinciding with these operations. We expected some growth. We did not expect it to keep growing. Question: Are the versions stored in TempDB's version store from these two separate databases still there because a third independent database has a connection that is 40 days old and shows an active transaction_state?

Perfmon counters: Version store is continually growing in the few hours I have tracked it this morning. Version Generation Rate AVG is around 30 kb/s, Version Cleanup rate is 0 kb/s.

Plenty of space left for TempDB, there are around 300 GB of total data files for all user databases, TempDB has grown on average 350 MB per day for each of its 8 data files since the last restart.

Answers to comment questions so as not to have a long running comment section:

Q: Why auto-growth on tempdb? A: It's a cluster that supports important stuff. Through trial and error we found 8 GB sufficient to meet our needs and allow tempdb to come online quickly in the event of failover. We monitor auto-growth; we saw some that we expected, then saw more, did a large increase, then saw more, etc. We believe the version store is not getting cleaned properly, so... here we are.

Q: How do you know the transaction is active and not just an active connection? A: transaction_state says active in sys.dm_tran_active_snapshot_database_transactions, among other things. It will periodically run a query according to Activity Monitor when I catch it right. It is of course vendor-written, and the super user doesn't know how it works. Which leads back to the main question: is this the culprit, or am I barking up the wrong tree?

Referenced question: find-transactions-that-are-filling-up-the-version-store
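
Since there is a single instance-wide version store in tempdb, cleanup is generally gated by the oldest active snapshot transaction on the instance regardless of which database it belongs to, which makes the 40-day-old transaction a plausible culprit. A quick way to see which session is holding things up (a sketch using a documented DMV):

SELECT session_id, transaction_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;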

How to migrate data between "all-surrogate" databases?

Posted: 08 Jul 2013 04:30 PM PDT

With some periodicity this needs to be done:

  • Inserting data from a production database into a test/development database, so that programmers have fresh data to test on.
  • Migrating data from a database into another, for whatever reason.

Note: databases are relational.

The problem is:

  • Databases were modeled with an "all-PKs-are-surrogate" policy.
  • Moving data from one database to another involves programming to avoid collision of PK values (generated by sequences).
  • A table pairing the PK values of the origin and destination databases' tables is needed in order to establish the equivalence between surrogate keys.
  • Said table has to be created in advance of migration, by matching business keys (which are not PKs, so don't govern FKs).
  • Data migration even of a single table is not trivial, as opposed to tables with natural keys.

What's the easiest way of copying rows from one database to another when all tables have surrogate keys?
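
A common pattern is exactly the key-map table described above, built from the business keys and then used to re-key child rows. A generic sketch (all table and column names hypothetical):

-- 1. Pair up source and destination surrogate keys via the business key
CREATE TABLE keymap_customer (
    src_id BIGINT NOT NULL PRIMARY KEY,
    dst_id BIGINT NOT NULL
);

INSERT INTO keymap_customer (src_id, dst_id)
SELECT s.customer_id, d.customer_id
FROM   src.customer s
JOIN   dst.customer d ON d.tax_number = s.tax_number;   -- the business key

-- 2. Copy child rows, translating the FK through the map
INSERT INTO dst.orders (customer_id, order_date, total)
SELECT k.dst_id, o.order_date, o.total
FROM   src.orders o
JOIN   keymap_customer k ON k.src_id = o.customer_id;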

How can I search values, considering occurences, on more than one term in SQL Server CE?

Posted: 08 Jul 2013 02:58 PM PDT

I have a rather complex SQL query that I actually received help from this SE site to come up with parts of. This query is designed to take a search term and search all relevant values in a database (this database stores all the relevant information for my partially CMS-ish site). It is a query that uses both JOINs and UNIONs. If I needed only to search one search term, it would be perfect, but alas, my boss knows that most people will expect a search bar to work more like what they might expect from Google (i.e., as it stands, my query will treat whatever they type as the whole search term, as though it were wrapped in quotes, instead of separating each word).

While I know that the C# I use server-side will be needed to split their typed search term into different values (separating on white space) and store them in an array, this isn't really a problem, nor do I feel like any knowledge of C# is needed to answer this question. The real problem is that (given the query, shown below) I can't seem to wrap my head around how to compute a couple of the columns (formed by SQL expressions involving simple calculations) for more than one parameter. This is because I need ALL of the search functionality to be in one single (even if a bit oversized) query. I need this all to be in one single query because I also need to use both OFFSET and FETCH for pagination, and I can't see how I can implement that with multiple queries.

Okay, enough talk, here's the query I have so far (works perfect with only a single search term, but again, I need this to be dynamic enough to search on virtually any number of parameters):

SELECT NULL AS ObjectID, page AS location, 'pageSettings' AS type, page AS value, 'pageName' AS contentType,
       (len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM PageSettings
WHERE page LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT NULL AS ObjectID, pageTitle AS location, 'ExternalSecondaryPages' AS type, pageTitle AS value, 'pageName' AS contentType,
       (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(pageTitle) - len(replace(UPPER(pageTitle), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ExternalSecondaryPages
WHERE pageTitle LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT NULL AS ObjectID, eventTitle AS location, 'MainStreetEvents' AS type, eventTitle AS value, 'pageName' AS contentType,
       (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(eventTitle) - len(replace(UPPER(eventTitle), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM MainStreetEvents
WHERE eventTitle LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN TextObjects ON ContentObjects.ObjectID = TextObjects.ObjectID
WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN ParagraphObjects ON ContentObjects.ObjectID = ParagraphObjects.ObjectID
WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN TitleObjects ON ContentObjects.ObjectID = TitleObjects.ObjectID
WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN SubtitleObjects ON ContentObjects.ObjectID = SubtitleObjects.ObjectID
WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, linkText AS value, 'content' AS contentType,
       (len(linkText) - len(replace(UPPER(linkText), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(linkText) - len(replace(UPPER(linkText), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN LinkObjects ON ContentObjects.ObjectID = LinkObjects.ObjectID
WHERE linkText LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN BulletLineObjects ON ContentObjects.ObjectID = BulletLineObjects.ObjectID
WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, caption AS value, 'content' AS contentType,
       (len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN ImageObjects ON ContentObjects.ObjectID = ImageObjects.ObjectID
WHERE caption LIKE '%' + CAST(@0 AS nvarchar) + '%'

UNION

SELECT ContentObjects.ObjectID, location, type, caption AS value, 'content' AS contentType,
       (len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN ImageLinkObjects ON ContentObjects.ObjectID = ImageLinkObjects.ObjectID
WHERE caption LIKE '%' + CAST(@0 AS nvarchar) + '%'

ORDER BY contentType DESC, location ASC, type DESC

I guess the question, in short, is: how can I (or is it even possible to) still get the occurences column (and the priority column, though if I get one, I get the other, since they are based on the same calculations) when I need to count all occurrences of all of the arguments in the search text (that is, the separate words in the typed search term)? I know I can use OR in the WHERE clause, but how can I use OR in an UPPER or REPLACE function?

ADDITIONAL INFORMATION, IF NEEDED OR CURIOUS ABOUT ANY OF THE ABOVE

There is probably a little bit of overhead using this query, but I really don't mind that a whole lot, honestly. I am way more worried about getting the functionality I need into a single query (which may not even be possible considering my needs, I guess).

Notice the (@0), which is just a placeholder used for parameterized queries (this is probably common knowledge, but I thought I would throw that out there just in case); the search term is later passed in by C# as an additional argument to the function that actually queries the database (again, probably obvious, but this is a common anti-SQL-injection tactic in my environment, if not many others).

Right now, the priority column simply always multiplies the occurences column by 2, but this is only until I work out exactly what each multiplier should be, considering the priority desired.

I am in a C#.net WebMatrix with web-pages environment. (Effectively this means I am using SQL Server CE).

I apologize if this question is a little confusing and am more than happy to clarify anything needed.

UPDATE

I am now more scared than ever that this may not be possible, as I have just discovered that conditional logic (i.e., IF ... ELSE in my case) isn't supported by SQL Server CE. (hangs head in disappointment...)

UPDATE AGAIN

It seems that, considering the limitations of SQL Server CE and my own requirements, this isn't entirely possible. SQL Server CE is still powerful, just not as powerful as its bigger, more complete brother. I believe, though, that for my own environment I may indeed have found a way to complete what's left of my task by using my server-side language to build up the SQL query string. As long as I end up with one query in the end, I can still use OFFSET and FETCH for pagination in the way I was taught. I just want to thank everyone who looked this over and tried to help. Now, come on, C#, we have some work to do...
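
For the record, the per-term arithmetic does compose without IF/ELSE: since each term gets its own parameter (@0, @1, ...), the C# can emit one occurrence expression per term and sum them, with OR in the WHERE clause. A sketch of one branch for two terms, using only functions the query above already relies on:

SELECT page AS location,
       (len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0)
     + (len(page) - len(replace(UPPER(page), UPPER(@1), ''))) / len(@1) AS occurences
FROM PageSettings
WHERE page LIKE '%' + CAST(@0 AS nvarchar) + '%'
   OR page LIKE '%' + CAST(@1 AS nvarchar) + '%'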

Storing complex userdata

Posted: 08 Jul 2013 03:19 PM PDT

Assume a tool where one can create a user account, then store 'fields' for that account. Each field can contain zero or more 'sub-fields' and each of those can contain zero or more strings.

I wonder how a professional developer would design a database structure for such a use case.

My approach would be:

  • Create a table for each user
  • Add two rows, fields and id
  • Store a big JSON object in the first row, which contains every subfield and its strings

But somehow I think that this won't be the best way to solve this, and I'm having a giant brain fart.

I would greatly appreciate any suggestions.
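
For comparison, a conventional normalized layout for this shape of data (one sketch among many; all names illustrative) uses one row per user/field/sub-field/string rather than a table per user:

CREATE TABLE users (
    user_id   INT PRIMARY KEY,
    user_name VARCHAR(50) NOT NULL
);

CREATE TABLE fields (
    field_id INT PRIMARY KEY,
    user_id  INT NOT NULL REFERENCES users (user_id),
    name     VARCHAR(50) NOT NULL
);

CREATE TABLE subfields (
    subfield_id INT PRIMARY KEY,
    field_id    INT NOT NULL REFERENCES fields (field_id),
    name        VARCHAR(50) NOT NULL
);

CREATE TABLE subfield_strings (
    subfield_id  INT NOT NULL REFERENCES subfields (subfield_id),
    string_value TEXT NOT NULL
);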

How to change the default delimiter in PostgreSQL 9.1.9 using SET?

Posted: 08 Jul 2013 08:05 PM PDT

From within a bash script I tried to use a replace and a detection on a field containing a pipe like:

IF position('|' in var) IS NOT NULL THEN ...

...REPLACE(field, '|', '#')

The data itself was loaded into the DB using the COPY statement e.g.

COPY records FROM '$datasource' WITH DELIMITER AS ',' CSV HEADER NULL as '';  

I am using psql (9.1.9) and the default field separator is pipe:

#> \f
#> Field separator is "|".

Ideally I would like to SET the default delimiter in a CREATE OR REPLACE function at runtime, like:

...
END; \$\$ LANGUAGE plpgsql SET config_param TO '#';

Or as a separate statement, just like SET config_param TO '#';

I searched the documentation and internet for the right name of the config parameter without luck. Any help appreciated.
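
One possible source of confusion (hedged, since the goal isn't entirely clear): the "|" shown by \f is psql's client-side output separator for unaligned mode, not a server configuration parameter, which would explain why no config_param by that name turns up in the documentation. It is changed in the client, for example:

\f '#'               -- inside psql: set the field separator to '#'

or from the shell when invoking psql:

psql -A -F '#' ...   -- -A selects unaligned output, -F sets its separator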

Inserting images in database [on hold]

Posted: 08 Jul 2013 02:28 PM PDT

How can images be inserted into a table? VARBINARY is not working at the SQL command prompt; what would the syntax be?
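
Assuming SQL Server (the question doesn't name the product), a minimal sketch: store the image in a VARBINARY(MAX) column and load it with OPENROWSET ... SINGLE_BLOB. Note the file path is read by the server, not the client.

CREATE TABLE Pics (
    id  INT IDENTITY PRIMARY KEY,
    img VARBINARY(MAX)
);

INSERT INTO Pics (img)
SELECT BulkColumn
FROM OPENROWSET(BULK N'C:\temp\photo.jpg', SINGLE_BLOB) AS b;  -- hypothetical path on the server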

Extract data to another computer

Posted: 08 Jul 2013 12:46 PM PDT

I have SQL Server 2008 R2 Express Edition, and I want to export the data in a database to another computer so I can import it again on that computer.

How please?

What I have tried

Right-click on the database -> Tasks -> Export -> set server name + password + set database source.

My problem

I found that I have to set the destination, but I couldn't set the destination to a file or anything like that; the only destination offered is the same database I am extracting the data from.

Help please, thanks in advance
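
The usual route is a backup on the source machine, copying the .bak file over, then a restore on the target, rather than the Export wizard. A sketch (paths and logical file names are hypothetical; sys.database_files on the source shows the real logical names):

BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak';

-- copy MyDb.bak to the other computer, then run there:
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE 'MyDb'     TO N'C:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO N'C:\Data\MyDb_log.ldf';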

How to do incremental/differential backup every hour in Postgres 9.1?

Posted: 08 Jul 2013 04:39 PM PDT

Trying to do an hourly hot incremental backup of a single postgres server to s3.

I have the following setup in postgresql.conf:

max_wal_senders=2
wal_level=archive
archive_mode=on
archive_command='copy "%p" "c:\\postgres\\archive\\%f"'

I did a base backup with pg_basebackup -U postgres -D ..\archive -Ft -l 2013-07-07 -x

Which made a big base.tar file in the archive folder and added some long file name files, which I assume are the WALs.

pg_start_backup('label') and pg_stop_backup() seem to create the WAL files in pg_xlog, and then copy them to the archive folder.

  1. what command(s) do I run to do a new incremental backup (pg_basebackup does a new base backup which I don't want right now)?

  2. What does the label in pg_basebackup and pg_start_backup() do exactly?

  3. WAL Files don't seem to be removed from pg_xlog. What should I do about that? It seems to keep 5 or so WALs in pg_xlog. Is that to be expected?

  4. Do I need to backup the .backup files in the archive folder or just the 16,384KB WAL files?

  5. should I use the --xlog parameter and if so do I need to change wal_keep_segments from 0?
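
On question 1: PostgreSQL 9.1 has no true incremental backup command; the base backup plus continuously archived WAL is the incremental mechanism. To guarantee the archive is at most an hour behind even on a quiet server, a bounded segment-switch interval can be set (a hedged sketch for postgresql.conf):

archive_timeout = 3600   # force a WAL segment switch at least hourly, so it gets archived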

Most efficient way to perform table operations (spec. drop) on tables having table names that match a certain criteria

Posted: 08 Jul 2013 07:55 PM PDT

I am performing a schema dump of certain tables. This is accompanied by dropping the tables that will be dumped. For example, I need to drop all tables that start with, let's say, "SRC_". Can you think of a more efficient way of dropping those tables than the script below? Thanks.

begin
  for rec in (select TABLE_NAME
                from ALL_TABLES
               where TABLE_NAME like 'SRC|_%' escape '|'
                 and OWNER = 'SOME_SCHEMA')
  loop
    execute immediate 'drop table ' || rec.TABLE_NAME;
  end loop;
end;
/
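
A small hedged variant of the same loop: appending PURGE skips the recycle bin (so the dropped tables don't linger as BIN$ objects taking space), at the cost of making the drop unrecoverable:

execute immediate 'drop table ' || rec.TABLE_NAME || ' purge';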

any documentation on tpcc-mysql benchmark tool

Posted: 08 Jul 2013 01:01 PM PDT

Where can I find the tpcc-mysql documentation? I googled it but couldn't find any documentation about it.

How make convincing case to DBA to use Oracle external tables for 500 MB to 5 GB files?

Posted: 08 Jul 2013 04:40 PM PDT

In our data feeds, we have a stack of XML files as well as numerous flat files to put into an Oracle 11g database. We flatten the XML into delimited files and load the whole set using SQL*Loader.

I want to try a proof of concept loading via external tables (ORGANIZATION EXTERNAL), but I need to make a convincing case to the DBAs that it won't do something evil to the server. The only plausible test files I have are 400-600 MB; in production we'd add some multi-gigabyte files.

What are the risks and how should I approach them, any ideas?

Thank you,

Andrew Wolfe
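
For the proof of concept, a hedged sketch of the DDL involved (directory, table and column names are hypothetical). The file is only read at query time, so the main server-side costs are I/O during the query, and the load itself generates no undo/redo:

CREATE TABLE src_feed_ext (
    id      NUMBER,
    payload VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY feed_dir        -- a DIRECTORY object the DBA creates and grants
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
    )
    LOCATION ('feed1.dat')
)
REJECT LIMIT UNLIMITED;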

MYSQL matching one column to another in a different table via a FULLTEXT index

Posted: 08 Jul 2013 07:57 PM PDT

I current have two different tables.

The first table has a list of titles and the IDs associated with those titles; the second table is a list of random headings.

What I would like to know is if there is a way to match up all the titles in table2 to the closest matching title in table1. Is this possible?

I've tried:

SELECT title_table.*, random_titles.*,
       MATCH (title_table.title) AGAINST (random_titles.title) AS relevance
FROM title_table
ORDER BY relevance DESC

But that did not work.

I know I could run this query as each title is put into table2 with PHP, but I already have a lot of titles in the database.

Any help would be amazing, thanks.

Foreign key with multiple references

Posted: 08 Jul 2013 05:08 PM PDT

I have the following three tables in a MySQL database named "THE_COLLEGE":

mysql> desc students;
+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| STU_Id   | char(5)     | NO   | PRI |         |       |
| STU_Name | varchar(20) | YES  |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> desc staff;
+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| STF_Id   | char(5)     | NO   | PRI |         |       |
| STF_Name | varchar(20) | YES  |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> desc users;
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| Id        | char(5)     | NO   |     |         |       |
| user_Name | varchar(20) | YES  |     | NULL    |       |
| Password  | varchar(20) | YES  |     | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
3 rows in set (0.02 sec)

If a new row is to be inserted into the table "users", that row must be found in the students table or the staff table.

Which means a staff or a student can be a user.

Is it possible?

I tried with the following foreign keys.

ALTER TABLE users
ADD CONSTRAINT fky_stu
    FOREIGN KEY (ID)
    REFERENCES Students (STU_Id);

ALTER TABLE users
ADD CONSTRAINT fky_stf
    FOREIGN KEY (ID)
    REFERENCES Staff (STF_Id);

But with both foreign keys, the users table only allows a row to be inserted if the key is found in both tables (Students and Staff).

I need it to allow the insert if the key is found in either table (Students or Staff).
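
A declarative foreign key can't express "in either table". One workaround (a sketch, assuming MySQL 5.5+ for SIGNAL): drop the two FKs and enforce the rule in a BEFORE INSERT trigger instead.

DELIMITER //
CREATE TRIGGER users_check_id BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    -- reject the row unless the Id exists in at least one parent table
    IF NOT EXISTS (SELECT 1 FROM students WHERE STU_Id = NEW.Id)
       AND NOT EXISTS (SELECT 1 FROM staff WHERE STF_Id = NEW.Id) THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Id not found in students or staff';
    END IF;
END//
DELIMITER ;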

"TASK MANAGER" process is taking over a single-user mode database. What is it?

Posted: 08 Jul 2013 08:34 PM PDT

This is a duplicate of the question I asked on stackoverflow, but I was advised, that someone here could have a better idea what is happening.

I have a sporadic problem: when upgrading SQL Server in single-user mode using a .NET SqlConnection, some other application somehow logs into the database while the SQL code is being executed and kicks my process out. The SqlConnection is not closed or disposed in any way, but some other application somehow ends up connected to the database, and that kicks my connection out.

When I run sp_who, I can see that the process that took control of the database has Command="TASK MANAGER".

Could anyone tell me what this process is, what its purpose is, and how in the world it could get into a database that is in single-user mode with an active connection?

User login error when trying to access secured SQL Server database

Posted: 08 Jul 2013 01:51 PM PDT

We have a username that was recently renamed from one username to another (think getting married). The Active Directory admin renamed the user because "it has always worked in the past".

One vendor package we use uses the built-in MS SQL Server security. Each module has three groups:

  • xxView = View Only
  • xxUser = Add, Update rows (cannot delete)
  • xxAdmin = Delete rows

So we can add a person to one of these groups and they get the appropriate access. I don't have the actual error message in front of me anymore, but it said that they are not authorized to access table CriticalVendorTable. It worked before the rename. The admin removed the person from each group and re-added them. Still no go. I even restarted the server and it still doesn't work. My best guess is that there is a UUID (or unique ID) somewhere that is causing problems.

The vendor's response is to delete the user and then re-add them. I have only had time to do some brief searching, but I found this page: AD User SID Mis-mapping. Would this be worth trying? Would it be better to just delete the user and recreate them?
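
Before dropping anything, the SID theory is cheap to test (a sketch; DOMAIN\jdoe is a placeholder for the renamed account): compare the SID SQL Server has stored for the login with the SID Active Directory currently reports, and if they differ the login is effectively orphaned.

-- SID stored on the SQL Server side
SELECT name, sid FROM sys.server_principals WHERE name = N'DOMAIN\jdoe';

-- SID Active Directory reports for the renamed account
SELECT SUSER_SID(N'DOMAIN\jdoe');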

hstore for versioning fields in postgresql

Posted: 08 Jul 2013 08:51 PM PDT

I have a PostgreSQL database that will hold about 50 tables, each of them having about 15 fields, with at least 300,000 rows in each table.

In order to track the changes made to each field, I am thinking of creating a table defined by:

CREATE TABLE fieldhistory(tableid int, fieldid int, id bigint, value hstore);
CREATE INDEX fieldhistory_index ON fieldhistory(tableid, fieldid, id);

I would expect the table to grow and grow, and I will retrieve data from fieldhistory only by tableid, fieldid and id.

When the tables are modified I would add a new record with something like:

SELECT upserthistory(1,1,1,'first update','115435','3');
SELECT upserthistory(1,1,1,'second update','115435','3');

where upserthistory is defined by :

CREATE OR REPLACE FUNCTION upserthistory(key1 INT, key2 INT, key3 BIGINT, data TEXT, theuser TEXT, whend TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE fieldhistory SET value = value || ((whend||'.'||theuser) => data)
        WHERE tableid = key1 AND fieldid = key2 AND id = key3;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO fieldhistory(tableid, fieldid, id, value)
            VALUES (key1, key2, key3, hstore(array[whend||'.'||theuser, data]));
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing, and loop to try the UPDATE again.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

Are there better approaches to accomplishing this? The table would grow to at most 225 million rows. Or would it be better to have 50 tables of at most 4.5 million rows each? Note that the actual trace of each field will go into the hstore.

SSIS Script Component - How to modify the Output0Buffer

Posted: 08 Jul 2013 01:14 PM PDT

I have a script component which accepts records from a SQL Azure database table. The script then invokes a web service, which returns the number of failed and successful records.

For all the records, I would like to add a Status field which has either "success" or "fail", output from the script component.

I then log that output to a text file.

Problem: I haven't been able to add a status for each input record, since the web service call only happens on PostExecute.

I tried this, but it still doesn't work:

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    listOfData.Add(new ClockData
    {
        TimeClockID = Row.TimeClockID,
        PersonID = Row.EmployeeCode,
        LocationCode = Row.ClockInServiceContextID,
        ClockInDateTime = Row.ClockInDateTime,
        ClockOutDateTime = Row.ClockOutDateTime
    });
}

public override void CreateNewOutputRows()
{
    MessageBox.Show("Test CreateNewOutputRows");
    MessageBox.Show(listOfData.Count.ToString());
    foreach (var item in listOfData)
    {
        Output0Buffer.AddRow();
        Output0Buffer.EmployeeCode = item.PersonID;
        MessageBox.Show(item.PersonID);
    }
}

Best practices on common person fields (Name, email, address, gender etc...)

Posted: 08 Jul 2013 12:55 PM PDT

What are the most common best practices for length and data type for common fields like:

  • First Name
  • Last Name
  • Address
  • Email
  • Sex
  • State
  • City
  • Country
  • Phone Number

etc....
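
There is no single standard, but as one commonly seen set of choices (a hedged sketch, T-SQL style; every length here is a judgment call, not a rule):

CREATE TABLE person (
    first_name    NVARCHAR(50)  NOT NULL,
    last_name     NVARCHAR(50)  NOT NULL,
    email         VARCHAR(254)  NOT NULL,  -- 254 is the widely cited practical maximum
    sex           CHAR(1)       NULL,      -- ISO/IEC 5218 uses a one-digit numeric code instead
    address_line1 NVARCHAR(100) NULL,
    city          NVARCHAR(60)  NULL,
    state         NVARCHAR(50)  NULL,      -- CHAR(2) if US-only
    country       CHAR(2)       NOT NULL,  -- ISO 3166-1 alpha-2 code
    phone         VARCHAR(20)   NULL       -- E.164 allows at most 15 digits, plus formatting
);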

[SQL Server] Upgrade SQL Server 2000 to SQL Server 2008



Upgrade SQL Server 2000 to SQL Server 2008

Posted: 08 Jul 2013 06:07 AM PDT

I'm looking to upgrade 4-5 SQL Server 2000 database instances to 2008. Most are SP3 but one is RTM. The upgrade destination is SQL Server 2008 Enterprise Edition (side-by-side upgrade). Three questions:

1. Is it necessary to upgrade SQL Server 2000 to SP4?
2. The supposed OS requirement before upgrading is Windows Server 2003 SP2. The upgrade candidates have Windows 2003 SP1 and Windows 2000 SP4. Is this a concern?
3. I want to copy the .bak file to the test SQL 2008 instance, then restore. Any issues or concerns here?

Any comments would be appreciated - thank you.

Update Novell eDirectory from a SQL Server database

Posted: 08 Jul 2013 01:38 AM PDT

I'm really sorry, here's yet another of my terrible questions. I apologise in advance!

I've been asked to provide some information from the HR database to update our Novell eDirectory system, the idea being that when a new starter starts, their email account etc. can be set up automatically. Likewise when they leave, their network account can be closed automatically, etc. (Yes, we're a good 10 years behind everybody else.) If they change their name, they want this to be reflected too. You get the idea.

I am told by our directory person that eDirectory 'only works on updates', so I've been asked to write a series of triggers such that any time the data in a number of fields (about 25 so far!) is updated, these changes will be written to what they refer to as 'intermediate tables' (actually a separate database) in the form of some sort of event log which eDirectory can then check against. I don't know anything about eDirectory and I have no way of knowing that what I'm being told is right.

I very strongly do NOT want to use triggers for this. They're a nightmare to maintain, and it's very hard to diagnose problems when they fail. I'd much rather grant access as a view, or maybe even a scheduled export of data, but I'm being told that eDirectory can't cope with this. I have no way of knowing if this is right or not; does anybody here know, please? Surely there must be a way to refresh the data in eDirectory every day?

Thanks in advance.

Generate Script with Data

Posted: 08 Jul 2013 01:20 AM PDT

I am trying to generate a script with data. I have imported this data from Excel to SQL and now need to forward the script in SQL to my colleagues in Notepad. I tried to follow this link:

http://stackoverflow.com/questions/982568/what-is-the-best-way-to-auto-generate-insert-statements-for-a-sql-server-table

But I don't seem to locate the Advanced button. Kindly advise.

[MS SQL Server] Copy reports from SharePoint development site to production site



Copy reports from SharePoint development site to production site

Posted: 08 Jul 2013 04:05 AM PDT

Hi,

I want to copy reports built on a SharePoint 2010 development server to a production SharePoint site. SSRS is configured in SharePoint integrated mode, so the reports were built on the site itself. Tutorial followed: http://www.youtube.com/watch?v=wyCm3HbxKiQ

SharePoint 2010 and Reporting Services 2008 R2 both reside on the same server for development. On production, they reside on different servers.

Will copying data from SharePoint development to prod also copy the reports? As this is new report development, no reports are yet scheduled or subscribed. Copying reports from the development site to production will be done every time.

Thanks in advance,
Puja

Login failure to SQL instance in 2012

Posted: 07 Jul 2013 09:28 AM PDT

Hi All,

I'm seeing login failures for our end users on a server which has recently been migrated to Windows/SQL 2012. The strange thing is that if a user connects with a service account it works fine, but if he tries to connect as himself it throws a login failure error. Has anyone seen this issue?

I'm also seeing other weird things, like not being able to execute a .cmd file in a SQL job even though the same command works when I run it locally on the server.

Please help me.

What can I do to an offline database?

Posted: 08 Jul 2013 01:39 AM PDT

I am in the process of migrating DBs from an old server to a new server. Currently, what I've been doing is:

1. Take a full backup.
2. Set the old DB offline using: ALTER DATABASE [Whatever] SET OFFLINE WITH ROLLBACK IMMEDIATE
3. Restore the backup over-top the "test" restore on the new server.

My questions are:

1. Can I back up a DB that is in an offline state? I'm inclined to think no.
2. Can I restore over-top a DB that is in an offline state (using WITH REPLACE)?

Although, seeing as no one uses the "test" restore on the new server, except the Dev to verify they can get to it, it's not a big deal if I can or can't restore over-top it, as I can leave it online for the restore...

Thanks,
Jason

Drop Database before Restore - Any Downside ?

Posted: 08 Jul 2013 12:12 AM PDT

I have a job that copies the production database backup to a reporting server every night, and then restores over the reporting database. Before the restore runs, there is a process to kill any connections. Rarely, but annoyingly, the restore fails because it could not get an exclusive lock on the database. I assume something grabs a lock right after the "kill spid" step runs and before the restore starts.

Is there a better way to make sure my restore does not fail? If I drop the database first, I may still have to kill connections. BOL says:

".. You cannot drop a database currently being used. This means open for reading or writing by any user. To remove users from the database, use ALTER DATABASE to set the database to SINGLE_USER. .."

Thanks
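
One common pattern (a sketch; the database name and backup path are hypothetical): putting the database in SINGLE_USER with ROLLBACK IMMEDIATE both kills existing connections and stops new ones from sneaking in before the restore, closing the window the separate "kill spid" step leaves open.

ALTER DATABASE ReportDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE ReportDb FROM DISK = N'\\server\share\Prod.bak' WITH REPLACE;
ALTER DATABASE ReportDb SET MULTI_USER;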

copy system database files

Posted: 07 Jul 2013 02:55 PM PDT

Dear All,

I would like to have a backup of the SQL system databases (the files themselves), not the SQL backup. When I try to copy these files (master, model, msdb, tempdb), it says the files are in use and it does not allow me to copy.

My clarifications:

1) Without having to bring down the database, how can I copy these files to another location?
2) Is it possible to back up these files themselves during a full database backup? How do I perform that task?

Many thanks,
Jai

Notifying SQL of additional CPU additions

Posted: 07 Jul 2013 02:05 PM PDT

Dear All,

I am planning to add two more CPUs to each node of my SQL 2008 Standard Edition setup (VMware-virtualized 2-node failover cluster) on Windows 2008 R2, up from the current 4 CPUs. I have two clarifications:

1) Will this affect my current SQL licensing in any way?
2) Am I required to run any scripts to let SQL know about this change in the hardware? How do I accomplish it?

Thanks,
Jai

SQL job or task prioritization

Posted: 07 Jul 2013 02:06 PM PDT

Dear All,

My current SQL configuration is SQL 2008 Standard Edition (VMware-virtualized 2-node failover cluster) on Windows 2008 R2. Whenever I run my SQL backup, the CPU shoots up over 90%. How do I restrain this so that the CPU does not go above 60%? Is there any way I can prioritize this task?

Many thanks,
Jai

SQL 2008 Express Edition Server Agent won't start

Posted: 07 Jul 2013 10:44 AM PDT

I can't get Server Agent to start on my install of Express Edition. I did some research and it seems that Server Agent is not supported or part of the software. Does that seem correct? How would a DBA set up a backup job in this environment? Any feedback would be appreciated.
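
Correct - SQL Server Agent is not usable in Express Edition. The usual substitute is a sqlcmd call scheduled with Windows Task Scheduler (a sketch; the instance, database and path are placeholders):

sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak'"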

[Articles] Your Frame of Reference



Your Frame of Reference

Posted: 07 Jul 2013 11:00 PM PDT

How you examine and approach a problem might have a large impact on how you solve it. Steve Jones notes that it's important that all of us look at a problem from the customer's point of view as well as our own.

[SQL 2012] SQL Server 2012 Express - Error 10060 sleeping connection



SQL Server 2012 Express - Error 10060 sleeping connection

Posted: 07 Jul 2013 10:42 PM PDT

We have SQL Server 2012 Express installed on a VMware instance of MS Server 2012. Most (but not all) mornings I find that a connection cannot be established through the clients. SQL Server returns the 10060 error. I then open SSMS and make the connection with a SQL-authenticated account, and all is well for all the clients, who can then connect normally. It is like SQL is sleeping. I've read the 10060 error posts, but since a connection can be made, the suggested resolutions do not seem applicable.

Selected server settings:

Protocols: Shared Memory, TCP/IP enabled; Named Pipes disabled
SQL Server Services: SQL Service, Browser: Automatic; Server Agent: State-Stopped w/Start Mode set to Other
Windows Services: SQL Server (SQLEXPRESS): Automatic
SQL Server Agent: Disabled
SQL Server Browser: Automatic
SQL Server VSS Writer: Automatic

Any ideas would be appreciated.

Manage server memory via Resource Governor/WSRM vs SQL server Properties

Posted: 07 Jul 2013 12:15 PM PDT

Hi,

I am very new to database administration and I have a question about managing min/max server memory. What would be the reason for configuring server memory via Resource Governor instead of via the SQL Server Properties page? What is the benefit of each over the other? It does make sense for CPU consumption, though, since the Properties page does not offer that functionality.

Can anyone please shed some light? It would be much appreciated. Thanks.

Create a data base on specific data file or log file

Posted: 07 Jul 2013 07:07 AM PDT

Hello, I'm going to create a database on a specific data file, but I got these errors. I know one of them is related to user permissions, but I don't know how I can change the permission.

Msg 5123, Level 16, State 1, Line 1
CREATE FILE encountered operating system error 5 (Access is denied.) while attempting to open or create the physical file 'C:\Program Files\Microsoft SQL Server\Data\testdatafile.mdf'.
Msg 1802, Level 16, State 4, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

Thanks a lot
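
The "Access is denied" error typically means the SQL Server service account can't write to that folder. A hedged workaround is to point the files at a directory the service account can write to (the path below is illustrative), or to grant the service account permissions on the target folder:

CREATE DATABASE testdb
ON (NAME = testdb_dat, FILENAME = 'C:\SQLData\testdatafile.mdf')
LOG ON (NAME = testdb_log, FILENAME = 'C:\SQLData\testdatafile_log.ldf');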

Validation Errors and Error Propagation in SSIS 2012

Posted: 07 Jul 2013 05:44 AM PDT

Hi everyone! I have a question... here's the situation:

Do validation errors at runtime in SSIS packages have a particular behavior when it comes to error propagation?

I have a package with a Send Mail Task that gets the recipients list (CC included) from a table in a database, so at design time I set the DelayValidation property to true in order to execute the package. The Send Mail Task is inside a foreach loop, and I've set the Propagate system variable to false in the loop's OnError event handler... therefore, if an error ever occurs inside the loop, it shouldn't be propagated to higher levels.

The thing is that at runtime there may be no recipients at all (just an empty string), and it seems that case causes an error to propagate to the package level despite the loop's OnError handler. I've tried other types of errors, like setting an invalid (non-empty) mail address, and the package works as expected: a failure in one iteration does not prevent the loop from continuing to iterate.

The error is "there were errors during task validation".

Thanks in advance and warm regards :)

[T-SQL] time elapsed between sp completed and RPC completed



time elapsed between sp completed and RPC completed

Posted: 07 Jul 2013 11:59 PM PDT

I ran a trace on a slow-running remote stored proc call:

exec db1.mydb.dbo.spInsertSomeRows

The trace ran on the destination server (db1). I captured the SP:Completed and RPC:Completed events and noticed that sometimes there was a duration difference between the two events for the same procedure call, sometimes more than half a second. Could someone please explain to me what made the difference? Note that at other times everything was fine: the durations of the two events were either the same or a couple of milliseconds apart, which was expected. Thanks!

Split data

Posted: 07 Jul 2013 05:08 PM PDT

Hi all, I was trying to split the values using DENSE_RANK but I faced some issues... here is the problem below:

Create table tbl1(
    ID INT identity primary key,
    voucherno varchar(10),
    modifieddate datetime
)

insert into tbl1
values ('V00001','2013-07-07 11:32:44.810'),
       ('V00002','2013-07-07 11:32:44.810'),
       ('V00003','2013-07-08 12:32:44.810'),
       ('V00004','2013-07-08 12:32:44.810'),
       ('V00005','2013-07-07 11:32:44.810')

OUTPUT

MinVoucher  MaxVoucher  Modifieddate
V00001      V00002      2013-07-07 11:32:44.810
V00003      V00004      2013-07-08 12:32:44.810
V00005      V00005      2013-07-07 11:32:44.810

I tried using DENSE_RANK but I didn't get my desired output.
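
One way to get that output is the gaps-and-islands trick rather than DENSE_RANK alone: the difference of two ROW_NUMBER sequences identifies each consecutive run of vouchers sharing a modifieddate (a sketch against the table above):

SELECT MIN(voucherno) AS MinVoucher,
       MAX(voucherno) AS MaxVoucher,
       modifieddate   AS Modifieddate
FROM (
    SELECT voucherno, modifieddate,
           ROW_NUMBER() OVER (ORDER BY voucherno)
         - ROW_NUMBER() OVER (PARTITION BY modifieddate ORDER BY voucherno) AS grp
    FROM tbl1
) t
GROUP BY modifieddate, grp
ORDER BY MIN(voucherno);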

[SQL Server 2008 issues] Tempdb issue



Tempdb issue

Posted: 07 Jul 2013 04:42 PM PDT

We are facing an issue with the tempdb file size. The initial size we allocated is 2 GB, but after we restart SQL Server the size is 1 MB; the 2 GB is gone. Can anyone help? How can the size be kept at 2 GB after restarting SQL Server? Any settings?

Thanks
Jerry
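
The likely explanation: tempdb is rebuilt at its configured initial size on every restart, so if the 2 GB was reached through auto-growth rather than by changing the initial size, it reverts. Setting the initial size persists across restarts (a sketch; 'tempdev' is the default logical name of the first data file, check sys.master_files for the real names):

USE master;
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', SIZE = 2048MB);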

Difference between Full backup and Copy-only full backup

Posted: 07 Jul 2013 06:13 PM PDT

I saw in this thread (http://www.sqlservercentral.com/Forums/Topic567010-357-1.aspx#bm1311005) that a full backup does not truncate the log. So what is the difference between a full backup and a copy-only full backup? For the log backup there is a copy-only backup, which prevents the log chain from breaking without truncating the log. So what is a copy-only full backup?
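
The practical difference for a full backup is the differential base: a regular full backup resets it (later differential backups are relative to that full), while a copy-only full backup leaves the existing backup chain untouched, which is what makes it safe for ad-hoc copies. The syntax, for reference:

BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb_copy.bak' WITH COPY_ONLY;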

Maxium rownum groupwise

Posted: 07 Jul 2013 06:37 PM PDT

create table #Abc(ID int, Name varchar(20), SAl int, Rownum int)

insert #Abc values(1,'Anees',1000,1)
insert #Abc values(2,'Rick',1200,1)
insert #Abc values(3,'John',1100,1)
insert #Abc values(3,'John',1500,2)
insert #Abc values(3,'John',1200,3)
insert #Abc values(4,'stephen',1300,1)
insert #Abc values(5,'Maria',1400,1)
insert #Abc values(6,'MariaN',1400,1)
insert #Abc values(6,'MariaN',60000,2)

O/p:

ID  Name     Sal       Rownum
1   Anees    1000.00   1
2   Rick     1200.00   1
3   John     1100.00   1
3   John     1500.00   2
3   John     1200.00   3
4   Stephen  1300.00   1
5   Maria    1400.00   1
6   MariaN   1400.00   1
6   MariaN   60000.00  2

So I need the table like the one given below:

ID  Name     Sal       Rownum
1   Anees    1000.00   1
2   Rick     1200.00   1
3   John     1200.00   3
4   Stephen  1300.00   1
5   Maria    1400.00   1
6   MariaN   60000.00  2

So please help me.
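
One way to get exactly that result (a sketch): number the rows per ID descending by Rownum and keep the first of each group.

SELECT ID, Name, SAl, Rownum
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Rownum DESC) AS rn
    FROM #Abc
) t
WHERE rn = 1
ORDER BY ID;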

Unknown error messages during generate script from SQL2008 database

Posted: 23 Mar 2009 12:11 PM PDT

I have a SQL 2008 database, running Standard Edition 64-bit, database owned by sa, connected to Management Studio using Windows Authentication mode. When I tried to generate scripts from the database, I got the following error message:

[Operation is not valid due to the current state of the object. (SqlManagerUI)]

which happened at one particular table. I have reviewed this table; its definition is normal, and I can select data from it. I couldn't find any information anywhere relating to this error message, except that someone got it when they were trying to change the authentication mode or sa password. Has anybody seen a similar scenario? Any idea would be appreciated....

Wai

Regular Mirroring Timeout Error Messages

Posted: 07 Jul 2013 11:22 AM PDT

We are running SQL Server 2008 R2 RTM (10.50.1617). We get timeout messages that read like "the mirroring connection for database has timed out after 90 seconds" too often. Now, I have only been a DBA for 6 months of my life, but these messages pop up at least once a week, and if we had not increased the threshold, failovers would be happening regularly as well.

I would like a more seasoned DBA's opinion on this. I talked with the systems/networking guys; they see nothing unusual in terms of machine performance and network traffic, and they have looked at the IP traffic logs on the switches. I am trying to push them to update to Service Pack 3, hoping that will smooth things out. I also read that upping the CPU priority could help. I would love to hear your opinions, and whether I should pressure them to do this. Thanks

Auditing - no machine name? Really?

Posted: 07 Jul 2013 11:48 AM PDT

I'm trialing Auditing using this example:

http://blogs.msdn.com/b/sql_shep/archive/2012/06/27/sql-2012-security-audit.aspx

What I can't understand, though, is that it doesn't store the source machine name or IP address. Am I missing something?

syntax for using /SET for SSIS config file in SQL 2008?

Posted: 07 Jul 2013 03:06 AM PDT

I used to use /CONF to reference an XML config file when running SSIS packages via DTExec with SQL 2005. With the behavior change in SQL 2008 (some variables no longer get the config file value), I am attempting to use the /SET option to pull in the XML config file. Has anyone else done this? With the following code:

set job=Run_DepartmentJob
set dtsxName="DepartmentJob"
set dtexecLocation="d:\DTExec.exe"
set dtsxPath="X:\SIS\DepartmentJob.dtsx"
set dtsxConfigPath="X:\SSIS\Config_files\DepartmentJob.dtsConfig"
call %dtexecLocation% /F %dtsxPath% /SET "\Package.Configurations[Config].Properties[ConfigurationString]";%dtsxConfigPath% /CHECKPOINTING OFF >> %pathout%%job%.OUT

I am getting the following error:

DTExec: Could not set \Package.Configurations[Config].Properties[ConfigurationString] value to X:\SSIS\Config_files\DepartmentJob.dtsConfig.

Is the syntax off? I LIKED the ability to use a config file in SQL 2005 to set environment-specific values for my SSIS packages. That option made deploying to different environments very easy. I don't understand why Microsoft would change the behavior of this option. Are there any other alternatives that people are now using to access XML config files when running SQL 2008 SSIS packages with DTExec?

Thanks!
