Monday, July 8, 2013

[how to] Mysql datatype for username of maximum 15 characters

Mysql datatype for username of maximum 15 characters

Posted: 08 Jul 2013 09:20 PM PDT

I currently have a database with the "user" table having username as one of the columns.

I allow a maximum of 15 characters for the username in my system... Is it fine to have the username column as a varchar(15) datatype?
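
For reference, a minimal sketch of what that column might look like (assuming MySQL/InnoDB; the table and column names are just placeholders, not taken from the original post):

CREATE TABLE users (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
    username VARCHAR(15)  NOT NULL,  -- 15-character limit enforced by the type
    PRIMARY KEY (id),
    UNIQUE KEY uq_username (username)
) ENGINE=InnoDB;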

How does Berkeley DB manage its files?

Posted: 08 Jul 2013 08:02 PM PDT

I'm using Berkeley DB (BDB) as a persistent store for a JMS queue. When I consume an entry from the queue the underlying BDB files do not immediately shrink, but do eventually. I'm running into issues with the BDB files taking up lots of space on the file system while retrieval performance degrades.

My entry size varies considerably but it is not uncommon to have 400,000 messages of around 32kb each in the persistent queue.

I'd like to understand how BDB manages the files so that I can throttle the number of entries for file size/retrieval performance. Or so I can rule out BDB as my persistent store mechanism.

I am probably searching for the wrong terms but have not found what I'm looking for in the Oracle documentation or The Berkeley DB Book. I would not be surprised if BDB doesn't want me messing with its internals but I would be surprised if (at least) an overview of how it does handle its internals is not available.

Please forgive the misleading tag -- I didn't find ones for Berkeley, Sleepycat or BDB and I don't have the reputation to submit a new tag.

Lock hints on preconditions

Posted: 08 Jul 2013 07:59 PM PDT

Assuming for a moment that FK references are not available, what lock hints should be placed on the preconditions of the following example to prevent deadlocks and inconsistent data?

(This is a contrived example, of course, to illustrate a scenario in which a delete/update can only be performed dependent upon the state of some other table)

-- Schema
CREATE TABLE EntityTypes (
    EntityTypeID int NOT NULL,
    Name nvarchar(50) NOT NULL,
    CONSTRAINT PK_EntityTypes PRIMARY KEY CLUSTERED (EntityTypeID)
);

CREATE TABLE Entities (
    EntityID int IDENTITY(1,1) NOT NULL,
    EntityTypeID int NOT NULL,
    EntityName nvarchar(50) NOT NULL,
    CONSTRAINT PK_Entities PRIMARY KEY CLUSTERED (EntityID)
);

-- Test data
INSERT INTO EntityTypes (EntityTypeID, Name) VALUES (1, 'Test 1');

-- Updater 1
SET XACT_ABORT ON;
BEGIN TRAN;

IF EXISTS (SELECT 1 FROM Entities WHERE EntityTypeID = 1)
BEGIN
    RAISERROR('Cannot delete entity type which is referenced', 16, 1);
    RETURN;
END

DELETE FROM EntityTypes WHERE EntityTypeID = 1;

COMMIT TRAN;

-- Updater 2
INSERT INTO Entities (EntityTypeID, EntityName) VALUES (1, 'Test Entity')

Environment: SQL Server 2012
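
For illustration only, one commonly suggested arrangement for this kind of check-then-act pattern is to take UPDLOCK/HOLDLOCK locks on the precondition reads in both writers so the checks serialize against each other. This is a sketch under that assumption, not a verified deadlock-free answer:

-- Updater 1: hold the lock taken by the existence check until commit
SET XACT_ABORT ON;
BEGIN TRAN;

IF EXISTS (SELECT 1 FROM Entities WITH (UPDLOCK, HOLDLOCK) WHERE EntityTypeID = 1)
BEGIN
    RAISERROR('Cannot delete entity type which is referenced', 16, 1);
    ROLLBACK TRAN;
    RETURN;
END

DELETE FROM EntityTypes WHERE EntityTypeID = 1;
COMMIT TRAN;

-- Updater 2: verify the parent row still exists under a blocking lock before inserting
BEGIN TRAN;

IF EXISTS (SELECT 1 FROM EntityTypes WITH (UPDLOCK, HOLDLOCK) WHERE EntityTypeID = 1)
    INSERT INTO Entities (EntityTypeID, EntityName) VALUES (1, 'Test Entity');

COMMIT TRAN;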

Sql Server 2012 Extended Events for Selective Xml Indexing not showing results

Posted: 08 Jul 2013 02:09 PM PDT

I created an Extended Events session to monitor and tune a selective XML index:

CREATE EVENT SESSION [SelectiveXmlIndexing] ON SERVER
ADD EVENT sqlserver.selective_xml_index_no_compatible_sql_type,
ADD EVENT sqlserver.selective_xml_index_no_compatible_xsd_types,
ADD EVENT sqlserver.selective_xml_index_path_not_indexed,
ADD EVENT sqlserver.selective_xml_index_path_not_supported
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF)
GO

In SSMS, when I view live data and then execute the XML queries and views, no results are returned.


Is there another step I need to complete before seeing the results?

I followed:

http://www.mssqltips.com/sqlservertip/2731/managing-sql-server-extended-events-in-management-studio/

http://www.sqlskills.com/blogs/bobb/choosing-paths-for-selective-xml-index-part-2-using-the-sxi-xevents/

http://www.sqlskills.com/blogs/jonathan/sql-server-2012-extended-events-update-3-viewing-target-data/
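
As an aside (an assumption on the editor's part, not something stated in the original post): the session above is created with STARTUP_STATE=OFF, so it would also need to be started explicitly before any events can be captured, for example:

ALTER EVENT SESSION [SelectiveXmlIndexing] ON SERVER STATE = START;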

Can SQL Server 2008R2 SSIS packages access SQL Server 2012 databases with driver updates only?

Posted: 08 Jul 2013 02:21 PM PDT

We have two products running on SQL Server that we will be upgrading to SQL Server 2012 on a staggered schedule. Product B uses SSIS packages (using ODBC drivers) to extract data from Product A's database.

The scenario would be to move Product A to SQL Server 2012 first, with product B to follow in the coming months. Product B would continue to run in a 2008R2 environment while Product A would be moved to a new SQL Server 2012 environment.

The hope is that there would not be a need for package changes and that installing the new ODBC Driver (Microsoft ODBC Driver 11) on the 2008R2 SSIS environment would allow the packages to execute against Product A.

EDIT - The package would remain in the SQL Server 2008 format and would use the SQL Server 2012 database as its source.

Does anyone know if this is in fact viable or are there known issues that would require package changes?

Thank you for any help you can provide.

Matt

Indexes and Optimizations for Logging System

Posted: 08 Jul 2013 06:42 PM PDT

I'm designing tables for a new log system and am having trouble figuring out what indexes I'll need (I'm a developer, not a DBA), and am also open to any suggestions/criticisms/etc. for making the tables as "fast" as possible (from my app's point of view).

Background info: the DB itself is a MySQL instance hosted on AWS. It's a shared server and I don't have tuning privileges beyond basic table setup, indexes, key constraints, etc. This is a logging system, so very write-intensive. Although I will write scripts to dump certain log records (errors, etc.), there won't be much reading going on.

The logging system will consist of two tables: log_entries, which holds the log records themselves, and log_levels, which is a "static" lookup/reference table referenced by log_entries. Below, where I specify the Type as just varchar, I haven't figured out what size to make the field (varchar(50), varchar(100), etc.). My proposed schema for the two tables is as follows:

log_levels:

Column | Type        | Special Info                | Nullable? | Example
-------+-------------+-----------------------------+-----------+---------
id     | int         | Primary Key, Auto-Increment | No        | 3
name   | varchar(20) |                             | No        | "Info"
tag    | varchar(20) |                             | No        | "INFO"

log_entries:

Column          | Type    | Special Info                | Nullable? | Example
----------------+---------+-----------------------------+-----------+---------------------------------
id              | int     | Primary Key, Auto-Increment | No        | 19348
app_name        | varchar |                             | No        | "My App"
timestamp       | long    |                             | No        | 1373316599 (UNIX Epoch time)
log_level_id    | int     | Foreign Key (log_levels)    | No        | 3
origin_class    | varchar |                             | No        | "com.me.myapp.SomeClass"
message         | text    |                             | No        | "Something log-worthy happened."
throwable_class | varchar |                             | Yes       | "java.lang.RuntimeException"

The problem child here is log_entries.message. This field will store a lot of information, and for reasons outside the scope of this question, I can't break this information out into better-normalized fields/tables. It's just a big text blob that will store lots of different kinds of data, and again, there's nothing I can do about that.

When I do perform reads (for the purposes of analytics/debugging), these are the queries I plan on using the most (and hence perhaps a basis for creating indexes from):

// Give me all the ERROR log entries.
SELECT * FROM log_entries le INNER JOIN log_levels ll ON le.log_level_id = ll.id WHERE ll.tag = 'ERROR'

// All ERROR log entries for a particular user.
SELECT * FROM log_entries le INNER JOIN log_levels ll ON le.log_level_id = ll.id WHERE le.message LIKE '%user: john.doe@example.com%' AND ll.tag = 'ERROR'

// All log entries for a particular user.
SELECT * FROM log_entries WHERE message LIKE '%user: john.doe@example.com%'

// All unique throwables in the system.
SELECT DISTINCT(throwable_class) FROM log_entries

// All messages in a given time range, or after a certain datetime.
SELECT * FROM log_entries WHERE timestamp >= 1373315311 AND timestamp <= 1439065711

// All log entries for a particular user within a time range.
SELECT * FROM log_entries WHERE message LIKE '%user: john.doe@example.com%' AND timestamp >= 1373315311 AND timestamp <= 1439065711

So I ask:

  • What indexes should I be creating here (see the sketch after this list), and is there any sort of management/housekeeping I should be performing on the indexes (re-indexing, vacuuming, etc.)?
  • Any other optimizations I can make in general (again, nothing I can do about log_entries.message)?
    • For instance, I've heard that with some database servers (not necessarily MySQL), the order of the columns in the table definition can also impact performance.
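
Purely as an illustration of what indexes matching the reads above might look like (index names are made up, and note that the LIKE '%...%' searches on message cannot use an ordinary B-tree index at all):

CREATE INDEX idx_le_level_time ON log_entries (log_level_id, `timestamp`);
CREATE INDEX idx_le_time       ON log_entries (`timestamp`);
CREATE INDEX idx_le_throwable  ON log_entries (throwable_class);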

Thanks in advance!

Can't send PHP query to MySQL [migrated]

Posted: 08 Jul 2013 04:42 PM PDT

When I try to send a query from PHP 5.4.6 (TS, VC9) to MySQL 5.5 (NTS, VC9) I get the message: Fatal error: Call to undefined function mysql_connect(). After restarting Apache 2.2 on Windows XP it says:

PHP Startup: mysql: Unable to initialize module
Module compiled with build ID=API20100525,NTS,VC9
PHP compiled with build ID=API20100525,TS,VC9
These options need to match

How can I solve this issue?

SUSPECT in mdf files

Posted: 08 Jul 2013 01:42 PM PDT

When I run

ALTER DATABASE EMS102013 SET SINGLE_USER;

DBCC CHECKDB (EMS102013) WITH NO_INFOMSGS, ALL_ERRORMSGS

it shows this message:

Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.

How can I fix this problem?
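
Two quick diagnostics that might narrow this down (a sketch only; the error itself points at tempdb space or system-table corruption):

-- 1. How much tempdb space would CHECKDB need for this database?
DBCC CHECKDB (EMS102013) WITH ESTIMATEONLY;

-- 2. How much unallocated space does tempdb currently have (in MB)?
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_space_mb
FROM   tempdb.sys.dm_db_file_space_usage;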

MySQL - Randomly long DESCRIBE queries

Posted: 08 Jul 2013 01:07 PM PDT

I have several queries like this (running on MySQL 5.5):

DESCRIBE `TABLE`;  

And it randomly takes 5-10 seconds on my production servers. We have an ORM that caches this data once per day (per web server) and I'm wondering why I see these slow queries popping up from time to time. I'm thinking that I could just change the web servers to cache until their processes are restarted but it'd be nice to know if there's an easy fix or something I could look into that may be causing this.

RTF to Plain-Text? [on hold]

Posted: 08 Jul 2013 12:34 PM PDT

Did you ever get the library from this post? We need to convert whole, large tables' worth of RTF information into something usable and would rather not reinvent the wheel if possible.

Converting RTF in a text column to plain text in bulk

Partitioning vs Indexes

Posted: 08 Jul 2013 04:04 PM PDT

I'm very new to databases, so I apologize if my question is very basic...

Anyway, I'm creating a table with what seems like a lot of data (500 million rows right now, and potentially twice as much in the future). I need a way to access the data in this table quickly, so I'm researching partitions and indexes. However, I'm getting confused about when I'd want to create a partition vs. an index. I have three columns that seem like reasonable candidates for partitioning or indexing:

  • Time (day or week, data spans a 4 month period)
  • Customer "Token": The last two digits of a user's ID
  • Client ID (typically will have 5-20 clients in the dataset)

When running future selects against this table, it's likely that I'll be filtering on client id as well as wanting to do some sampling (which I would like to do by the "Token" variable). I may occasionally be filtering by the time variable as well.

So, my question is this: how should I organize my table? Should I partition by Client and Token, and then create an index on time? Or just partition on client and create indices on time and token? And, more importantly, what is the logic behind the strategy you recommend?

Also, after I've created the table, will the indexes break if I add more data into it (particularly new clients for the same date/token range)? Is recreating an index relatively simple?
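
For illustration only, one possible shape of such a table, assuming MySQL (the platform is not stated in the post) and made-up column names, would be to partition by client and keep ordinary secondary indexes on the other two columns:

CREATE TABLE big_table (
    event_date DATE    NOT NULL,
    client_id  INT     NOT NULL,
    user_token TINYINT NOT NULL,
    -- ...remaining columns...
    KEY idx_date  (event_date),
    KEY idx_token (user_token)
)
PARTITION BY LIST (client_id) (
    PARTITION p_client_1 VALUES IN (1),
    PARTITION p_client_2 VALUES IN (2)
    -- one partition per client, added as new clients appear
);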

Thanks so much for your help, and please let me know if you need any more information from me.

TempDB Version Store cleanup

Posted: 08 Jul 2013 04:17 PM PDT

From what I can find the version store will only clean up versions that are older than the oldest active transaction. Question: Is the oldest transaction database specific or will SQL Server keep all versions, regardless of the database, if there is an older transaction still active, period?

Backstory - SQL Server 2005 SP4 Enterprise hosting around 40 databases. TempDB is currently 102 GB, version store is around 98 GB. One of the applications hosted on the database maintains a connection until it is forced to exit (restart, etc.) and the oldest transaction is 40 days old (last restart, based on sys.dm...database_transactions). Two separate large databases had extremely heavy use over the last month and we saw consistent TempDB growth coinciding with these operations. We expected some growth. We did not expect it to keep growing. Question: Are the versions stored in TempDB's version store from these two separate databases still there because a third independent database has a connection that is 40 days old and shows an active transaction_state?

Perfmon counters: Version store is continually growing in the few hours I have tracked it this morning. Version Generation Rate AVG is around 30 kb/s, Version Cleanup rate is 0 kb/s.

Plenty of space left for TempDB, there are around 300 GB of total data files for all user databases, TempDB has grown on average 350 MB per day for each of its 8 data files since the last restart.

Answers to comment questions so as not to have a long running comment section:

Q: Why auto-growth on tempdb? A: It's a cluster that supports important stuff. Through trial and error we found 8 GB sufficient to meet our needs and to allow tempdb to come online quickly in the event of failover. We monitor auto-growth, saw some that we expected, then saw more, did a large increase, then saw more, etc. We believe the version store is not getting cleaned properly, so... here we are.

Q: How do you know the transaction is active and not just an active connection? A: transaction_state says active in sys.dm_tran_active_snapshot_database_transactions and other stuff. It will periodically run a query according to activity monitor when I catch it right. It is of course vendor written and the super user doesn't know how it works. Which leads back to the main question. Is this the culprit, am I barking up the wrong tree?
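
For reference, one quick way to see which snapshot transactions are holding the version store open (a sketch using the same DMV mentioned above) is:

SELECT session_id,
       transaction_id,
       elapsed_time_seconds
FROM   sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;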

Referenced question: find-transactions-that-are-filling-up-the-version-store

How to migrate data between "all-surrogate" databases?

Posted: 08 Jul 2013 04:30 PM PDT

With some periodicity this needs to be done:

  • Inserting data from a production database into a test/development database, so that programmers have fresh data to test on.
  • Migrating data from a database into another, for whatever reason.

Note: databases are relational.

The problem is:

  • Databases were modeled with a "all-PK-are-surrogate" policy.
  • Moving data from one database to another involves programming to avoid collision of PK values (generated by sequences).
  • A table pairing the values of the PKs of origin and destination database's tables is needed in order to do the "equivalency" between surrogate keys.
  • Said table has to be created in advance of migration, by matching business keys (which are not PK, so don't govern FKs)
  • Data migration even of a single table is not trivial, as opposed to tables with natural keys.

What's the easiest way of copying rows from one database to another when all tables have surrogate keys?
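
To make the mapping-table idea from the list above concrete, here is a rough sketch. All table and column names are invented, and it assumes both databases are reachable from one connection (as schemas here; linked servers or dblink would change the syntax):

-- Pair source and target surrogate keys for one table, matched on a business key
CREATE TABLE keymap_customer (
    src_id BIGINT NOT NULL PRIMARY KEY,
    dst_id BIGINT NOT NULL
);

INSERT INTO keymap_customer (src_id, dst_id)
SELECT s.customer_id, d.customer_id
FROM   src.customer s
JOIN   dst.customer d ON d.customer_code = s.customer_code;

-- Child rows are then re-pointed through the map while being copied
INSERT INTO dst.orders (customer_id, order_date, total)
SELECT k.dst_id, s.order_date, s.total
FROM   src.orders s
JOIN   keymap_customer k ON k.src_id = s.customer_id;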

How can I search values, considering occurences, on more than one term in SQL Server CE?

Posted: 08 Jul 2013 02:58 PM PDT

I have a rather complex SQL query that I actually received help from this SE site to come up with parts of. The query takes a search term and searches all relevant values in a database (the database stores all the relevant information for my partially CMS-ish site). It uses both JOINs and UNIONs. If I only needed to search a single term it would be perfect, but alas, my boss knows that most people expect a search bar to work more like Google (i.e., as it stands, my query treats whatever they type as one whole search term, as though it were wrapped in quotes, instead of separating each word).

I know the C# I use server-side will need to split their typed search term into separate values (splitting on whitespace) and store them in an array; that isn't really a problem, nor do I feel any knowledge of C# is needed to answer this question. The real problem is that (given the query shown below) I can't wrap my head around how to compute a couple of the columns (formed by SQL functions involving simple calculations) for more than one parameter. I need ALL of the search functionality to live in one single (even if somewhat oversized) query, because I also need to use both OFFSET and FETCH for pagination, and I can't see how to implement that with multiple queries.

Okay, enough talk, here's the query I have so far (works perfect with only a single search term, but again, I need this to be dynamic enough to search on virtually any number of parameters):

SELECT NULL AS ObjectID, page AS location, 'pageSettings' AS type, page AS value, 'pageName' AS contentType,
       (len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM PageSettings WHERE page LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT NULL AS ObjectID, pageTitle AS location, 'ExternalSecondaryPages' AS type, pageTitle AS value, 'pageName' AS contentType,
       (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(pageTitle) - len(replace(UPPER(pageTitle), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ExternalSecondaryPages WHERE pageTitle LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT NULL AS ObjectID, eventTitle AS location, 'MainStreetEvents' AS type, eventTitle AS value, 'pageName' AS contentType,
       (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(eventTitle) - len(replace(UPPER(eventTitle), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM MainStreetEvents WHERE eventTitle LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN TextObjects ON ContentObjects.ObjectID = TextObjects.ObjectID WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN ParagraphObjects ON ContentObjects.ObjectID = ParagraphObjects.ObjectID WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN TitleObjects ON ContentObjects.ObjectID = TitleObjects.ObjectID WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN SubtitleObjects ON ContentObjects.ObjectID = SubtitleObjects.ObjectID WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, linkText AS value, 'content' AS contentType,
       (len(linkText) - len(replace(UPPER(linkText), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(linkText) - len(replace(UPPER(linkText), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN LinkObjects ON ContentObjects.ObjectID = LinkObjects.ObjectID WHERE linkText LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, value, 'content' AS contentType,
       (len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(value) - len(replace(UPPER(value), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN BulletLineObjects ON ContentObjects.ObjectID = BulletLineObjects.ObjectID WHERE value LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, caption AS value, 'content' AS contentType,
       (len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN ImageObjects ON ContentObjects.ObjectID = ImageObjects.ObjectID WHERE caption LIKE '%' + CAST(@0 AS nvarchar) + '%'
UNION
SELECT ContentObjects.ObjectID, location, type, caption AS value, 'content' AS contentType,
       (len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0) AS occurences,
       ((len(caption) - len(replace(UPPER(caption), UPPER(@0), ''))) / len(@0)) * 2 AS priority
FROM ContentObjects JOIN ImageLinkObjects ON ContentObjects.ObjectID = ImageLinkObjects.ObjectID WHERE caption LIKE '%' + CAST(@0 AS nvarchar) + '%'
ORDER BY contentType DESC, location ASC, type DESC

In short, the question is: how can I (and is it even possible to) still get the occurences column (and the priority column, though if I get one I get the other, since they are based on the same calculations) when I need to count occurrences of all of the arguments in the searched text (that is, of each separate word in the typed search term)? I know I can use OR in the WHERE clause, but how can I use OR in an UPPER or REPLACE function?
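
One possible shape of the answer, sketched for just the first sub-select and two terms (here @1 is a second, assumed parameter; the per-term counting expression is simply repeated and summed rather than OR'd inside REPLACE):

SELECT page AS location,
       ((len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0))
     + ((len(page) - len(replace(UPPER(page), UPPER(@1), ''))) / len(@1)) AS occurences
FROM PageSettings
WHERE page LIKE '%' + CAST(@0 AS nvarchar) + '%'
   OR page LIKE '%' + CAST(@1 AS nvarchar) + '%'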

ADDITIONAL INFORMATION, IF NEEDED OR CURIOUS ABOUT ANY OF THE ABOVE

There is probably a little bit of overhead using this query, but I really don't mind that a whole lot, honestly. I am way more worried about getting the functionality I need into a single query (which may not even be possible considering my needs, I guess).

Notice the @0, which is just a placeholder used for parameterized queries (probably common knowledge, but I thought I would mention it just in case); the search term is later supplied by C# as an additional argument to the function that actually queries the database (again, probably obvious, but this is a common anti-SQL-injection tactic in my environment, if not many others).

Right now, the priority column simply always multiplies the occurences column by 2, but this is only until I work out exactly what each multiplier should be, considering the priority desired.

I am in a C#.net WebMatrix with web-pages environment. (Effectively this means I am using SQL Server CE).

I apologize if this question is a little confusing and am more than happy to clarify anything needed.

UPDATE

I am now more scared than ever that this may not be possible, as I have just discovered that conditional logic (i.e., IF ... ELSE in my case) isn't supported by SQL Server CE. (hangs head in disappointment...)

UPDATE AGAIN

It seems that, considering the limitations to SQL Server CE and my own requirements, this isn't entirely possible. SQL Server CE is still powerful, but just not as powerful as its bigger, more complete, brother. I believe, though, for my own environment, I may indeed have found a way to complete what's left of my task using my server-side language to manipulate the compilation of the SQL query string. As long as I end up with one query in the end, I can still use OFFSET and FETCH for pagination in the way I was taught. I just want to thank any who looked this over and tried to help. Now, come on, C#, we have some work to do...

Storing complex userdata

Posted: 08 Jul 2013 03:19 PM PDT

Assume a tool where one can create a user account, then store 'fields' for that account. Each field can contain zero or more 'sub-fields', and each of those can contain zero or more strings.

I wonder how a professional developer would design a database structure for such a use case.

My approach would be:

  • Create a table for each user
  • Add two rows, fields and id
  • store a big JSON object in the first row, which contains every subfield and its strings

But somehow I think this won't be the best way to solve it, and I'm having a giant brain fart.
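
For comparison, a more conventional normalized layout might look roughly like this (table and column names are just placeholders, and the types are only a sketch):

CREATE TABLE accounts (
    account_id INT PRIMARY KEY,
    username   VARCHAR(50) NOT NULL
);

CREATE TABLE fields (
    field_id   INT PRIMARY KEY,
    account_id INT NOT NULL REFERENCES accounts(account_id),
    name       VARCHAR(50) NOT NULL
);

CREATE TABLE sub_fields (
    sub_field_id INT PRIMARY KEY,
    field_id     INT NOT NULL REFERENCES fields(field_id),
    name         VARCHAR(50) NOT NULL
);

CREATE TABLE sub_field_values (
    value_id     INT PRIMARY KEY,
    sub_field_id INT NOT NULL REFERENCES sub_fields(sub_field_id),
    value        TEXT NOT NULL
);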

I would greatly appreciate any suggestions.

How to change the default delimiter in PostgreSQL 9.1.9 using SET?

Posted: 08 Jul 2013 08:05 PM PDT

From within a bash script I tried to use a replace and a detection on a field containing a pipe like:

IF position('|' in var) IS NOT NULL THEN ...
...REPLACE(field, '|', '#')

The data itself was loaded into the DB using the COPY statement e.g.

COPY records FROM '$datasource' WITH DELIMITER AS ',' CSV HEADER NULL as '';  

I am using psql (9.1.9) and the default field separator is pipe:

#> \f  #> Field separator is "|".  

Ideally I would like to SET the default delimiter in a CREATE OR REPLACE FUNCTION at runtime, like:

...  END; \$\$ LANGUAGE plpgsql SET config_param TO '#';  

Or as a separate statement, just like SET config_param TO '#';

I searched the documentation and internet for the right name of the config parameter without luck. Any help appreciated.

Inserting images in database [on hold]

Posted: 08 Jul 2013 02:28 PM PDT

How can images be inserted into a table? varbinary is not working for me at the SQL command prompt; what would the syntax be?
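
A minimal sketch, assuming SQL Server (the platform is not stated in the post; the table name and file path below are made up):

CREATE TABLE pictures (
    picture_id INT IDENTITY(1,1) PRIMARY KEY,
    img        VARBINARY(MAX) NOT NULL
);

-- Load an image file from disk on the server into the varbinary column
INSERT INTO pictures (img)
SELECT BulkColumn
FROM OPENROWSET(BULK N'C:\images\logo.png', SINGLE_BLOB) AS src;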

Extract data to another computer

Posted: 08 Jul 2013 12:46 PM PDT

I have SQL Server 2008 R2 Express edition, and I want to export the data in that database to another computer so I can import it again on that computer.

How please?

What I have tried

Right-click in the database -> Tasks -> Export -> Set server name + password + set database source.

My problem

I found that I have to set the destination, but I couldn't find a way to make the destination a file or anything like that; the only destination offered is the same database I am extracting the data from.

Help please, thanks in advance
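
For reference, one common route is a plain backup and restore (a sketch; the database name, paths, and logical file names below are assumptions and will differ in practice):

BACKUP DATABASE MyDatabase
TO DISK = N'C:\Backups\MyDatabase.bak'
WITH INIT;

-- Copy the .bak file to the other computer, then on that server:
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase'     TO N'C:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_log' TO N'C:\Data\MyDatabase_log.ldf';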

How to do incremental/differential backup every hour in Postgres 9.1?

Posted: 08 Jul 2013 04:39 PM PDT

Trying to do an hourly hot incremental backup of a single postgres server to s3.

I have the following setup in postgresql.conf:

max_wal_senders=2
wal_level=archive
archive_mode=on
archive_command='copy "%p" "c:\\postgres\\archive\\%f"'

I did a base backup with pg_basebackup -U postgres -D ..\archive -Ft -l 2013-07-07 -x

Which made a big base.tar file in the archive folder and added some long file name files, which I assume are the WALs.

pg_start_backup('label') and pg_stop_backup() seem to create the WAL files in xp_log, and then copy them to the archive folder.

  1. what command(s) do I run to do a new incremental backup (pg_basebackup does a new base backup which I don't want right now)?

  2. What does the label in pg_basebackup and pg_start_backup() do exactly?

  3. WAL Files don't seem to be removed from pg_xlog. What should I do about that? It seems to keep 5 or so WALs in pg_xlog. Is that to be expected?

  4. Do I need to back up the .backup files in the archive folder, or just the 16,384 KB WAL files?

  5. Should I use the --xlog parameter, and if so, do I need to change wal_keep_segments from 0?

Most efficient way to perform table operations (spec. drop) on tables having table names that match a certain criteria

Posted: 08 Jul 2013 07:55 PM PDT

I am performing a schema dump of certain tables. This is accompanied by dropping the tables that will be dumped. For example, I need to drop all tables whose names start with, let's say, "SRC_". Can you think of a more efficient way of dropping those tables than the script below? Thanks.

begin
  for rec in (select TABLE_NAME
                from ALL_TABLES
               where TABLE_NAME like 'SRC|_%' escape '|'
                 and OWNER = 'SOME_SCHEMA')
  loop
    execute immediate 'drop table ' || rec.TABLE_NAME;
  end loop;
end;
/

any documentation on tpcc-mysql benchmark tool

Posted: 08 Jul 2013 01:01 PM PDT

Where can I find tpcc-mysql documentation? I googled it but can't find any documentation about it.

How make convincing case to DBA to use Oracle external tables for 500 MB to 5 GB files?

Posted: 08 Jul 2013 04:40 PM PDT

In our data feeds, we have a stack of XML files as well as numerous flat files to put into an Oracle11g database. We flatten the XML into delimited files, and load the whole set using SQLLoader.

I want to try a proof-of-concept load via an externally organized table (ORGANIZATION EXTERNAL), but I need to make a convincing case to the DBAs that it won't do something evil to the server. The only plausible test files I have are 400-600 MB; in production we'd add some multi-gigabyte files.
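
For concreteness, the proof-of-concept DDL would have roughly this shape (the directory object, column list, delimiter, and file name below are placeholders, not details from the actual feed):

CREATE TABLE src_feed_ext (
    record_id   NUMBER,
    record_data VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY feed_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
        MISSING FIELD VALUES ARE NULL
    )
    LOCATION ('feed_file.dat')
)
REJECT LIMIT UNLIMITED;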

What are the risks and how should I approach them, any ideas?

Thank you,

Andrew Wolfe

MYSQL matching one column to another in a different table via a FULLTEXT index

Posted: 08 Jul 2013 07:57 PM PDT

I currently have two different tables.

The first table has a list of titles and the IDs associated with those titles; the second table is a list of random headings.

What I would like to know is if there is a way to match up all the titles in table2 to the closest matching title in table1 is this possible?

I've tried:

SELECT title_table.*, random_titles.*, MATCH (title_table.title) AGAINST (random_titles.title) AS relevance
FROM title_table
ORDER BY relevance DESC

But that did not work.

I know I could use this query as each title is being put in table2 with PHP but I already have a lot of titles in the database.

Any help would be amazing thanks

Foreign key with multiple references

Posted: 08 Jul 2013 05:08 PM PDT

I have the following three tables in a MySQL database named "THE_COLLEGE":

mysql> desc students;
+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| STU_Id   | char(5)     | NO   | PRI |         |       |
| STU_Name | varchar(20) | YES  |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> desc staff;
+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| STF_Id   | char(5)     | NO   | PRI |         |       |
| STF_Name | varchar(20) | YES  |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> desc users;
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| Id        | char(5)     | NO   |     |         |       |
| user_Name | varchar(20) | YES  |     | NULL    |       |
| Password  | varchar(20) | YES  |     | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
3 rows in set (0.02 sec)

Before a new row can be inserted into the "users" table, that row must be found in the students table or the staff table.

Which means a staff or a student can be a user.

Is it possible?

I tried with following foreign keys.

ALTER TABLE users
  ADD CONSTRAINT fky_stu
  FOREIGN KEY (ID)
  REFERENCES Students (STU_Id);

ALTER TABLE users
  ADD CONSTRAINT fky_stf
  FOREIGN KEY (ID)
  REFERENCES Staff (STF_Id);

But with both constraints in place, the users table only allows a row to be inserted if the value is found in both tables (Students and Staff).

I need the insert to be allowed if the value is found in either table (Students or Staff).

"TASK MANAGER" process is taking over a single-user mode database. What is it?

Posted: 08 Jul 2013 08:34 PM PDT

This is a duplicate of the question I asked on stackoverflow, but I was advised, that someone here could have a better idea what is happening.

I have a sporadic problem, when upgrading SQL Server in a single-user mode, using .NET SqlConnection, some other application somehow logs into the database, while the SQL code is being executed, and kicks my process out. SqlConnection is not closed or disposed in any way. But some other application somehow ends up connected to the database and that kicks my connection out.

When I run sp_who, I could see that a process that took control of the database is Command="TASK MANAGER".

Could anyone tell me what this process is, what its purpose is, and how in the world it could get into a database that is in single-user mode and already has an active connection?

User login error when trying to access secured SQL Server database

Posted: 08 Jul 2013 01:51 PM PDT

We have a username that was recently renamed from one username to another (think getting married). The Active Directory admin renamed the user because "it has always worked in the past".

One vendor package we use uses the built-in MS SQL Server security. Each module has three groups:

  • xxView = View Only
  • xxUser = Add, Update rows (cannot delete)
  • xxAdmin = Delete rows

So we can add a person to one of these groups and they get the appropriate access. I don't have the actual error message in front of me anymore, but it said they are not authorized to access the table CriticalVendorTable. It worked before the rename. The admin removed the person from each group and re-added them; still no go. I even restarted the server and it still doesn't work. My best guess is that there is a UUID (or unique ID) somewhere that is causing problems.

The vendor's response is to delete the user and then re-add them. I have only had time to do some brief searching, but I found this page; AD User SID Mis-mapping. Would this be worth trying? Would it be better to just delete the user and recreate them?

hstore for versioning fields in postgresql

Posted: 08 Jul 2013 08:51 PM PDT

I have a PostgreSQL database that will hold about 50 tables, each of them having about 15 fields and at least 300,000 rows.

In order to track the changes made to each field, I am thinking of creating a table defined by:

CREATE TABLE fieldhistory(tableid int, fieldid int, id bigint, value hstore);
CREATE INDEX fieldhistory_index ON fieldhistory(tableid, fieldid, id);

I expect the table to keep growing, and to retrieve data from fieldhistory only by tableid, fieldid and id.

When the tables are modified I would add a new record with something like:

SELECT upserthistory(1,1,1,'first update','115435','3');
SELECT upserthistory(1,1,1,'second update','115435','3');

where upserthistory is defined by :

CREATE OR REPLACE FUNCTION upserthistory(key1 INT, key2 INT, key3 BIGINT, data TEXT, theuser TEXT, whend TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE fieldhistory SET value = value || ((whend||'.'||theuser) => data) WHERE tableid = key1 AND fieldid = key2 AND id = key3;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO fieldhistory(tableid,fieldid,id,value) VALUES (key1, key2, key3, hstore(array[whend||'.'||theuser,data]) );
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing, and loop to try the UPDATE again.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

Are there better approaches to accomplish this? The table would grow to at most around 225 million rows. Or would it be better to have 50 tables of at most 4.5 million rows each, noting that the actual change history of each field goes into the hstore?

SSIS Script Component - How to modify the Output0Buffer

Posted: 08 Jul 2013 01:14 PM PDT

I have a script component which accepts records from a SQL Azure database table. The script then invokes a web service, which returns the number of failed and successful records.

For all the records, I would like to add the Status field which has either "success" or "fail" and this gets output from script component.

I then log those output into a text file.

Problem: I have not been able to add the status for each input record, since the web service call only happens in PostExecute.

I tried this but still doesn't work:

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    listOfData.Add(new ClockData
    {
        TimeClockID = Row.TimeClockID,
        PersonID = Row.EmployeeCode,
        LocationCode = Row.ClockInServiceContextID,
        ClockInDateTime = Row.ClockInDateTime,
        ClockOutDateTime = Row.ClockOutDateTime
    });
}

public override void CreateNewOutputRows()
{
    MessageBox.Show("Test CreateNewOutputRows");
    MessageBox.Show(listOfData.Count.ToString());
    foreach (var item in listOfData)
    {
        Output0Buffer.AddRow();
        Output0Buffer.EmployeeCode = item.PersonID;
        MessageBox.Show(item.PersonID);
    }
}

Best practices on common person fields (Name, email, address, gender etc...)

Posted: 08 Jul 2013 12:55 PM PDT

What are the most common best practices for length and data type of common fields like:

  • First Name
  • Last Name
  • Address
  • Email
  • Sex
  • State
  • City
  • Country
  • Phone Number

etc....
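
For what it's worth, one common starting point (not an authoritative standard; every value below is an assumption open to debate) looks something like this:

CREATE TABLE person (
    person_id    INT PRIMARY KEY,
    first_name   VARCHAR(50)  NOT NULL,
    last_name    VARCHAR(50)  NOT NULL,
    email        VARCHAR(254) NOT NULL,  -- 254 is the practical RFC limit for an address
    sex          CHAR(1)      NULL,      -- or a lookup table if more values are needed
    address      VARCHAR(255) NULL,
    city         VARCHAR(100) NULL,
    state        VARCHAR(50)  NULL,
    country      CHAR(2)      NULL,      -- ISO 3166-1 alpha-2 code
    phone_number VARCHAR(20)  NULL       -- text, to keep leading zeros and '+'
);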
