[how to] MySQL datatype for username of maximum 15 characters
- MySQL datatype for username of maximum 15 characters
- How does Berkeley DB manage its files?
- Lock hints on preconditions
- Sql Server 2012 Extended Events for Selective Xml Indexing not showing results
- Can SQL Server 2008R2 SSIS packages access SQL Server 2012 databases with driver updates only?
- Indexes and Optimizations for Logging System
- Can't send PHP query to MySQL [migrated]
- SUSPECT in mdf files
- MySQL - Randomly long DESCRIBE queries
- RTF to Plain-Text? [on hold]
- Partitioning vs Indexes
- TempDB Version Store cleanup
- How to migrate data between "all-surrogate" databases?
- How can I search values, considering occurrences, on more than one term in SQL Server CE?
- Storing complex userdata
- How to change the default delimiter in PostgreSQL 9.1.9 using SET?
- Inserting images in database [on hold]
- Extract data to another computer
- How to do incremental/differential backup every hour in Postgres 9.1?
- Most efficient way to perform table operations (spec. drop) on tables having table names that match a certain criteria
- any documentation on tpcc-mysql benchmark tool
- How to make a convincing case to a DBA to use Oracle external tables for 500 MB to 5 GB files?
- MySQL matching one column to another in a different table via a FULLTEXT index
- Foreign key with multiple references
- "TASK MANAGER" process is taking over a single-user mode database. What is it?
- User login error when trying to access secured SQL Server database
- hstore for versioning fields in postgresql
- SSIS Script Component - How to modify the Output0Buffer
- Best practices on common person fields (Name, email, address, gender etc...)
MySQL datatype for username of maximum 15 characters Posted: 08 Jul 2013 09:20 PM PDT I currently have a database with the "user" table having username as one of the columns. I allow a maximum of 15 characters for the username in my system... Is it fine to have the username column as a varchar(15) datatype?
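A minimal sketch of such a column, assuming MySQL with InnoDB (table and index names are illustrative):

```sql
-- VARCHAR(15) limits length in characters, not bytes, so it still holds a full
-- 15-character name in a multi-byte charset such as utf8.
CREATE TABLE user (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
    username VARCHAR(15) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY ux_user_username (username)   -- usernames are typically unique
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```

Enforce the 15-character limit in the application as well, since depending on the SQL mode the server will either truncate or reject longer input.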
How does Berkeley DB manage its files? Posted: 08 Jul 2013 08:02 PM PDT I'm using Berkeley DB (BDB) as a persistent store for a JMS queue. When I consume an entry from the queue, the underlying BDB files do not immediately shrink, but do eventually. I'm running into issues with the BDB files taking up lots of space on the file system while retrieval performance degrades. My entry size varies considerably, but it is not uncommon to have 400,000 messages of around 32 KB each in the persistent queue. I'd like to understand how BDB manages the files so that I can throttle the number of entries for file size/retrieval performance, or so I can rule out BDB as my persistent store mechanism. I am probably searching for the wrong terms, but have not found what I'm looking for in the Oracle documentation or The Berkeley DB Book. I would not be surprised if BDB doesn't want me messing with its internals, but I would be surprised if (at least) an overview of how it does handle its internals is not available. Please forgive the misleading tag -- I didn't find ones for Berkeley, Sleepycat or BDB and I don't have the reputation to submit a new tag.
Lock hints on preconditions Posted: 08 Jul 2013 07:59 PM PDT Assuming for a moment that FK references are not available, what lock hints should be placed on the preconditions of the following example to prevent deadlocks and inconsistent data? (This is a contrived example, of course, to illustrate a scenario in which a delete/update can only be performed dependent upon the state of some other table.) Environment: SQL Server 2012
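The asker's example did not survive the digest, so the following is a generic illustration of the pattern rather than the original code (all table and variable names are hypothetical). The usual approach is to take UPDLOCK, HOLDLOCK on the precondition read so the checked rows cannot change or disappear before the transaction commits:

```sql
DECLARE @ParentId INT = 42;   -- hypothetical key

BEGIN TRANSACTION;

-- UPDLOCK takes an update lock instead of a shared lock; HOLDLOCK keeps a
-- serializable range lock until commit, so the precondition stays true.
IF EXISTS (SELECT 1
           FROM dbo.ParentStatus WITH (UPDLOCK, HOLDLOCK)
           WHERE ParentId = @ParentId AND Status = 'Open')
BEGIN
    DELETE FROM dbo.ChildRow
    WHERE ParentId = @ParentId;
END

COMMIT TRANSACTION;
```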
Sql Server 2012 Extended Events for Selective Xml Indexing not showing results Posted: 08 Jul 2013 02:09 PM PDT I created an Extended Events session to monitor and tune a Selective XML Index. In SSMS, when I view the session data, all I see is: Is there another step I need to accomplish before seeing the results? I followed: http://www.mssqltips.com/sqlservertip/2731/managing-sql-server-extended-events-in-management-studio/
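Since the session definition was stripped from the digest, one guess worth checking: an Extended Events session captures nothing until it is explicitly started; creating it in SSMS does not start it. A generic sketch (session name assumed):

```sql
-- Start the session; until it is in the started state, no events are collected.
ALTER EVENT SESSION [SelectiveXmlIndexTuning] ON SERVER STATE = START;

-- Verify it is running: only active sessions appear here.
SELECT name FROM sys.dm_xe_sessions;
```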
Can SQL Server 2008R2 SSIS packages access SQL Server 2012 databases with driver updates only? Posted: 08 Jul 2013 02:21 PM PDT We have two products running on SQL Server that we will be upgrading to SQL Server 2012 on a staggered schedule. Product B uses SSIS packages (using ODBC drivers) to extract data from Product A's database. The scenario would be to move Product A to SQL Server 2012 first, with Product B to follow in the coming months. Product B would continue to run in a 2008R2 environment while Product A would be moved to a new SQL Server 2012 environment. The hope is that there would be no need for package changes and that installing the new ODBC driver (Microsoft ODBC Driver 11) on the 2008R2 SSIS environment would allow the packages to execute against Product A. EDIT - The package would remain in the SQL Server 2008 format and would be using the SQL Server 2012 database as its source. Does anyone know if this is in fact viable, or are there known issues that would require package changes? Thank you for any help you can provide. Matt
Indexes and Optimizations for Logging System Posted: 08 Jul 2013 06:42 PM PDT I'm designing tables for a new log system and am having trouble figuring out what indexes I'll need (I'm a developer, not a DBA), and am also open to any suggestions/criticisms/etc. for making the tables as "fast" as possible (from my app's point of view). Background info: the DB itself is a MySQL instance hosted on AWS. It's a shared server and I don't have tuning privileges beyond basic table setup, indexes, key constraints, etc. This is a logging system, so very write-intensive. Although I will write scripts to dump certain log records (errors, etc.), there won't be much reading going on. The logging system will consist of two tables.
The problem child here is… When I do perform reads (for the purposes of analytics/debugging), these are the queries I plan on using the most (and hence perhaps a basis for creating indexes): So I ask:
Thanks in advance!
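Since the table definitions were stripped from the digest, here is a hypothetical sketch of a write-heavy MySQL log table with one covering index for the most common read (names and columns are assumptions, not the asker's schema):

```sql
CREATE TABLE log_entry (
    id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    logged_at DATETIME NOT NULL,
    level     TINYINT  NOT NULL,           -- e.g. 0 = debug ... 4 = error
    message   TEXT,
    PRIMARY KEY (id),                      -- monotonic key keeps inserts append-only
    KEY ix_level_time (level, logged_at)   -- serves "errors within a time range" scans
) ENGINE=InnoDB;
```

The general trade-off: every secondary index slows writes, so a write-intensive log table should carry only the indexes that the known read queries actually use.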
Can't send PHP query to MySQL [migrated] Posted: 08 Jul 2013 04:42 PM PDT When I try to send a query from PHP 5.4.6 (TS, VC9) to MySQL 5.5 (NTS, VC9) I get the message: Fatal error: Call to undefined function mysql_connect() After restarting Apache 2.2 (on Windows XP) it says: PHP Startup: mysql: Unable to initialize module. Module compiled with build ID=API20100525,NTS,VC9. PHP compiled with build ID=API20100525,TS,VC9. These options need to match. How can I solve this issue?
SUSPECT in mdf files Posted: 08 Jul 2013 01:42 PM PDT When I run ALTER DATABASE EMS102013 SET SINGLE_USER; DBCC CHECKDB (EMS102013) WITH NO_INFOMSGS, ALL_ERRORMSGS it shows the message: Msg 8921, Level 16, State 1, Line 1 Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors. How can I fix this problem?
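Msg 8921 frequently points at tempdb running out of space mid-check. A sketch of two things worth trying (database name taken from the question):

```sql
-- See how much tempdb space CHECKDB expects to need:
DBCC CHECKDB (EMS102013) WITH ESTIMATEONLY;

-- If tempdb space is the problem, TABLOCK skips the internal snapshot
-- (at the cost of taking table locks while the check runs):
DBCC CHECKDB (EMS102013) WITH NO_INFOMSGS, ALL_ERRORMSGS, TABLOCK;
```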
MySQL - Randomly long DESCRIBE queries Posted: 08 Jul 2013 01:07 PM PDT I have several DESCRIBE queries like this (running on MySQL 5.5), and they randomly take 5-10 seconds on my production servers. We have an ORM that caches this data once per day (per web server) and I'm wondering why I see these slow queries popping up from time to time. I'm thinking that I could just change the web servers to cache until their processes are restarted, but it'd be nice to know if there's an easy fix or something I could look into that may be causing this.
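One thing worth ruling out (an assumption, since the stripped query and server settings aren't visible): an undersized table definition cache forces MySQL to re-open .frm files, and a DESCRIBE can then stall behind disk I/O or metadata locks:

```sql
-- If Opened_table_definitions climbs steadily, the cache is too small.
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
SHOW GLOBAL STATUS    LIKE 'Opened_table_definitions';

SET GLOBAL table_definition_cache = 2048;   -- example value; size to your table count
```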
RTF to Plain-Text? [on hold] Posted: 08 Jul 2013 12:34 PM PDT Did you ever get the library from this post? We have a need to convert whole, large tables' worth of RTF information into something usable and would rather not reinvent if possible.
Partitioning vs Indexes Posted: 08 Jul 2013 04:04 PM PDT I'm very new to databases, so I apologize if my question is very basic... Anyways, I'm creating a table with what seems like a lot of data (500 million rows right now, and potentially twice as much in the future). Now, I need a way to access the data in this table quickly, so I'm researching partitions and indexes. However, I'm getting confused about when I'd want to create a partition vs. an index. I have three columns that seem like reasonable candidates for partitioning or indexing:
When running future selects against this table, it's likely that I'll be filtering on client id as well as wanting to do some sampling (which I would like to do by the "Token" variable). I may occasionally be filtering by the time variable as well. So, my question is this: how should I organize my table? Should I partition by Client and Token, and then create an index on time? Or just partition on client and create indices on time and token? And, more importantly, what is the logic behind the strategy you recommend? Also, after I've created the table, will the indexes break if I add more data into it (particularly new clients for the same date/token range)? Is recreating an index relatively simple? Thanks so much for your help, and please let me know if you need any more information from me.
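One plausible layout, sketched in MySQL syntax since the platform isn't stated (all names are assumptions): partition on the coarsest filter (client) and index the columns used for sampling and time filters.

```sql
CREATE TABLE events (
    client_id  INT      NOT NULL,
    token      INT      NOT NULL,
    created_at DATETIME NOT NULL,
    payload    VARCHAR(255),
    KEY ix_token_time (token, created_at)    -- sampling by token, optional time filter
)
PARTITION BY HASH (client_id) PARTITIONS 16; -- a client_id filter prunes to one partition
```

On the maintenance worry: B-tree indexes do not "break" as rows are added; they are maintained on every insert, at some write cost, and can be rebuilt if they fragment.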
TempDB Version Store cleanup Posted: 08 Jul 2013 04:17 PM PDT From what I can find, the version store will only clean up versions that are older than the oldest active transaction. Question: Is the oldest transaction database-specific, or will SQL Server keep all versions, regardless of the database, if there is an older transaction still active, period? Backstory - SQL Server 2005 SP4 Enterprise hosting around 40 databases. TempDB is currently 102 GB; the version store is around 98 GB. One of the applications hosted on the database maintains a connection until it is forced to exit (restart etc...) and the oldest transaction is 40 days old (last restart, based on sys.dm...database_transactions). Two separate large databases had extremely heavy use over the last month and we saw consistent TempDB growth coinciding with these operations. We expected some growth. We did not expect it to keep growing. Question: Are the versions stored in TempDB's version store from these two separate databases still there because a third independent database has a connection that is 40 days old and shows an active transaction_state? Perfmon counters: the version store has grown continually in the few hours I have tracked it this morning. Version Generation Rate AVG is around 30 KB/s; Version Cleanup Rate is 0 KB/s. There is plenty of space left for TempDB; there are around 300 GB of total data files for all user databases, and TempDB has grown on average 350 MB per day for each of its 8 data files since the last restart. Answers to comment questions, so as not to have a long-running comment section: Q: Why auto-growth on tempdb? A: It's a cluster that supports important stuff. Through trial and error we found 8 GB sufficient to meet our needs and allow tempdb to come online quickly in the event of failover. We monitor auto-growth; we saw some that we expected, then saw more, did a large increase, then saw more, etc... We believe the version store is not getting cleaned properly, so... here we are. Q: How do you know the transaction is active and not just an active connection? A: transaction_state says active in sys.dm_tran_active_snapshot_database_transactions and other stuff. It will periodically run a query, according to Activity Monitor, when I catch it right. It is of course vendor-written and the super user doesn't know how it works. Which leads back to the main question: is this the culprit, or am I barking up the wrong tree? Referenced question: find-transactions-that-are-filling-up-the-version-store
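A sketch of the two DMV checks that usually settle this kind of question (both exist in SQL Server 2005):

```sql
-- Oldest snapshot transactions: the top row is what blocks version-store cleanup.
SELECT TOP (5) transaction_id, session_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;

-- Which databases the retained versions belong to.
-- Warning: this view materializes the version store, so it is expensive at 98 GB.
SELECT database_id, COUNT(*) AS version_rows
FROM sys.dm_tran_version_store
GROUP BY database_id;
```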
How to migrate data between "all-surrogate" databases? Posted: 08 Jul 2013 04:30 PM PDT With some periodicity this needs to be done:
Note: the databases are relational. The problem is:
What's the easiest way of copying rows from one database to another when all tables have surrogate keys?
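A sketch of the usual technique, assuming a natural/business key exists to line rows up (all schema and column names are hypothetical): build an old-to-new id map first, then rewrite foreign keys through it.

```sql
-- Map each source surrogate id to the matching target id via a natural key.
CREATE TEMPORARY TABLE id_map AS
SELECT s.id AS old_id, t.id AS new_id
FROM   source_db.customer s
JOIN   target_db.customer t ON t.email = s.email;   -- natural key assumed

-- Copy dependent rows, translating the surrogate foreign key as we go.
INSERT INTO target_db.orders (customer_id, placed_at, total)
SELECT m.new_id, o.placed_at, o.total
FROM   source_db.orders o
JOIN   id_map m ON m.old_id = o.customer_id;
```

Without any natural key at all, the mapping has to be captured at insert time instead (e.g. insert parents one batch at a time and record the generated ids).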
How can I search values, considering occurrences, on more than one term in SQL Server CE? Posted: 08 Jul 2013 02:58 PM PDT I have a rather complex SQL query that I actually received help from this SE site to come up with parts of. This query is designed to take a search term and search all relevant values in a database (this database stores all the relevant information for my partially CMS-ish site). It is a query that uses both… While I know that the… Okay, enough talk; here's the query I have so far (works perfectly with only a single search term, but again, I need this to be dynamic enough to search on virtually any number of parameters): I guess the question, in short, is how can I (or is it even possible to) still get the… ADDITIONAL INFORMATION, IF NEEDED OR CURIOUS ABOUT ANY OF THE ABOVE: There is probably a little bit of overhead using this query, but I really don't mind that a whole lot, honestly. I am way more worried about getting the functionality I need into a single query (which may not even be possible considering my needs, I guess). Notice the (@0), which is just a placeholder used for parameterized queries (this is probably common knowledge, but just thought I would throw that out there just in case) where the search term is later thrown in by C# as an additional argument in the function that actually queries the database (again, probably obvious, but this is a common anti-SQL-injection tactic, given my environment, if not many others). Right now, the… I am in a C#.net WebMatrix with web-pages environment. (Effectively this means I am using SQL Server CE.) I apologize if this question is a little confusing and am more than happy to clarify anything needed. UPDATE I am now more scared than ever that this may not be possible, as I have just discovered that using conditional logic (i.e., IF ... ELSE in my case) isn't supported by SQL Server CE. (hangs head in disappointment...) UPDATE AGAIN It seems that, considering the limitations of SQL Server CE and my own requirements, this isn't entirely possible. SQL Server CE is still powerful, but just not as powerful as its bigger, more complete brother. I believe, though, for my own environment, I may indeed have found a way to complete what's left of my task using my server-side language to manipulate the compilation of the SQL query string. As long as I end up with one query in the end, I can still use OFFSET and FETCH for pagination in the way I was taught. I just want to thank any who looked this over and tried to help. Now, come on, C#, we have some work to do...
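Since the original query was stripped, here is a hedged sketch of one way to rank rows by how many of N terms match in SQL Server CE, using only constructs CE supports (CASE rather than IF/ELSE; table, column, and parameter names are assumptions):

```sql
SELECT Id, Title,
       (CASE WHEN Title LIKE @0 THEN 1 ELSE 0 END
      + CASE WHEN Title LIKE @1 THEN 1 ELSE 0 END) AS Hits   -- one CASE arm per term
FROM   Pages
WHERE  Title LIKE @0 OR Title LIKE @1
ORDER  BY Hits DESC
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;   -- CE 4.0 pagination
```

The CASE arms would be generated server-side (in C#) for however many terms the user supplies, which matches the string-building approach described in the last update.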
Storing complex userdata Posted: 08 Jul 2013 03:19 PM PDT Assume a tool where one can create a user account, then store 'fields' for that account. Each field can contain zero or more 'sub-fields' and each of those can contain zero or more strings. I wonder how a professional developer would design a database structure for such a use case. My approach would be:
But somehow I think that this won't be the best way to solve this, and I'm having a giant brain fart. I would greatly appreciate any suggestions.
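A sketch of the textbook normalized layout for exactly this shape (MySQL syntax; all names assumed): one table per level, each pointing at its parent.

```sql
CREATE TABLE account (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(50) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE field (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    account_id INT NOT NULL,
    name       VARCHAR(64) NOT NULL,
    FOREIGN KEY (account_id) REFERENCES account (id)
) ENGINE=InnoDB;

CREATE TABLE subfield (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    field_id INT NOT NULL,
    name     VARCHAR(64) NOT NULL,
    FOREIGN KEY (field_id) REFERENCES field (id)
) ENGINE=InnoDB;

CREATE TABLE subfield_value (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    subfield_id INT NOT NULL,
    value       VARCHAR(255) NOT NULL,
    FOREIGN KEY (subfield_id) REFERENCES subfield (id)
) ENGINE=InnoDB;
```

"Zero or more" falls out naturally: an account with no fields simply has no child rows.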
How to change the default delimiter in PostgreSQL 9.1.9 using SET? Posted: 08 Jul 2013 08:05 PM PDT From within a bash script I tried to use a replace and a detection on a field containing a pipe. The data itself was loaded into the DB using the COPY statement. I am using psql (9.1.9) and the default field separator is the pipe. Ideally I would like to SET the default delimiter in a CREATE OR REPLACE function at runtime, or as a separate statement just like SET config_param TO '#'; I searched the documentation and the internet for the right name of the config parameter without luck. Any help appreciated.
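For what it's worth, the field separator is a psql client setting, not a server configuration parameter, which is likely why no SET name turns up. A sketch (table and path assumed):

```sql
-- Inside psql (meta-commands, not SQL):
--   \pset fieldsep '#'          change the separator for unaligned output
-- Or from bash:
--   psql -A -F '#' -c 'SELECT ...'

-- For loading, the delimiter is set per COPY statement:
COPY my_table FROM '/tmp/data.txt' (FORMAT csv, DELIMITER '|');
```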
Inserting images in database [on hold] Posted: 08 Jul 2013 02:28 PM PDT How can images be inserted in a table? VARBINARY is not working in the SQL command prompt; what could the syntax be?
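Assuming SQL Server (since VARBINARY is mentioned), a sketch of loading an image file into a VARBINARY(MAX) column from a query window (table name and path are illustrative):

```sql
CREATE TABLE pictures (
    id  INT IDENTITY PRIMARY KEY,
    img VARBINARY(MAX) NOT NULL
);

INSERT INTO pictures (img)
SELECT BulkColumn
FROM OPENROWSET(BULK N'C:\temp\photo.jpg', SINGLE_BLOB) AS b;  -- reads the file as one blob
```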
Extract data to another computer Posted: 08 Jul 2013 12:46 PM PDT I have SQL Server 2008 R2 Express Edition, and I want to export the data in that database to another computer so I can import it again there. How, please? What I have tried: Right-click on the database -> Tasks -> Export -> set server name + password + set database source. My problem: I found that I have to set the destination, but I didn't find the destination as a file or something like that; I only found that the destination is the same database which I am extracting the data from. Help please; thanks in advance.
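The simplest route for moving a whole database is a backup file, which Express supports; a sketch with assumed names and paths:

```sql
-- On the source server:
BACKUP DATABASE MyDb TO DISK = N'C:\temp\MyDb.bak' WITH INIT;

-- Copy MyDb.bak to the other computer, then on the destination server:
RESTORE DATABASE MyDb FROM DISK = N'C:\temp\MyDb.bak'
WITH MOVE 'MyDb'     TO N'C:\Data\MyDb.mdf',     -- logical file names assumed;
     MOVE 'MyDb_log' TO N'C:\Data\MyDb_log.ldf'; -- check with RESTORE FILELISTONLY
```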
How to do incremental/differential backup every hour in Postgres 9.1? Posted: 08 Jul 2013 04:39 PM PDT Trying to do an hourly hot incremental backup of a single Postgres server to S3. I have the following setup in postgresql.conf: I did a base backup with… which made a big base.tar file in the archive folder and added some long-file-name files, which I assume are the WALs.
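With archive_mode on and an archive_command set (the asker's postgresql.conf excerpt was stripped, so the standard setup is assumed here), those long-named files are indeed WAL segments, and continuous archiving is the incremental backup: each completed 16 MB segment contains only the changes since the previous one. To guarantee at least one segment ships per hour, a cron job can force a switch:

```sql
-- Force the current WAL segment to complete so archive_command picks it up.
SELECT pg_switch_xlog();   -- 9.1 name; renamed to pg_switch_wal() in PostgreSQL 10
```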
Most efficient way to perform table operations (spec. drop) on tables having table names that match a certain criteria Posted: 08 Jul 2013 07:55 PM PDT I am performing a schema dump of certain tables. This is accompanied by dropping the tables that will be dumped. For example, I need to drop all tables that start with, let's say, "SRC_". Can you guys think of a more efficient way of dropping those tables than the script below? Thanks.
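The asker's script was stripped from the digest; as a sketch of the usual dynamic-SQL loop, assuming Oracle (the platform isn't stated):

```sql
BEGIN
  FOR t IN (SELECT table_name
            FROM   user_tables
            WHERE  table_name LIKE 'SRC\_%' ESCAPE '\')   -- escape the underscore wildcard
  LOOP
    EXECUTE IMMEDIATE 'DROP TABLE "' || t.table_name || '" CASCADE CONSTRAINTS';
  END LOOP;
END;
/
```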
any documentation on tpcc-mysql benchmark tool Posted: 08 Jul 2013 01:01 PM PDT Where can I find tpcc-mysql documentation? I googled it but I can't find any documentation about it.
How to make a convincing case to a DBA to use Oracle external tables for 500 MB to 5 GB files? Posted: 08 Jul 2013 04:40 PM PDT In our data feeds, we have a stack of XML files as well as numerous flat files to put into an Oracle 11g database. We flatten the XML into delimited files and load the whole set using SQL*Loader. I want to try a proof of concept loading via an externally organized table (ORGANIZATION EXTERNAL), but I need to make a convincing case to the DBAs that it won't do something evil to the server. The only plausible test files I have are 400-600 MB; in production we'd add some multi-gigabyte files. What are the risks and how should I approach them? Any ideas? Thank you, Andrew Wolfe
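A proof-of-concept external table is small and self-contained, which may itself help the case; a sketch with assumed directory, column, and file names:

```sql
-- The DBA creates the directory object once and grants access:
--   CREATE DIRECTORY feed_dir AS '/data/feeds';
--   GRANT READ, WRITE ON DIRECTORY feed_dir TO feed_loader;

CREATE TABLE src_feed_ext (
    id      NUMBER,
    payload VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY feed_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
    )
    LOCATION ('feed_file.dat')
)
REJECT LIMIT UNLIMITED;
```

A point in its favor for the DBA discussion: an external table only reads the file at query time, so the main costs are read I/O and parse CPU during the load query, not storage inside the database.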
MySQL matching one column to another in a different table via a FULLTEXT index Posted: 08 Jul 2013 07:57 PM PDT I currently have two different tables. The first table has a list of titles and IDs associated with these titles; the second table is a list of random headings. What I would like to know is if there is a way to match up all the titles in table2 to the closest matching title in table1. Is this possible? I've tried:
But that did not work. I know I could run this query as each title is being put into table2 with PHP, but I already have a lot of titles in the database. Any help would be amazing. Thanks.
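One relevant constraint, which may be why the attempt failed: MySQL's AGAINST() only accepts a constant string, not a column, so a single join between the two tables cannot be written this way; the match has to be driven per heading (e.g. from PHP). A sketch of the per-heading lookup, assuming a FULLTEXT index on table1.title (MyISAM, since this is 5.5-era MySQL):

```sql
-- Run once per heading from table2, binding the heading text as the constant:
SELECT t1.id, t1.title,
       MATCH (t1.title) AGAINST ('some heading text') AS score
FROM   table1 t1
WHERE  MATCH (t1.title) AGAINST ('some heading text')
ORDER  BY score DESC
LIMIT 1;
```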
Foreign key with multiple references Posted: 08 Jul 2013 05:08 PM PDT I have the following three tables in a MySQL database named "THE_COLLEGE". If a new row is to be inserted into the users table, that row must be found in the students table or the staff table; that is, a staff member or a student can be a user. Is it possible? I tried the following foreign keys, but the users table only allows a row to be inserted if the row is found in both tables (Student and Staff). I need to allow the insert if the row is found in either table (Student or Staff).
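MySQL foreign keys are always ANDed together, so OR semantics need a trigger instead. A sketch with assumed table and column names (SIGNAL requires MySQL 5.5+):

```sql
DELIMITER //
CREATE TRIGGER users_check_origin
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
  -- Reject the insert unless the id exists in at least one of the two tables.
  IF NOT EXISTS (SELECT 1 FROM students WHERE person_id = NEW.person_id)
     AND NOT EXISTS (SELECT 1 FROM staff WHERE person_id = NEW.person_id) THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'user must exist in students or staff';
  END IF;
END//
DELIMITER ;
```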
"TASK MANAGER" process is taking over a single-user mode database. What is it? Posted: 08 Jul 2013 08:34 PM PDT This is a duplicate of the question I asked on stackoverflow, but I was advised, that someone here could have a better idea what is happening. I have a sporadic problem, when upgrading SQL Server in a single-user mode, using .NET SqlConnection, some other application somehow logs into the database, while the SQL code is being executed, and kicks my process out. SqlConnection is not closed or disposed in any way. But some other application somehow ends up connected to the database and that kicks my connection out. When I run sp_who, I could see that a process that took control of the database is Command="TASK MANAGER". Anyone could tell me what is this process, what is it's purpose, and how in the world it could get into a database, which is in single-user mode, and there is an active connection? |
User login error when trying to access secured SQL Server database Posted: 08 Jul 2013 01:51 PM PDT We have a username that was recently renamed from one username to another (think getting married). The Active Directory admin renamed the user because "it has always worked in the past". One vendor package we use relies on the built-in SQL Server security. Each module has three groups:
So we can add a person to one of these groups and they get the appropriate access. I don't have the actual error message in front of me anymore, but it said that they are not authorized for the table CriticalVendorTable. It worked before the rename. The admin removed the person from each group and re-added them; still no go. I even restarted the server and it still doesn't work. My best guess is that there is a UUID (or unique ID) somewhere that is causing problems. The vendor's response is to delete the user and then re-add them. I have only had time to do some brief searching, but I found this page: AD User SID Mis-mapping. Would this be worth trying? Would it be better to just delete the user and recreate them?
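The SID-mismatch theory is easy to test before deleting anything; a sketch with an assumed login name:

```sql
-- The SID SQL Server stored for the login when it was created:
SELECT name, sid FROM sys.server_principals WHERE name = N'DOMAIN\RenamedUser';

-- The SID Active Directory reports for that account right now:
SELECT SUSER_SID(N'DOMAIN\RenamedUser');
```

If the two values differ, the rename left a stale mapping, and dropping and recreating the login (then remapping any database users) is the standard fix.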
hstore for versioning fields in postgresql Posted: 08 Jul 2013 08:51 PM PDT I have a PostgreSQL database that will hold about 50 tables, each of them having about 15 fields, with at least 300,000 rows in each table. In order to track the changes made to each field I am thinking of creating a table defined by: I would expect the table to grow and grow, and to retrieve from… When the tables are modified I would add a new record with something like:… Are there better approaches to accomplish this? The table would grow to at most 225 million rows. Or would it be better to have 50 tables of at most 4.5 million rows each, noting that the actual trace of each field will go into the hstore?
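Since the table definition was stripped from the digest, a sketch of the single-audit-table variant (names assumed):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE field_history (
    id         BIGSERIAL PRIMARY KEY,
    table_name TEXT        NOT NULL,
    row_pk     BIGINT      NOT NULL,
    changed_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    changes    HSTORE      NOT NULL   -- e.g. 'price=>"10.00", status=>"shipped"'
);

-- The change trail for one row:
SELECT changed_at, changes
FROM   field_history
WHERE  table_name = 'orders' AND row_pk = 42
ORDER  BY changed_at;
```

An index on (table_name, row_pk, changed_at) keeps that retrieval cheap even at a few hundred million rows, which weighs in favor of one table over 50 per-table copies.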
SSIS Script Component - How to modify the Output0Buffer Posted: 08 Jul 2013 01:14 PM PDT I have a script component which accepts records from a SQL Azure database table. The script then invokes a web service, which returns the number of failed and successful records. For all the records, I would like to add a Status field which has either "success" or "fail", and have this output from the script component. I then log that output into a text file. Problem: I have not been able to add a status for each input record, since the web service call only happens in PostExecute. I tried this, but it still doesn't work:
Best practices on common person fields (Name, email, address, gender etc...) Posted: 08 Jul 2013 12:55 PM PDT What are the most common best practices on length and data type for common fields like:
etc.
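A sketch of commonly seen sizings (these are conventions and one RFC-derived limit, not hard rules):

```sql
CREATE TABLE person (
    id          BIGINT PRIMARY KEY,
    given_name  VARCHAR(100) NOT NULL,
    family_name VARCHAR(100) NOT NULL,
    email       VARCHAR(254),   -- 254 is the practical maximum email length (RFC 5321 + erratum)
    gender      CHAR(1),        -- or an ISO 5218 numeric code / lookup table
    address     VARCHAR(255)    -- or a separate address table for multi-line/international forms
);
```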