[how to] Convert the IP Address range to two BIGINT for faster lookup
- Convert the IP Address range to two BIGINT for faster lookup
- Why is this user "Admin" created in all databases?
- IP Address lookup star schema design
- Postgres copy data with \xYY as plain string instead of interpreting as encoded string
- How to make Greenplum 4.2.3 only scan the intended partition?
- In SSIS 2012, what is the purpose of flagging a package as an "Entry Point" package
- Time as a measure
- Blocking incoming database links
- Stop SQL Server service(s) before defragmenting drive?
- Representing SQL constraints on a table
- How to read this output from MySQL?
- Use single table with foreign key or two twin tables for text entries?
- What index to add in case of join by two optional fields
- Granting permissions only on a set list of objects
- What fillfactor for caching table?
- Cannot connect from the DMZ to a named instance of SQL Server
- PostgreSQL Sequential Scan instead of Index Scan Why?
- Oracle 11g listener fails with ORA-12514 and ORA-12505 errors
- What is the best way to store X509 certificate in PostgreSQL database?
- Most efficient way to sort data fields into SQL
- How should I tune Postgresql for 20 GB of RAM?
- Efficient way to move rows across the tables?
- Write differences between varchar and nvarchar
- How to properly kill MySQL?
- Why would mysql "show global status" query be taking 15 minutes?
- Is there a way to export Oracle's UNDO?
- MySQL-5.5 InnoDB memory issue
- MySQL table relations, inheritance or not?
- When should I think about upgrading our RDS MySQL instance based on memory usage?
Convert the IP Address range to two BIGINT for faster lookup Posted: 11 Mar 2013 10:01 PM PDT I am working on a project in which we need to do Problem Statement: I am planning to store the So my question is: we will be having separate I am confused: given an IP Address, how should I devise
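The code in the original post was stripped from this digest, so the following is only a minimal sketch of the BIGINT conversion the title describes; the table, the column names, and the use of MySQL's INET_ATON() are assumptions, not the asker's schema:

```sql
-- Hypothetical illustration: store each IPv4 range as two BIGINT columns.
CREATE TABLE ip_ranges (
    start_ip BIGINT NOT NULL,          -- numeric form of the range start
    end_ip   BIGINT NOT NULL,          -- numeric form of the range end
    country  VARCHAR(2),
    PRIMARY KEY (start_ip, end_ip)
);

-- MySQL's INET_ATON() converts a dotted quad to a number:
-- a.b.c.d = a*16777216 + b*65536 + c*256 + d
INSERT INTO ip_ranges VALUES (INET_ATON('10.0.0.0'), INET_ATON('10.0.0.255'), 'US');

-- Lookup: convert the incoming address once, then use a range predicate
-- against the numeric columns.
SELECT country
FROM   ip_ranges
WHERE  INET_ATON('10.0.0.42') BETWEEN start_ip AND end_ip;
```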
Why is this user "Admin" created in all databases? Posted: 11 Mar 2013 09:40 PM PDT When I create a database in SQL Server Management Studio, a user named "admin" is also created in all databases: Why is this user created? And how can I change this behavior?
IP Address lookup star schema design Posted: 11 Mar 2013 09:04 PM PDT I am working on a project in which we need to do Problem Statement: We are expecting traffic around And this will be a worldwide dataset, meaning for all countries. And we are planning to store these datasets in Now my question is: should I create only a Basically, I am trying to work out how I should set up the schema for this table so that lookups don't take much time with the traffic we are expecting, and so our lookup service returns responses very fast. I was going through the star schema, so if I need to go forward with a Star Schema, how can I do that?
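The schema in the original post was stripped from this digest, so this is only an illustrative star-schema sketch for IP range lookups; every table and column name below is an assumption:

```sql
-- Dimension holding the descriptive attributes.
CREATE TABLE dim_geolocation (
    geo_key INT PRIMARY KEY,
    country VARCHAR(64),
    region  VARCHAR(64),
    city    VARCHAR(64)
);

-- Fact table keyed by the numeric IP range.
CREATE TABLE fact_ip_range (
    start_ip BIGINT NOT NULL,
    end_ip   BIGINT NOT NULL,
    geo_key  INT NOT NULL,
    PRIMARY KEY (start_ip, end_ip),
    FOREIGN KEY (geo_key) REFERENCES dim_geolocation (geo_key)
);

-- A lookup resolves the range first, then joins to the dimension.
SELECT d.country, d.region, d.city
FROM   fact_ip_range f
JOIN   dim_geolocation d ON d.geo_key = f.geo_key
WHERE  167772202 BETWEEN f.start_ip AND f.end_ip;   -- numeric form of 10.0.0.42
```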
Postgres copy data with \xYY as plain string instead of interpreting as encoded string Posted: 11 Mar 2013 09:24 PM PDT I have a log file full of URLs, generated by Bro IDS. When Bro logs a URL with non-ASCII characters in it, it inserts \xYY where YY is the hexadecimal character code. Also, some URLs contain "\x". Is there a setting or flag I can use with the
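The COPY command in the original post was stripped from this digest. As a hedged illustration only: PostgreSQL's default text format for COPY interprets backslash escapes such as \xYY, while the CSV format does no backslash processing, which is one way to load such strings verbatim. The table, columns, and file path below are assumptions:

```sql
COPY bro_http_log (ts, host, uri)
FROM '/tmp/http.log'
WITH (FORMAT csv, DELIMITER E'\t');
-- If fields can themselves start with a double quote, the QUOTE character may
-- also need to be set to something that never occurs in the data.
```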
How to make Greenplum 4.2.3 only scan the intended partition? Posted: 11 Mar 2013 07:53 PM PDT When I use unnest() in a View and use that View in a select statement, Greenplum seems to fail to search only the intended partition and searches through all the partitions of the main table instead. The same thing also applies when using a Subquery instead of a View. For example: We currently have 2 different servers running 2 different versions of Greenplum. Server A runs an older version (4.2.1) while Server B runs 4.2.3. Running the same query above gives different results: Server A (old) will return the query in a few seconds while Server B (new) will take forever to return. Running an Explain of the query shows that Server A only does a scan on one of the partitions (with the date and state in the where clause) while Server B does a scan on each partition, causing the slowness. The table structure for both DBs is the same. Running a query without the unnest does not have the problem. So I suspect it has something to do with the new version. Is there anything I can do to solve this problem?
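The original query and EXPLAIN output were stripped from this digest; the sketch below only reconstructs the general shape being described (a view with unnest() plus a partition key in the WHERE clause), with all names invented for illustration:

```sql
-- Illustrative only: a view that unnests an array column of a partitioned table.
CREATE VIEW v_events AS
SELECT event_date, state, unnest(tag_array) AS tag
FROM   events_partitioned;             -- assumed to be partitioned by event_date

-- With partition elimination working, only the 2013-03-01 partition should be scanned.
SELECT tag, count(*)
FROM   v_events
WHERE  event_date = DATE '2013-03-01'
  AND  state = 'CA'
GROUP BY tag;
```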
In SSIS 2012, what is the purpose of flagging a package as an "Entry Point" package Posted: 11 Mar 2013 03:26 PM PDT In the Visual Studio designer you can right-click on an SSIS package and designate it as an "Entry Point" package. Doing a search I found this page on MSDN, which states:
With this flag enabled and disabled I have been able to execute a package directly. What is the purpose of enabling or disabling this flag? Is it merely to document the intentions of your own SSIS packages, or does SQL Server/SSIS behave differently when it is enabled or disabled?
Time as a measure Posted: 11 Mar 2013 06:51 PM PDT Is it possible to have times as measures in a cube? We're trying to view employee start times by day, aggregating as an average over time, but even with a No Aggregation measure type I'm getting an error when deploying saying that StartTime is a String value. Is this at all possible? It doesn't seem like such a crazy thing to want to do...
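A hedged workaround sketch (assuming SQL Server 2008 or later and invented table/column names): expose the start time to the cube as an integer number of seconds since midnight, which can be averaged like any numeric measure and formatted back to a time for display:

```sql
SELECT employee_id,
       shift_date,
       DATEDIFF(SECOND, CAST('00:00:00' AS TIME), CAST(start_time AS TIME)) AS start_seconds
FROM   dbo.EmployeeShifts;
```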
Blocking incoming database links Posted: 11 Mar 2013 02:03 PM PDT Let's say there is an Oracle database Is there a way to block database links from
Stop SQL Server service(s) before defragmenting drive? Posted: 11 Mar 2013 02:32 PM PDT Our production SQL Server 2005 database's data files live on a separate physical drive, which Microsoft Windows 2003's Disk Defragmenter tool reports as 99% fragmented. We scheduled a task to defragment this drive at 3:00 a.m. on a Saturday morning. The job completed after 40 minutes with no apparent errors. However, the drive remains heavily fragmented. Should we have stopped the SQL Server service(s) before defragmenting? CONTEXT Per requests for context: We have a Microsoft SQL Server 2005 instance (9.00.5324.00) running on 32-bit Windows Server 2003 (SP2) on Dell PowerEdge 2950 hardware, circa 2007, with 4GB RAM. The PowerEdge 2950 has four 68GB drives configured as RAID-1 to create two 68GB virtual disks: (1) C (boot and OS) & D (pagefile, miscellaneous other data); and (2) E (SQL data). To my knowledge, IT staff have never defragmented any of these drives...Disk Defragmenter reports file fragmentation of 66% (C), 77% (D), and 99% (E). Performance Monitor reports the following average results: "Paging file: % usage" = ~6.8%; "SQL Server: Buffer Manager - Page life expectancy" = 20 seconds; and "PhysicalDisk: Avg. disk sec/write, drive E" = between 300 and 1,100 ms. We're due for a much-needed hardware and SQL Server upgrade in a few months' time (viz., new hardware, 64-bit Windows Server 2012, 64-bit SQL Server 2012, 12GB RAM), but, due to end-user performance issues, we want to alleviate the problem as much as possible now. Thus the thinking that a file defrag might help for drive E, the main SQL data drive. As an aside, last week we pulled two failed drives and rebuilt the array...not sure that matters. We contract with another IT team to maintain the server, so we do not have direct access to the equipment...our organization just pays for services. We can afford the downtime during regularly scheduled maintenance windows (weekly) as well as out-of-band downtime, as necessary, overnight.
Representing SQL constraints on a table Posted: 11 Mar 2013 08:21 PM PDT I have this table: This table represents parts sold each day; the constraint says the number of sales should be at least 25 and at most 100. I think it should start with something like this:
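The table definition was stripped from this digest, so the names below (daily_sales, parts_sold) are assumptions; a declarative way to express the rule described is a CHECK constraint:

```sql
ALTER TABLE daily_sales
  ADD CONSTRAINT chk_parts_sold_range
  CHECK (parts_sold BETWEEN 25 AND 100);
```

Note that a plain CHECK only validates each row on INSERT/UPDATE; if the rule is really "at least 25 sales rows per day", it needs a trigger or procedural enforcement instead.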
How to read this output from MySQL? Posted: 11 Mar 2013 01:39 PM PDT Query that is being run: Error that is being thrown: SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '41721' for key 1 Where does
Use single table with foreign key or two twin tables for text entries? Posted: 11 Mar 2013 06:38 PM PDT I have a table The content column is a varchar, since we want to be able to search through the answer content quickly. However, we also want to be able to store full text as answers. There are two approaches: A. Using two "twin" tables (with the same column names): a table with the column B. Using a single This approach seems cleaner; however, it means doing two UPDATEs and INSERTs for each entry modification and a JOIN on the two tables to pull out the information. In general, is it bad practice to have "twin" tables in a database? Thanks!
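The actual table definitions were stripped from this digest; purely as a hedged illustration of option B (one searchable answers table plus an overflow table for long text), with all names assumed:

```sql
CREATE TABLE answers (
    answer_id   INT PRIMARY KEY,
    question_id INT NOT NULL,
    content     VARCHAR(500)            -- short, searchable content
);

CREATE TABLE answers_fulltext (
    answer_id INT PRIMARY KEY,
    content   TEXT,                     -- populated only when the text exceeds the varchar limit
    FOREIGN KEY (answer_id) REFERENCES answers (answer_id)
);

-- Reading an answer then costs at most one join:
SELECT a.answer_id, COALESCE(f.content, a.content) AS content
FROM   answers a
LEFT JOIN answers_fulltext f ON f.answer_id = a.answer_id
WHERE  a.question_id = 42;
```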
What index to add in case of join by two optional fields Posted: 11 Mar 2013 01:09 PM PDT I have a query similar to the one below; it joins two tables by fields which can have NULL values. The result matches the data if both tables have the same data, or treats the condition as optional. The best index for table B I could think of is one which It helps a bit, and from the execution plan I can see the index is used, but 90% of the cost is spent in Nested Loops (Inner Join). Is there any way to get this query working faster?
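Since the real query and index definitions were stripped from this digest, the following is only a guess at the shape being described, with invented names; it shows a composite index plus the OR-style "optional match" join:

```sql
CREATE INDEX ix_b_field1_field2 ON dbo.B (field1, field2);

SELECT a.id, b.some_value
FROM   dbo.A a
JOIN   dbo.B b
  ON  (b.field1 = a.field1 OR b.field1 IS NULL)
  AND (b.field2 = a.field2 OR b.field2 IS NULL);
```

With OR conditions like these the optimizer often ends up scanning B inside the nested loop; rewriting the join as a UNION ALL of the exact-match and NULL-match branches sometimes lets each branch seek on the index instead.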
Granting permissions only on a set list of objects Posted: 11 Mar 2013 06:36 PM PDT I have a SQL Server 2005 database with a large number of tables in the dbo schema. I now created a new schema (call it myschema) that only has three table-valued functions and two stored procedures in it. All of that code has access to the tables in dbo. The code in myschema will ultimately be called from a web service, and I am struggling to get the permissions right for the user I created for the web service. At first, I created the user with no roles except public and then gave it specific permissions on the securables in myschema. But then I could log on using that user and select from (and even update) anything in dbo. So I gave the user the denydatareader and denydatawriter roles, which effectively restricted the access to the objects in dbo. The result of this is that I can execute the two stored procedures just fine. But if I try to use the table-valued functions, I get this error:
This is despite my use of: I'm guessing that's because of my brilliant use of denydatareader. So what is the correct way to give a user access only to a list of specific stored procedures and table-valued functions and not to anything else?
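Not the asker's actual script (that part of the post was stripped), just a hedged sketch of granting only the objects in myschema; the user, login, procedure, and function names are placeholders:

```sql
CREATE USER websvc_user FOR LOGIN websvc_login;   -- no fixed database roles

-- Stored procedures need EXECUTE:
GRANT EXECUTE ON OBJECT::myschema.usp_ProcOne TO websvc_user;
GRANT EXECUTE ON OBJECT::myschema.usp_ProcTwo TO websvc_user;

-- Table-valued functions are referenced in FROM clauses, so they need SELECT:
GRANT SELECT ON OBJECT::myschema.tvf_FuncOne   TO websvc_user;
GRANT SELECT ON OBJECT::myschema.tvf_FuncTwo   TO websvc_user;
GRANT SELECT ON OBJECT::myschema.tvf_FuncThree TO websvc_user;
```

With only these explicit grants (and no db_denydatareader membership, whose DENY is what blocks the SELECT on the functions), the user gets nothing in dbo directly, while ownership chaining lets the myschema modules read the dbo tables as long as the objects share the same owner.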
What fillfactor for caching table? Posted: 11 Mar 2013 07:31 PM PDT I have a heavily updated/accessed table where I store serialized Java objects. They are in the table for 2-3 hours (and are updated during that period) and then removed. The size of the table is around 300MB. I have noticed it is VACUUMed very, very often and wonder if changing the
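A hedged example of what changing the setting would look like (the table name is a placeholder); a lower fillfactor leaves free space in each page so updates can stay on the same page, at the cost of a larger table on disk:

```sql
ALTER TABLE cached_objects SET (fillfactor = 50);
-- Existing pages keep their current layout; only pages written afterwards
-- honor the new setting, so the effect appears gradually (or after a rewrite).
```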
Cannot connect from the DMZ to a named instance of SQL Server Posted: 11 Mar 2013 04:31 PM PDT I have a problem with my SQL Server.
If someone can help me, then I'll be happy :) Thanks.
PostgreSQL Sequential Scan instead of Index Scan Why? Posted: 11 Mar 2013 04:05 PM PDT Hi all, I've got a problem with my PostgreSQL database query and am wondering if anyone can help. In some scenarios my query seems to ignore the index that I've created, which is used for joining the two tables Sequential Scan (~5 minutes) Index Scan (~3 seconds) (on explain.depesz.com) Table Structure This is the table structure for the QUERY Returns I've tried doing an Could anyone shed any light on this or suggest anything else I should try?
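The query and plans were stripped from this digest; as a hedged diagnostic (not a fix), one can compare what the planner does when sequential scans are discouraged, and make sure statistics are fresh. Table and column names below are placeholders:

```sql
SET enable_seqscan = off;            -- session-only, for comparison
EXPLAIN ANALYZE
SELECT a.id, b.payload
FROM   table_a a
JOIN   table_b b ON b.a_id = a.id
WHERE  a.created_at >= DATE '2013-03-01';
RESET enable_seqscan;

-- Stale statistics are a common reason the planner prefers a sequential scan:
ANALYZE table_a;
ANALYZE table_b;
```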
Oracle 11g listener fails with ORA-12514 and ORA-12505 errors Posted: 11 Mar 2013 08:41 PM PDT I run an instance of Oracle 11g locally on my development machine and can connect to the local instance directly via SqlPlus: But I cannot connect to it via the listener: Similarly, if I connect via SqlDeveloper I get an error (albeit This instance has been stable and working fine for a year or more until today, a Monday morning. Our corporate IT do sometimes push new policies and updates over the weekend, so I'm assuming that something has changed, but I've not been able to work out what. I've restarted the service and the listener several times; the listener log doesn't give any clues. The listener seems fine: Port 1521 seems OK: (PID 4368 is the TNSLSNR.exe process.) Also, I can The Additionally, and I've no idea if it is related, I can't seem to access apex on So where else should I be looking? Update with requested information:
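The connection strings and listener output were stripped from this digest. As a hedged set of checks only (the service and listener names are whatever the instance actually uses): ORA-12514 typically means the listener does not know the requested service, so it is worth confirming what the instance thinks it should register and forcing a re-registration from a local SYSDBA session:

```sql
SELECT name, value
FROM   v$parameter
WHERE  name IN ('service_names', 'local_listener', 'instance_name');

ALTER SYSTEM REGISTER;   -- ask the instance to re-register its services with the listener
```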
What is the best way to store X509 certificate in PostgreSQL database? Posted: 11 Mar 2013 07:06 PM PDT I'm working on a web authentication system where users will digitally sign random tokens and this will be checked against X509 certificates stored on the server. Therefore I have to store several X509 certificates (PEM or DER format) in a PostgreSQL database. Sounds easy, but I want the ability to search certificates by subject, issuer, notBefore, notAfter and similar criteria. My idea is to have the following columns in the database: X509data, notAfter, notBefore, subject, issuer etc. Then I will create an object (in SQLAlchemy) representing an X509 certificate with methods like add_new_X509(), find_X509(search criteria) etc. So whenever I add a new certificate with add_new_X509(), it will automatically read all the data from the certificate, fill in the rest of the columns, and put the raw certificate into the X509data column. Unfortunately this solution has two disadvantages:
So... does anybody have a better idea or suggestion? Maybe somebody sees another security issue that could arise with this solution?
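Only an illustrative version of the layout described above; the column types and index choices are assumptions:

```sql
CREATE TABLE x509_certificates (
    cert_id    SERIAL PRIMARY KEY,
    x509_data  BYTEA NOT NULL,           -- raw DER (or store the PEM text in a TEXT column)
    subject    TEXT NOT NULL,
    issuer     TEXT NOT NULL,
    not_before TIMESTAMPTZ NOT NULL,
    not_after  TIMESTAMPTZ NOT NULL
);

CREATE INDEX idx_x509_subject  ON x509_certificates (subject);
CREATE INDEX idx_x509_validity ON x509_certificates (not_before, not_after);

-- Example search by subject and validity window:
SELECT cert_id
FROM   x509_certificates
WHERE  subject = 'CN=example.org'
  AND  now() BETWEEN not_before AND not_after;
```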
Most efficient way to sort data fields into SQL Posted: 11 Mar 2013 03:43 PM PDT I'm trying to decide on the most efficient way to sort various data values. Here's how the data arrives: Device X sends a text string "name=value&name2=value&name=value" On arrival that string is stuffed into a SQL row along with the unique address of the sending device. This keeps data flowing easily into a SQLite database. My parsing script first gets all unique device addresses. Those are put in a hash for the parser and inserted into a new database. (The hash contains the rowid from the db after the insert, with more logic to keep race conditions out of the mix.) Then each row of string data is split up by the Here's the general table layout: Each row is read from the rawData, sorted and marked as processed. (This makes it easy to muck around with the parsing script.) Data is placed into these: I'm trying to decide on the most efficient method of inserting this data. It ends up as thousands of rows. I suppose I could keep a hash of known device/name pairs. Then if the hash doesn't know about a new name I can go ahead and insert it and refresh the hash... Am I missing something totally obvious? The goal is to keep selects to a minimum for efficiency!
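The table layout in the post was stripped from this digest, so the following is a hedged SQLite sketch with invented names; the point is that INSERT OR IGNORE plus unique constraints lets the parser attempt inserts without a prior SELECT, keeping reads to a minimum:

```sql
CREATE TABLE IF NOT EXISTS devices (
    device_id INTEGER PRIMARY KEY,
    address   TEXT NOT NULL UNIQUE
);

CREATE TABLE IF NOT EXISTS data_names (
    name_id   INTEGER PRIMARY KEY,
    device_id INTEGER NOT NULL REFERENCES devices (device_id),
    name      TEXT NOT NULL,
    UNIQUE (device_id, name)
);

-- Duplicate device addresses and device/name pairs are silently skipped:
INSERT OR IGNORE INTO devices (address) VALUES ('00:11:22:33:44:55');
INSERT OR IGNORE INTO data_names (device_id, name)
SELECT device_id, 'temperature' FROM devices WHERE address = '00:11:22:33:44:55';
```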
How should I tune Postgresql for 20 GB of RAM? Posted: 11 Mar 2013 03:44 PM PDT I've been fortunate enough to have the use of a 20GB Linode instance running 64-bit Ubuntu. I want to try to optimize Postgres for this service, but I don't know what I should prioritize changing. I have several datasets of 20,000 or so rows, and the calculations being performed are memory-intensive queries (spatial analyses) with a small number of rows being written after each request. The total number of users is very small (10 - 50). I've read through this article on the PostgreSQL site, but I don't know enough about how this works to know what I should prioritize. I've also looked at advice on what to change for geo-type work here. For example, I tried changing the
I returned this to its original value and tried changing: My problem is that I don't know which values I should try changing, or how I should prioritize which values are key to experiment with and test. What should I specifically do to optimize this database?
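The parameters the asker experimented with were stripped from this digest. As a hedged starting point only (not a definitive configuration), these are the postgresql.conf values usually prioritized for a memory-heavy workload on roughly 20 GB of RAM; the exact numbers should be validated against the actual queries:

```
shared_buffers = 5GB            # ~25% of RAM is a common rule of thumb
effective_cache_size = 15GB     # what the planner may assume the OS cache can hold
work_mem = 64MB                 # per sort/hash per query; a small user count allows larger values
maintenance_work_mem = 1GB      # speeds up VACUUM and index builds
```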
Efficient way to move rows across the tables? Posted: 11 Mar 2013 07:10 PM PDT This is a somewhat long question, as I would like to explain all the details of the problem.

System Description

We have a queue of incoming messages from external system(s). Messages are immediately stored in the e.g. INBOX table. A few worker threads fetch a chunk of jobs from the table (first mark some messages with UPDATE, then SELECT the marked messages). The workers do not process the messages; they dispatch them to different internal components (called 'processors'), depending on the message command. Each message contains several text fields (the longest is around 200 varchars), a few ids, some timestamp(s), etc.; 10-15 columns total. Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately, others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it. We must work with that message for a while (async task). Still, there are not too many processors in the system, up to 10. Messages are all internal, i.e. it is not important for the user to browse them, paginate, etc. The user may require a list of processed relevant messages, but that's not a mission-critical feature, so it does not have to be fast. Some invalid messages may be deleted sometimes. It's important to emphasize that the expected traffic might be quite high, and we don't want bottlenecks because of bad database design. The database is MySQL.

Decision

One of the decisions is not to have one big table for all messages, with some flags column that would indicate the various message states. The idea is to have a table per processor, and to move messages around. For example, received messages will be stored in INBOX, then moved by the dispatcher to some e.g. PROCESSOR_1 table, and finally moved to an ARCHIVE table. There should not be more than 2 such movements. While in the processing state, we do allow flags for indicating processing-specific states, if any. In other words, the PROCESSOR_X table may track the state of the messages, since the number of currently processing messages will be significantly smaller. The reason for this is not to use one BIG table for everything.

Question

Since we are moving messages around, I wonder how expensive this is with high volumes. Which of the following scenarios is better: (A) to have all the separate, similar tables, as explained, and move complete message rows, e.g. read the complete row from INBOX, write it to the PROCESSOR table (with some additional columns), delete it from INBOX; or (B) to prevent physical movement of the content, have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just IDs of messages and additional columns. So now, when a message is about to move, we physically move much less data, just the IDs. The rest of the message remains in the MESSAGES table, unmodified, all the time. In other words, is there a penalty in a SQL join between one smaller and one huge table? Thank you for your patience; I hope I was clear enough.
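Purely as an illustration of option (B) above, with all names invented: the message body lives in one table and only ids move between the per-processor queue tables:

```sql
CREATE TABLE messages (
    message_id  BIGINT PRIMARY KEY,
    command     VARCHAR(64)  NOT NULL,
    payload     VARCHAR(200) NOT NULL,
    received_at TIMESTAMP    NOT NULL
);

CREATE TABLE inbox (
    message_id BIGINT PRIMARY KEY,
    claimed_by VARCHAR(64) NULL,
    FOREIGN KEY (message_id) REFERENCES messages (message_id)
);

CREATE TABLE processor_1_queue (
    message_id BIGINT PRIMARY KEY,
    state      VARCHAR(32) NOT NULL,
    FOREIGN KEY (message_id) REFERENCES messages (message_id)
);

-- Moving a message between stages then touches only the narrow id rows:
INSERT INTO processor_1_queue (message_id, state)
SELECT message_id, 'NEW' FROM inbox WHERE message_id = 12345;
DELETE FROM inbox WHERE message_id = 12345;
```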
Write differences between varchar and nvarchar Posted: 11 Mar 2013 02:19 PM PDT Currently in our SQL Server 2012 database, we're using My question is: are there any differences in how SQL Server writes to Edit:
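A small demonstration of the core storage difference, which is the main thing that changes on write: nvarchar stores UTF-16, two bytes per character, while varchar stores one byte per character under single-byte collations:

```sql
SELECT DATALENGTH(CAST('abcde' AS VARCHAR(50)))  AS varchar_bytes,   -- 5
       DATALENGTH(CAST('abcde' AS NVARCHAR(50))) AS nvarchar_bytes;  -- 10
```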
How to properly kill MySQL? Posted: 11 Mar 2013 09:06 PM PDT I have CentOS 64bit with CPanel installed and I use However, it keeps doing ..... for minutes and it never stops. It used to be instant. Any idea why it does that and how to fix it? Right now I have to do The server is also very, very active. Is this a config issue? Are my memory settings too high?
Why would mysql "show global status" query be taking 15 minutes? Posted: 11 Mar 2013 03:36 PM PDT I'm reviewing the slow log, and on one of my slaves the average time for SHOW GLOBAL STATUS is 914s. Any idea how to determine the cause of this?
Is there a way to export Oracle's UNDO? Posted: 11 Mar 2013 04:36 PM PDT I tried the exp utility to dump the whole database. It looks like this exports only the latest version of the data, skipping the undo log. Using flashback queries I see: What I'm trying to do is to capture db changes and make a backup for later use, with the ability to flash back to a timestamp. With an RMAN backup I have a similar situation: Update: I managed to do what I needed only by increasing the undo retention, directly copying the data files, and modifying the control file on the cloned instance.
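For reference, this is the general shape of the flashback query being referred to (the table name is a placeholder); it only works while the required undo is still retained, which is why increasing the retention was necessary:

```sql
SELECT *
FROM   orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);

-- Retention of the needed undo is governed by the undo settings, e.g.:
ALTER SYSTEM SET undo_retention = 86400;   -- seconds; still limited by undo tablespace size
```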
MySQL-5.5 InnoDB memory issue Posted: 11 Mar 2013 07:37 PM PDT The version in use is mysql-5.5.24. In the Enterprise version of MySQL I am not seeing free space in Is there such a difference between the Enterprise and Community versions of MySQL?
MySQL table relations, inheritance or not? Posted: 11 Mar 2013 02:36 PM PDT I'm building a micro CMS, using MySQL as the RDBMS and Doctrine ORM for mapping. I would like to have two types of pages: Static Page and Blog Page. A Static page would have page_url and page_content stored in the database. A Blog page would have page_url, but no page_content. A Blog would have Posts, Categories... Let's say I have a route like this: This is a page, with a page URL that can be home, news, blog... That page can be either a Static page, in which case I would just print page_content, or a Blog Page, in which case I would print the latest posts as content. How should I relate these Static Page and Blog Page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both Static and Blog Page extend a Page that has page_url? Or should I make another table, page_types, to store information about the available page types?
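A hedged sketch of the inheritance option described above (Doctrine calls this Class Table Inheritance); all table and column names are assumptions:

```sql
CREATE TABLE page (
    page_id   INT PRIMARY KEY,
    page_url  VARCHAR(255) NOT NULL UNIQUE,
    page_type ENUM('static', 'blog') NOT NULL
);

CREATE TABLE static_page (
    page_id      INT PRIMARY KEY,
    page_content TEXT NOT NULL,
    FOREIGN KEY (page_id) REFERENCES page (page_id)
);

CREATE TABLE blog_page (
    page_id INT PRIMARY KEY,
    FOREIGN KEY (page_id) REFERENCES page (page_id)
);

CREATE TABLE post (
    post_id INT PRIMARY KEY,
    page_id INT NOT NULL,
    title   VARCHAR(255) NOT NULL,
    body    TEXT NOT NULL,
    FOREIGN KEY (page_id) REFERENCES blog_page (page_id)
);
```

A page_types lookup table would also work; the class-table layout just keeps page_content out of blog pages and lets the ORM hydrate the right subclass from page_type.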
When should I think about upgrading our RDS MySQL instance based on memory usage? Posted: 11 Mar 2013 01:36 PM PDT It seems like our DB server is doing garbage collection at an increasingly faster rate, which seems normal since it's growing. What's a good rule of thumb for when to switch to a bigger instance? I'm not a DBA and have no frame of reference. It seems to be doing garbage collection once every 2-3 days now, whenever there's only 100 MB left. The server itself has 1.7 GB of RAM.