[how to] Advice on SQL fundamentals online training [on hold]
- Advice on SQL fundamentals online training [on hold]
- Import CSV file into MySQL with Multiple Delimiters/Field Separators
- Bidirectional synchronization between a local SQL Server 2005 database and a SQL Azure database using SQL Data Sync
- How to move a CSV file into an Oracle database using SQL Developer?
- Creating PostGIS extension in single-user mode
- Database design question
- Which string variables in MySQL support UTF-8?
- Two possibilities of primary key
- Oracle: how to set only some roles as not default for a user (no GUI)
- Curious About SQL Server Registry Entries
- Dropped index from view still referenced in execution plans
- Review my simple database tables
- UPDATE SET REPLACE() matches but does not change
- Max Connection Pool capped at 100
- How to script all permissions on a schema
- Data Model: Parent, Child, Grandchild with Child Being Optional
- Importing Multiple Trace Files To A SQL Server Table
- Can't stop MySQL server on Raspberry Pi
- I am trying to import a file but it gives me the exception ORA-20001: Invalid Operation while calling a stored procedure; how can I solve this? [on hold]
- Cross Join with Filter?
- Newly installed PostgreSQL 9.2 on same box as 9.1
- MySQL LOAD DATA INFILE taking a long time
- Backup / Export data from MySQL 5.5 attachments table keeps failing!
- Constraint to one of the primary keys as foreign key
- disk I/O error in SQLite
- How to find similar words with more similarities
- Best way to copy data from SQL Server DB to MySQL DB (remote)
- Creating the MySQL slow query log file
- SSIS Script to split string into columns
- Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100
Advice on SQL fundamentals online training [on hold] Posted: 08 Oct 2013 07:17 PM PDT Sorry for the very noob question here, but can anyone tell me the best crash course in SQL fundamentals I could study for table maintenance (it's an Oracle shop, by the way) and minor config changes? I have the chance at a promotion and I need to know these skills (all the other skills required I have, but SQL is where I fall over, sadly). I will have a month to learn. Cheers
Import CSV file into MySQL with Multiple Delimiters/Field Separators Posted: 08 Oct 2013 05:59 PM PDT I'm trying to import a large CSV file into MySQL. Unfortunately, the data within the file is separated both by spaces and by tabs. As a result, whenever I load the data into my table, I end up with countless empty cells (because MySQL only recognizes one field separator). Modifying the data before importing it is not an option. Here is an example of the data (where the second and third value of every row are separated by a tab): Any ideas?
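One way around the single-separator limit is to load the mixed-delimiter portion into a user variable and split it in a SET clause, keeping the file untouched. A minimal MySQL sketch, assuming a hypothetical table `readings` whose second and third values arrive tab-separated inside one space-delimited field:

```sql
-- Assumed layout per line: col1<space>col2<tab>col3; all names are placeholders.
LOAD DATA INFILE '/tmp/data.csv'
INTO TABLE readings
FIELDS TERMINATED BY ' '
LINES TERMINATED BY '\n'
(col1, @col2_and_3)
SET col2 = SUBSTRING_INDEX(@col2_and_3, '\t', 1),   -- text before the tab
    col3 = SUBSTRING_INDEX(@col2_and_3, '\t', -1);  -- text after the tab
```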
Bidirectional synchronization between a local SQL Server 2005 database and a SQL Azure database using SQL Data Sync Posted: 08 Oct 2013 08:20 PM PDT I need to synchronize a SQL Server 2005 database with a SQL Azure database. In other posts I've read that I can do this using SQL Data Sync, but I don't know if SQL Data Sync can perform the synchronizations I need without exceeding its limitations. This is the exact process I need to follow: First, synchronize one table from Azure to SQL Server. Second, execute some stored procedures on the SQL Server 2005 instance. And third, synchronize various tables from SQL Server to Azure sequentially. Thanks!
How to move a CSV file into an Oracle database using SQL Developer? Posted: 08 Oct 2013 09:02 PM PDT How can I move a CSV file into an Oracle database, without creating the table first, using SQL Developer rather than SQL*Loader?
Creating PostGIS extension in single-user mode Posted: 08 Oct 2013 03:55 PM PDT I am trying to create a PostGIS extension for my PostgreSQL database while running in single-user mode, using the following command:
which returns
I am doing this while deploying a Docker container, so I cannot use psql, since the database is not running at that moment. After the Docker deployment is finished, the database is started and running. What does this mean?
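For context, the statement being run in single-user mode is presumably the standard extension DDL; a minimal sketch, assuming the PostGIS packages are installed on the server:

```sql
-- Run against the target database; requires the postgis extension files
-- to be present in the server's extension directory.
CREATE EXTENSION IF NOT EXISTS postgis;
```

An alternative to single-user mode is to start the server normally inside the container's entrypoint script and run the same statement through psql before the container accepts outside traffic.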
Database design question Posted: 08 Oct 2013 06:18 PM PDT Hello, and thank you for reading my post. I work for a company that houses many gigabytes of data in SQL Server. The company is in the process of taking a huge step forward to re-architect and re-organize this data. The following facts summarize the environment:
Questions: Will SQL Server possibly be able to live up to this, performance-wise? Those huge tables will have a small record size, consisting of about 8 integer fields and 2 text fields. Should the company be looking at more of a big-data solution like Hadoop? While that architecture looks more appropriate, there is no internal knowledge of anything except SQL Server, which is version 2012. Thank you for any insight you are able to provide.
Which string variables in MySQL support UTF-8? Posted: 08 Oct 2013 03:35 PM PDT I have a table that stores strings from a user, which are displayed in a web form. But, as I see it, when the user inputs text in a Cyrillic-alphabet language (which I assume is UTF-8), garbage is displayed back. I did a SHOW CREATE TABLE and saw that the table is defined as latin1 and the column that stores the string is defined as TEXT. I am not clear on what type of data TEXT stores. Is it only for ASCII? Should I be using a different data type? Which would be the most appropriate?
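The column type is not the issue here: TEXT stores whatever the column's character set allows, and latin1 cannot represent Cyrillic. A minimal sketch of the usual fix, assuming a hypothetical table `messages`:

```sql
-- Convert the table (and its TEXT columns) from latin1 to UTF-8.
-- utf8mb4 is preferred on MySQL 5.5.3+; plain utf8 covers the BMP only.
ALTER TABLE messages CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```

The client connection must also declare the same character set (for example via `SET NAMES utf8mb4;`), or the round trip will still garble the text.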
Two possibilities of primary key Posted: 08 Oct 2013 05:42 PM PDT I have a table that holds the users' identification. This ID can be of two types, suppose: So I created two other tables with primary keys of the types that I needed, added some information specific to each type of user, and made relations with the single table that holds the rest of the users' information (which both types share): In the table that holds the shared user information I created an artificial primary key, so I don't have to verify which column holds the user's ID every time I need to manipulate it, and also so I don't have to create two columns in every other table where I want a relation with the user ID: The question is: is this the best approach, or should I change the plan? For example:
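A minimal sketch of the surrogate-key layout described (all names are placeholders; `id_type_a` and `id_type_b` stand in for the two real identifier types, which weren't shown, and AUTO_INCREMENT is MySQL dialect):

```sql
CREATE TABLE app_user (
    user_id INT AUTO_INCREMENT PRIMARY KEY    -- the artificial key other tables reference
    -- shared user columns go here
);

CREATE TABLE user_type_a (
    id_type_a VARCHAR(20) PRIMARY KEY,        -- natural identifier of the first type
    user_id   INT NOT NULL UNIQUE,
    FOREIGN KEY (user_id) REFERENCES app_user (user_id)
);

CREATE TABLE user_type_b (
    id_type_b VARCHAR(20) PRIMARY KEY,        -- natural identifier of the second type
    user_id   INT NOT NULL UNIQUE,
    FOREIGN KEY (user_id) REFERENCES app_user (user_id)
);
```

With this shape, every other table carries a single user_id foreign key regardless of which identifier type the user actually has.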
Oracle: how to set only some roles as not default for a user (no GUI) Posted: 08 Oct 2013 12:29 PM PDT Scenario:
Problem:
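Although the scenario details were elided above, the title's ask maps to standard Oracle DDL; a minimal sketch with placeholder user and role names:

```sql
-- Make every granted role default EXCEPT the ones listed
-- (role_a and role_b must then be enabled explicitly with SET ROLE).
ALTER USER some_user DEFAULT ROLE ALL EXCEPT role_a, role_b;
```

`SELECT * FROM dba_role_privs WHERE grantee = 'SOME_USER';` shows which granted roles are currently flagged as default.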
Curious About SQL Server Registry Entries Posted: 08 Oct 2013 12:08 PM PDT I'm working on building some documentation across my systems and planned on simply using PowerShell to script out the legwork of what I wanted to do. While that's going all well and good, I ran into an issue with my SQL Server registry entries: some of my SQL Servers have all of their registry values, while some don't. For example, I'm pulling down the SQL Shared Features entry so I know where that directory is on all servers. Some of them are returning data (E:\yada\yada) while others aren't returning anything. After further investigation, I have noticed that several of my SQL Servers don't have the expected registration data saved in the registry. This happens with several different registry entries that should be there. Any reason it's like this?
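For comparison with the PowerShell approach, the same lookup can be done from inside each instance with the (undocumented) `xp_instance_regread` extended procedure; the key path below is an assumption and varies by version and install:

```sql
-- Reads a setup value from the instance-relative registry hive.
DECLARE @dir NVARCHAR(512);
EXEC master.dbo.xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'SOFTWARE\Microsoft\MSSQLServer\Setup',  -- assumed key path
     N'SQLPath',
     @dir OUTPUT;
SELECT @dir AS shared_features_dir;
```

If this also returns NULL on the problem servers, the keys genuinely are absent rather than the script lacking rights to read them.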
Dropped index from view still referenced in execution plans Posted: 08 Oct 2013 05:51 PM PDT On a SQL Server 2005 Enterprise Edition server, an index has been dropped from a view. When running a SELECT * query that includes this view, no results are shown. In addition, the execution plan references this index on the view (which no longer exists). By adding the We have tried to clear the cached plans with Edit: I have tried sp_refreshview, with no change. The SQL build version is 9.00.3042 (SP2).
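One way to hunt down which cached plans still mention the dropped index (the index name below is a placeholder) is to search the plan cache directly:

```sql
-- Find cached plans whose XML still references the dropped index.
SELECT cp.plan_handle, st.text AS statement_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%IX_MyDroppedIndex%';
```

These DMVs exist on SQL Server 2005; note that on 2005 the only eviction option is flushing the whole cache with `DBCC FREEPROCCACHE` (per-plan eviction by plan_handle arrived in 2008).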
Review my simple database tables Posted: 08 Oct 2013 01:49 PM PDT Hey, I'm trying to create simple database tables for a small beer-review project. It's been a while since I created DBs, so could anyone just tell me if I am totally wrong here? Especially my use of uniqueidentifiers; as I see it, they would always be unique? Project description: a simple ASP.NET site containing beer data and information on specific key elements, like the brewery. People should be able to search by name and read about the beer.
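The schema itself was not included in the post, but here is a minimal T-SQL sketch of the shape being described, using integer identities instead of uniqueidentifiers (an assumption; GUIDs work too, they are just wider and fragment a clustered index):

```sql
CREATE TABLE brewery (
    brewery_id INT IDENTITY(1,1) PRIMARY KEY,
    name       NVARCHAR(200) NOT NULL
);

CREATE TABLE beer (
    beer_id    INT IDENTITY(1,1) PRIMARY KEY,
    brewery_id INT NOT NULL REFERENCES brewery (brewery_id),
    name       NVARCHAR(200) NOT NULL
);

-- Searching by name, as the project description requires:
-- SELECT b.name, br.name AS brewery
-- FROM beer AS b JOIN brewery AS br ON br.brewery_id = b.brewery_id
-- WHERE b.name LIKE @search + '%';
```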
UPDATE SET REPLACE() matches but does not change Posted: 08 Oct 2013 01:05 PM PDT I've seen another post about this exact thing, but it's a year and a half old, so I thought I'd make my own; besides, the other post did not help me. What I am trying to do is pretty straightforward. To be on the safe side, I tried a simple UPDATE statement on a different table, which was successful. So, here's my problem: I'm not using wildcards, nor am I using a LIKE clause. I am telling it which record to update. Why isn't my text changed?
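The statement in question was elided, but here is a sketch of the pattern being described, with placeholder names. One common cause of "matched but not changed" in MySQL is that REPLACE() performs a case-sensitive match even when the column's collation is case-insensitive, so a case or whitespace mismatch in the search string replaces nothing while the row still counts as matched:

```sql
-- Rows matched by the WHERE clause are reported as matched even when
-- REPLACE() finds no occurrence and therefore changes nothing.
UPDATE articles
SET    body = REPLACE(body, 'old text', 'new text')  -- search string must match exactly
WHERE  article_id = 42;
```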
Max Connection Pool capped at 100 Posted: 08 Oct 2013 10:20 AM PDT I'm running SQL Server 2008 R2 SP1, on a Windows Server 2008 box. I have a .NET script running from Visual Studio 2010 that does the following:
The total number of times it will iterate is 150; however, it stops at 100 connections and I can't figure out why. I could adjust my script to just use a single thread, but I'd prefer to know where I'm missing a max-connection setting, as that will be more useful to know for future reference. Here's where I've checked so far:
I'm not sure where else to check. I know I have a lot of moving parts here, but I'm getting the feeling I'm just missing a max pool setting somewhere.
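One likely culprit, offered as an assumption since the connection string wasn't shown: ADO.NET caps each connection pool at 100 connections per distinct connection string by default, and the cap is raised in the connection string itself (for example `Max Pool Size=150;`), not anywhere in SQL Server. The server side can be watched while the script runs:

```sql
-- Count current sessions for the login the .NET script uses (name is a placeholder).
SELECT COUNT(*) AS open_connections
FROM sys.dm_exec_sessions
WHERE login_name = N'script_login';
```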
How to script all permissions on a schema Posted: 08 Oct 2013 09:14 AM PDT SQL Server Management Studio allows you to create scripts for all DB objects; however, so far I couldn't find a way to correctly script a schema or user. The permissions of a user on a schema are not included in the script that is created. Did I do something wrong, or is MSFT here a bit sloppy?
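As a workaround, the GRANT statements can be generated from the catalog views; a sketch that works on SQL Server 2005 and later:

```sql
-- Build GRANT/DENY statements for every schema-level permission in the database.
SELECT dp.state_desc + N' ' + dp.permission_name
     + N' ON SCHEMA::' + QUOTENAME(s.name)
     + N' TO ' + QUOTENAME(pr.name) + N';' AS grant_script
FROM sys.database_permissions AS dp
JOIN sys.schemas             AS s  ON s.schema_id = dp.major_id
JOIN sys.database_principals AS pr ON pr.principal_id = dp.grantee_principal_id
WHERE dp.class_desc = N'SCHEMA';
```

Note that state_desc for a grant made WITH GRANT OPTION is `GRANT_WITH_GRANT_OPTION`, which a production version of this script would need to special-case.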
Data Model: Parent, Child, Grandchild with Child Being Optional Posted: 08 Oct 2013 08:42 AM PDT I posted this question in another forum and was advised to re-post it here, as this may be more appropriate. Thank you in advance. My organization structure: However, there are two departments which have no intermediate Sections, and employees of those departments report to the Department Director directly.
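A minimal sketch of the usual modeling for an optional middle level, with placeholder names: employee carries a mandatory department key plus a nullable section key, so department-direct reports simply leave `section_id` NULL:

```sql
CREATE TABLE department (
    department_id INT PRIMARY KEY,
    name          NVARCHAR(100) NOT NULL
);

CREATE TABLE section (
    section_id    INT PRIMARY KEY,
    department_id INT NOT NULL REFERENCES department (department_id),
    name          NVARCHAR(100) NOT NULL
);

CREATE TABLE employee (
    employee_id   INT PRIMARY KEY,
    department_id INT NOT NULL REFERENCES department (department_id),
    section_id    INT NULL REFERENCES section (section_id),  -- NULL = reports to the director
    name          NVARCHAR(100) NOT NULL
);
```

One caveat with this shape: nothing stops a row from naming a section that belongs to a different department; a composite foreign key on (department_id, section_id) against a matching unique key on section closes that hole.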
Importing Multiple Trace Files To A SQL Server Table Posted: 08 Oct 2013 10:22 AM PDT I am in the process of importing 200+ trace files (which are massive), and my current approach is to perform a loop and insert the trace data (see the script below). I looked around to see if there was a faster way to do this, whether through SSIS or C#, and it appears they still call the same function as the script below. Does anyone have any other methods that they use to import multiple traces? Don't get me wrong, the code below works, but I'm curious if there's something faster that I'm not considering. Data notes: 490 MB (~0.5 GB), which holds 11,700,000+ rows, requires 13:11 minutes to import.
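The asker's script was elided, but the function in question is almost certainly `fn_trace_gettable`; a minimal sketch with a placeholder path:

```sql
-- DEFAULT as the second argument makes it read the named file plus any rollover files.
SELECT *
INTO dbo.trace_data
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);
```

Since all 200 files funnel through this one table-valued function, parallelizing across several connections, each loading a different file into its own staging table, is usually the main remaining lever.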
Can't stop MySQL server on Raspberry Pi Posted: 08 Oct 2013 09:48 AM PDT I can't stop the MySQL server on my Raspberry Pi. Using MySQL Workbench, I have been able to start and stop the server perfectly fine for months. However, it now refuses to stop! How do I force it to stop? I am using the Raspbian OS.
I am trying to import a file but it gives me the exception ORA-20001: Invalid Operation while calling a stored procedure; how can I solve this? [on hold] Posted: 08 Oct 2013 05:04 PM PDT I am trying to import a file, but while calling the procedure I get the exception ORA-20001: Invalid Operation. Stored procedure: On the line below I get the exception: I have another bit of code which executes correctly, but gives an exception for this stored procedure:
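For context (an assumption, since the procedure source was elided): ORA-20000 through ORA-20999 are reserved for user-defined errors, so this exception is raised deliberately somewhere inside the called procedure by a PL/SQL statement of roughly this shape:

```sql
-- PL/SQL: the application's own validation raising the error the asker sees.
RAISE_APPLICATION_ERROR(-20001, 'Invalid Operation.');
```

Finding that call, and the condition guarding it, in the procedure body is the way to learn what the "invalid operation" actually is.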
Cross Join with Filter? Posted: 08 Oct 2013 09:19 AM PDT I need to make a stored procedure to distribute students to their sections. The procedure takes 2 string parameters, StuID and SecID. Suppose I've sent '1,2,3,4,5' as StuID and 'a,b' as SecID. I'm using a splitting function which returns tables. How can I get the following result? I've tried to do it via CROSS JOIN, but it did not show the result I want.
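The desired result set wasn't shown, but if the goal is a round-robin assignment (1→a, 2→b, 3→a, …) rather than a full cross product, here is a T-SQL sketch assuming a hypothetical table-valued splitter `dbo.SplitString` that returns a column `val`:

```sql
-- Number both lists, then map each student to a section by position modulo section count.
WITH stu AS (
    SELECT val, ROW_NUMBER() OVER (ORDER BY val) AS rn
    FROM dbo.SplitString(@StuID, ',')
),
sec AS (
    SELECT val, ROW_NUMBER() OVER (ORDER BY val) AS rn,
           COUNT(*) OVER () AS cnt
    FROM dbo.SplitString(@SecID, ',')
)
SELECT stu.val AS StuID, sec.val AS SecID
FROM stu
JOIN sec ON sec.rn = ((stu.rn - 1) % sec.cnt) + 1;
```

A plain CROSS JOIN pairs every student with every section, which is why it didn't produce a one-section-per-student result.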
Newly installed PostgreSQL 9.2 on same box as 9.1 Posted: 08 Oct 2013 12:46 PM PDT I have a new project at work that is using PostgreSQL 9.2, but I'm still having to support a project that uses 9.1, so I'm trying to configure my local dev box to have both installed. I have gotten 9.2 installed and confirmed it runs fine. However, I can't connect to it. So, how do I connect to this new instance? I thought that the Ubuntu/OS postgres user would allow me to connect, but it doesn't. Other info:
MySQL LOAD DATA INFILE taking a long time Posted: 08 Oct 2013 02:04 PM PDT I have a MySQL DB running on a Raspberry Pi. Under normal circumstances, MySQL actually runs slightly quicker than it did on my much more powerful desktop. However, I am trying to insert 60 million records into the database using LOAD DATA INFILE. I tried it all in one go (a 1.2 GB file) and it was still trying to load the data 1.5 days later. So I tried loading it in chunks of 100,000, which was fine for the first 3 million records but soon started to grind to a halt. I then removed the indexes from the table, and it seems to run a bit quicker, but I noticed that for each 100,000 rows I insert, the time increases by about 20 seconds. What is strange is that when I did a database restore from my original desktop machine's database (an identical DB with 60 million rows in the main table), the restore only took about 1 hour. What is causing the slowdown for LOAD DATA INFILE? I should point out that I am using InnoDB. EDIT: I reduced the chunks to 1,000 records and left it running, which did appear to speed things up: after about 1 hour it had inserted 24 million records, though each insert of 1,000 was taking about 30 seconds. However, I then decided to stop it and restarted the Raspberry Pi. When I ran the import again, lo and behold, the initial inserts were back to less than one second. So my question is: do I need to clear a cache or something? MySQL appears to be getting bogged down, rather than LOAD DATA INFILE itself being slow. It is almost as if it is filling up memory and not releasing it, or something much more technical to do with MySQL. EDIT: Just to give an idea of how much it is slowing down: it inserts around 27 million rows in the first 40 minutes (the majority of which are inserted in the first 15 minutes). Then I have estimated that it will take around 48+ hours to insert the next 30 million rows! However, if I restart the Raspberry Pi, it goes crazy quick again (despite the 27 million rows still being in the table).
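For bulk loads into InnoDB, the usual mitigations are to defer constraint checking and commit once at the end; a sketch with a placeholder path and table name (session-level settings, restored afterwards):

```sql
SET autocommit = 0;
SET unique_checks = 0;        -- skip per-row unique-index checks during the load
SET foreign_key_checks = 0;   -- skip FK validation during the load

LOAD DATA INFILE '/path/to/chunk.csv' INTO TABLE big_table
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
SET autocommit = 1;
```

On a Raspberry Pi's small RAM, the pattern described (fast until a point, fixed by a restart) is consistent with the InnoDB buffer pool filling up and the load becoming disk-bound, so `innodb_buffer_pool_size` and `innodb_log_file_size` are the settings worth reviewing.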
Backup / Export data from MySQL 5.5 attachments table keeps failing! Posted: 08 Oct 2013 11:25 AM PDT Can anyone please help? I have a large table in a MySQL 5.5 database. It is a table which holds a mixture of blobs/binary data and data rows with links to file paths. It has just over a million rows. I am having desperate problems getting the data out of this table to migrate it to another server. I have tried all sorts: mysqldump (with and without --quick), dumping the results of a query via the command line, and using a MySQL admin tool (Navicat) to open and export the data to file or CSV, or to do a data transfer (line by line) to another DB and/or another server, but all to no avail. When trying to use the DB admin tool (Navicat), it gets to approx 250k records and then fails with an "Out of memory" error. I am not able to get any error messages from the other processes I have tried, but they seem to fall over at approximately the same number of records. I have tried playing with the MySQL memory variables (buffer size, log file size, etc.) and this does seem to have an effect on where the export stops (currently I have actually made it worse). Also, max_allowed_packet is set to something ridiculously large, as I am aware this can be a problem too. I am really shooting in the dark, and I keep going round and round trying the same things and getting no further. Can anyone give me any specific guidance, or recommend any tools which I might be able to use to extract this data? Thanks in hope and advance! A little more information below, following some questions and advice: The size of the table I am trying to dump is difficult to say, but the SQL dump gets to 27 GB when the mysqldump dies. It could be approximately 4 times that in total. I have tried running the following mysqldump command: And this gives the error:
The server has 8 GB RAM; some of the relevant settings are copied below. It is an InnoDB database/table.
Constraint to one of the primary keys as foreign key Posted: 08 Oct 2013 02:25 PM PDT I want to have a constraint on grid such that col_id must be present in grid_col. I can't have a foreign key constraint here. I could create a function-based constraint which scans grid_col while inserting into grid, but that increases the chances of a deadlock. How do I add a constraint here?
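A guess at the shape of the problem, since the DDL wasn't shown: grid_col's primary key is composite, so col_id alone isn't referenceable. The standard fix is to give the referenced column its own unique constraint, which then supports a declarative foreign key (hypothetical names throughout):

```sql
-- Make col_id referenceable on its own, then point grid at it.
ALTER TABLE grid_col ADD CONSTRAINT uq_grid_col_col UNIQUE (col_id);

ALTER TABLE grid
  ADD CONSTRAINT fk_grid_col
  FOREIGN KEY (col_id) REFERENCES grid_col (col_id);
```

If col_id is not actually unique in grid_col, no declarative foreign key can express the rule, which is why trigger- or function-based checks, with their locking costs, come up at all.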
disk I/O error in SQLite Posted: 08 Oct 2013 03:25 PM PDT What are the possible things that would trigger the "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL inserts. Sample data in the file: I tried inserting that into the DB file with the following command: See below the error that I get: The input lines that don't generate an error are successfully included, but I don't understand why some lines have errors and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors on different lines, which means it's random (not related to the data itself). I tried adding
How to find similar words with more similarities Posted: 08 Oct 2013 03:43 PM PDT How do I find words with length less than or equal... @inp does not return any data, but I want to show:
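The query and sample data were elided, but given the @inp variable this looks like T-SQL; a sketch of one way to rank words by similarity while filtering on length (table and column names are placeholders):

```sql
-- DIFFERENCE() compares SOUNDEX codes: 0 = least similar, 4 = most similar.
SELECT w.word
FROM dbo.words AS w
WHERE LEN(w.word) <= LEN(@inp)
ORDER BY DIFFERENCE(w.word, @inp) DESC;
```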
Best way to copy data from SQL Server DB to MySQL DB (remote) Posted: 08 Oct 2013 10:33 AM PDT I currently copy data between my local SQL Server and a remote MySQL database via a linked server in SQL and a data-transformation task, but now that I'm sending lots of data it is getting very slow. I'm just after some advice, please, on which way anyone would recommend I go about getting large chunks of data from one to the other. I know there are various ways of doing this; I am just unsure if I should:
I just need a pointer in the right direction.
Creating the MySQL slow query log file Posted: 08 Oct 2013 12:25 PM PDT What do I need to do to generate the slow query log file in MySQL? I did: What more do I need to do?
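A sketch of the usual runtime setup on MySQL 5.1 and later (the path is a placeholder; its directory must be writable by the mysqld user):

```sql
SET GLOBAL slow_query_log      = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time     = 2;   -- log statements slower than 2 seconds
```

The same three settings can go in the `[mysqld]` section of my.cnf so they survive a server restart.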
SSIS Script to split string into columns Posted: 08 Oct 2013 04:25 PM PDT I have a dataset (log file) with a number of columns; one of them is "Other-Data" below (an unordered string), and I need to parse the string to create derived columns according to the u value (U1, U2, U3, etc.). The output columns should be something like: Other-Data: Can anyone help with this?
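Since the sample string was elided, here is a T-SQL sketch of pulling one key's value out of a delimited key=value string (the `u1=` key, the `;` separator, and all names are assumptions; the same logic ports to an SSIS derived column or script component):

```sql
-- Extract the text between 'u1=' and the next ';' (or end of string).
SELECT SUBSTRING(
           other_data,
           CHARINDEX('u1=', other_data) + 3,
           CHARINDEX(';', other_data + ';', CHARINDEX('u1=', other_data))
             - CHARINDEX('u1=', other_data) - 3
       ) AS u1
FROM dbo.log_data
WHERE CHARINDEX('u1=', other_data) > 0;
```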
Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100 Posted: 08 Oct 2013 10:25 AM PDT I have a SQL Server 2008 R2 server with a bunch of published databases that are currently operating under compatibility level 90 (2005). The subscription databases are also SQL Server 2008 R2; however, the destination databases are set to compatibility level 100, and replication is working fine. If I change the compatibility level of the published databases, will it affect replication in any way, or will it just be a case of reinitializing all the subscriptions and restarting replication? I suspect that changing the published database's compatibility level may change how the replication stored procedures function slightly, but I'm not 100% sure. Is this the case?
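For reference, the change itself is a one-liner (the database name is a placeholder), and it can be reverted the same way:

```sql
-- 100 = SQL Server 2008; 90 = SQL Server 2005.
ALTER DATABASE MyPublishedDb SET COMPATIBILITY_LEVEL = 100;
```

Compatibility level only alters certain query-processing behaviors inside that database; it does not change the on-disk format, which is why it is freely reversible.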