[how to] How to get Greenplum table definition? |
- How to get Greenplum table definition?
- How do I selectively purge MySQL table rows using a list of column values?
- Change URL of Erwin data model HTML export hyperlink of www.ca.com to something different
- SQL Server: Auto-generate scripts to create tables, indexes, etc.
- Trying to write a system function in SQL Server 2008 R2
- Watch output of MySQL query over time
- Can I find out what version of MySQL from the data files?
- How to avoid lagging when enabling log-slave-updates?
- How to create a procedure through terminal
- Foreign Key to Multicolumn Primary Key?
- Oracle Express intermittently stops responding, requiring an OS reboot to fix
- Why do some DBAs avoid the use of null? [duplicate]
- Challenge. Split database 500 records in two smaller datasets, randomly
- Postgres - Schema information within Trigger?
- Force sql server to run query conditions as written?
- Disable trigger still fired
- Is there a better way than the ones listed within to join data in an excel spreadsheet to a table in a SQL Server database?
- SQL Server: Sync Indexes between two tables within Same database
- Teradata performance statistics
- MySQL Read Speed and Partitioning on Separate Drives
- Is it ever a good idea to denormalize for integrity?
- Bitmask Flags with Lookup Tables Clarification
- SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?
- BIT columns all "1" after a phpMyAdmin export/import
- SQL Server replication subscriptions marked as inactive
- Error trying to run SSIS Packages in SQL Server 2008 SQL Server Agent
- Login failure when running a SSIS package from a SQL Server job
- error in Execute SQL Task with parameter
- Microsoft Visual Studio is Busy
- If I SET ARRAYSIZE 1 in SQL*Plus, the records are still fetched in pairs. Why?
How to get Greenplum table definition? Posted: 01 May 2013 08:24 PM PDT There are already tables created in the schema, but I don't know how to get the detailed information about how they were created, like the configuration used, the orientation, etc. How do I get this information in Greenplum? |
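Since Greenplum is built on PostgreSQL, one common approach (a sketch, assuming command-line access to the database; `mydb` and `myschema.mytable` are placeholders) is to dump just the DDL, or inspect the table from psql:

```sh
# Dump only the CREATE TABLE statement (no data) for one table
pg_dump mydb --schema-only --table=myschema.mytable

# Inside psql, \d+ shows the definition including storage options:
#   \d+ myschema.mytable
```

Greenplum-specific settings such as append-only orientation are recorded in catalog tables like pg_appendonly and in the relation's reloptions, so the dumped DDL should carry them.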
How do I selectively purge MySQL table rows using a list of column values? Posted: 01 May 2013 07:44 PM PDT I am brand new to databases and working with MySQL. I have a large MySQL table with the following columns: Id, Link, Timestamp. I also have a list of links. I want to keep only the rows whose Link column matches an entry in this list. Any pointers on how I accomplish this? |
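One way to do this (a sketch; `mytable` is a placeholder and the keep-list values are invented) is to load the list into a helper table and delete every row whose Link has no match in it:

```sql
-- Helper table holding the links to keep
CREATE TABLE links_to_keep (link VARCHAR(255) PRIMARY KEY);
INSERT INTO links_to_keep (link) VALUES
  ('http://example.com/a'),
  ('http://example.com/b');

-- MySQL multi-table DELETE: remove rows with no match in the keep-list
DELETE t
FROM mytable AS t
LEFT JOIN links_to_keep AS k ON k.link = t.Link
WHERE k.link IS NULL;
```

An index on mytable.Link makes this anti-join far cheaper than pasting the whole list into a NOT IN (...) clause.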
Change URL of Erwin data model HTML export hyperlink of www.ca.com to something different Posted: 01 May 2013 03:17 PM PDT The website URL of http://www.ca.com appears in every fact group (subject area) of our Erwin file. How can we make a global change to this URL in Erwin, so we don't have to update each subject area data model? |
SQL Server: Auto-generate scripts to create tables, indexes, etc. Posted: 01 May 2013 04:31 PM PDT In SSMS, we can right-click a DB/table/index etc. and select the 'Script As' option to generate a DROP or CREATE statement. Is there a way to automate this script generation and drop the scripts into a temp location, rather than manually right-clicking each object and generating them? Thanks, |
Trying to write a system function in SQL Server 2008 R2 Posted: 01 May 2013 05:56 PM PDT I am trying to write a simple system function so that the dbo. prefix is NOT required to access the function. I have found an example of this for SQL 2000, but when I run it This is the error thrown. The expected usage would be Does anyone know the approach I should be taking, or if I HAVE to create it as a regular user-defined function that must be called with the dbo. prefix, like: |
Watch output of MySQL query over time Posted: 01 May 2013 12:08 PM PDT I'd like to run a query every few seconds and show the output in a console window. From bash, using With the output: I'd like to not disconnect and reconnect every 2 seconds though. Is there a better way? |
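One way to avoid reconnecting (a sketch; the query, credentials, and interval are placeholders) is to generate the query repeatedly into a single long-lived mysql process, so only one connection is ever opened:

```sh
# One mysql session, fed a fresh query every 2 seconds
while true; do
    echo "SELECT COUNT(*) FROM mydb.mytable;"
    sleep 2
done | mysql -u myuser -p --unbuffered -t
```

Here --unbuffered flushes output after each query and -t keeps the tabular format that interactive mode would give.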
Can I find out what version of MySQL from the data files? Posted: 01 May 2013 02:10 PM PDT I have a very odd situation going on here. I had a Linux box running Ubuntu 8.10 and MySQL Server with a WordPress installation on it, hosting an internal blog for our company. The machine this was running on has crapped out, but I was able to put the drive into an enclosure, gain access to the files, and recover the MySQL data folder. We are mostly a Windows company and I have very limited Linux experience. Before this machine crashed we were experimenting with moving this blog to a Windows virtual machine running PHP for IIS, and we have an older copy of the WordPress database up and running on MySQL 5.1. I am currently trying to retrieve the latest data from the Linux MySQL data folder and import it into the current database, but I am having problems. Problems so far: 1) I do not know what version of MySQL was installed on the Linux box. 2) I see ib_logfile0, ib_logfile1, and ibdata1 in the Linux MySQL data folder, but am not sure if it was truly running InnoDB or not, as I also have .frm, .myd, and .myi files in the wordpress folder. If I can figure out the version of MySQL on the Linux box, I might be able to create a virtual machine running Linux and MySQL so I can copy the data folder and then do an export dump; unless someone else knows of an easier way to get the data into my current database. Any help or guidance on this issue would be greatly appreciated. Thanks, Nathan |
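Two hedged places to look for the version without a running server (paths here are placeholders):

```sh
# If mysql_upgrade ever ran, this file in the data directory
# records the server version it ran against
cat /path/to/datadir/mysql_upgrade_info

# The error log (if it was recovered with the data) prints the
# version at every startup
grep -i "ready for connections" /path/to/datadir/*.err
```

The "ready for connections" lines normally include a Version: '...' field.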
How to avoid lagging when enabling log-slave-updates? Posted: 01 May 2013 02:49 PM PDT Master:
Slave:
Both are running 5.5.28. For incremental backup purposes, I need to enable the log-slave-updates option, but doing so makes the slave lag. Is there any way to improve this? |
How to create a procedure through terminal Posted: 01 May 2013 11:34 AM PDT In the mysql console we use the delimiter command to change the delimiter, which is useful for defining procedures. I understood the command, but I am using a MySQL client which does not have it. For now consider: I tried the following: where proc_file contains the above procedure; but that gives me the following error: I even tried the following (added the trailing semicolon) but am getting the same error. The same thing I am experiencing with the |
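Two points worth noting: DELIMITER is a client-side command (the server never sees it), and the stock mysql client honors it even when reading from a file. A sketch, with a placeholder procedure body:

```sql
-- proc_file.sql: runnable as  mysql -u myuser -p mydb < proc_file.sql
DELIMITER //
CREATE PROCEDURE simpleproc (OUT param1 INT)
BEGIN
    SELECT COUNT(*) INTO param1 FROM mysql.user;
END //
DELIMITER ;
```

A client that cannot change the delimiter can still succeed if it sends the whole CREATE PROCEDURE text to the server as one statement instead of splitting on every semicolon.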
Foreign Key to Multicolumn Primary Key? Posted: 01 May 2013 11:50 AM PDT I have three connected tables in my design. The basic logic is that when a category is deleted, the records in the category_product table whose cat_id column holds the deleted category's id should also be deleted. But when I tried to create the tables with the query below, I got this error: "there is no unique constraint matching given keys for referenced table 'category_product'". I don't know what is wrong with this design or logic. What kind of structure do I need to achieve what I am trying? |
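That Postgres error usually means a FOREIGN KEY points at columns that are not covered by a PRIMARY KEY or UNIQUE constraint on the referenced table. A sketch of the usual shape (column names are placeholders, not the poster's actual DDL): give the join table a composite primary key and let ON DELETE CASCADE do the cleanup:

```sql
CREATE TABLE category (
    cat_id integer PRIMARY KEY,
    name   text NOT NULL
);

CREATE TABLE product (
    prod_id integer PRIMARY KEY,
    name    text NOT NULL
);

-- Join table: deleting a category (or product) removes its rows here
CREATE TABLE category_product (
    cat_id  integer NOT NULL REFERENCES category (cat_id) ON DELETE CASCADE,
    prod_id integer NOT NULL REFERENCES product (prod_id) ON DELETE CASCADE,
    PRIMARY KEY (cat_id, prod_id)
);
```

Nothing should reference category_product itself unless it carries a matching unique constraint, which is exactly the condition the error message complains about.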
Oracle Express intermittently stops responding, requiring an OS reboot to fix Posted: 01 May 2013 12:32 PM PDT I've been having an issue with Oracle XE for the past 2 days and it's really been making me pull my hair out. There seems to be no rhyme nor reason to why or when this happens, but it happened 6 times yesterday and so far twice this morning. I'm developing a .NET app, and sometimes when I start the app, refresh the page, or do whatever, I get an exception that a connection could not be made to the database. When I then try to connect with sqlplus I just get: It then just sits there forever. No timeout, no error, no nothing. I can't connect with any other database program either; the whole server is just unresponsive. If I go into the Services manager, I see that the services are indeed running, so nothing has crashed: I've tried restarting The only thing that fixes this problem is a complete Windows reboot. However, this of course takes 5-10 minutes and is really starting to get old. There does not appear to be anything related to Oracle around this time in the Windows Event Viewer. I'm happy to add any other information that would help debug. Any ideas? Update: I did notice a few strange errors in Last 1,000 lines of alert_XE.log are here. |
Why do some DBAs avoid the use of null? [duplicate] Posted: 01 May 2013 11:11 AM PDT This question already has an answer here:
I've noticed that some DBAs avoid using NULL for things like integer values, instead preferring to use Is there any conventional reason for this? |
Challenge. Split database 500 records in two smaller datasets, randomly Posted: 01 May 2013 02:55 PM PDT I have a database with 500 records. I want to split these records 75%/25%, *randomly*, in order to use the different datasets for training and testing machine learning algorithms. Does anyone know how to do that using an SQL query? P.S. I use MySQL. In addition, I found this: select ... from ... where ... ORDER BY rand() LIMIT 150 — but this gives me only the first dataset. Thanks in advance |
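One MySQL sketch (table and key names are placeholders; 375 is 75% of 500): materialize a random 75% as the training set, then take the complement:

```sql
-- 75% training set, drawn at random
CREATE TABLE training AS
SELECT * FROM mydata
ORDER BY RAND()
LIMIT 375;

-- The remaining 25%, assuming `id` is a unique key
CREATE TABLE testing AS
SELECT * FROM mydata
WHERE id NOT IN (SELECT id FROM training);
```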
Postgres - Schema information within Trigger? Posted: 01 May 2013 11:00 AM PDT Background: (multi-tenant design; separate schema per customer, all table structures identical; data-warehouse / non-OLTP system here; Postgres 9.2.x). Reviewing patterns for the following problem: I have a matview that needs to be updated if an underlying condition is altered (in this case, an inclusion BOOLEAN). I was considering implementing a TRIGGER, on update, which would then either execute the matview creation UDF or possibly call 'NOTIFY' to be handled by some listening event, perhaps at a later point in time, as the immediate matview update isn't necessarily required. Workflow: UPDATE made to some underlying table. TRIGGER is executed. (Obtain schema information here.) EXECUTE matview creation -or- NOTIFY command for queue / processing at a later time. First question: am I on the right track here, or are there other, more efficient / effective patterns I should review? Given the nature of my multi-tenant design, is it possible to obtain the current schema of the underlying table from which the trigger event was fired, as I'll need that information to pass into my matview creation UDF? Thanks for any help! |
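On the second question: in a PL/pgSQL trigger function, the automatic variables TG_TABLE_SCHEMA and TG_TABLE_NAME identify the firing table, so one function can serve every tenant schema. A sketch (function, channel, and table names are hypothetical):

```sql
CREATE OR REPLACE FUNCTION queue_matview_refresh() RETURNS trigger AS $$
BEGIN
    -- Publish the tenant schema + table for a listener to process later
    PERFORM pg_notify('matview_refresh',
                      TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Created once per tenant schema
CREATE TRIGGER trg_queue_refresh
AFTER UPDATE ON some_table
FOR EACH ROW EXECUTE PROCEDURE queue_matview_refresh();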
Force SQL Server to run query conditions as written? Posted: 01 May 2013 08:11 PM PDT I'm using SQL Server 2008 R2 and I have this pseudo query (SP): The problem is that the query still takes a very long time to execute -- even if I execute the SP with As you noticed, the long, time-consuming query should be executed only if @LinkMode is null (which is not the case here; in my case @LinkMode = 2!). However, if I change it to: the SP does run fast. I've heard before that sometimes the optimizer can reorder the criteria. So I ask:
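SQL Server makes no guarantee about the evaluation order of WHERE conditions, and a single cached plan must cover every possible @LinkMode. Two common workarounds (a sketch with hypothetical table names): branch explicitly so each case gets its own plan, or ask for a recompile per execution:

```sql
IF @LinkMode IS NULL
BEGIN
    -- Expensive path, compiled with a plan suited to this branch only
    SELECT l.*
    FROM Links AS l
    JOIN BigTable AS b ON b.LinkId = l.Id;
END
ELSE
BEGIN
    SELECT * FROM Links WHERE LinkMode = @LinkMode;
END

-- Alternative: keep one statement but re-optimize for the current
-- parameter values at each execution:
--   SELECT ... WHERE (@LinkMode IS NULL AND <expensive condition>)
--                 OR (@LinkMode IS NOT NULL AND <cheap condition>)
--   OPTION (RECOMPILE);
```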
|
Disable trigger still fired Posted: 01 May 2013 01:05 PM PDT I've just run the following T-SQL to disable a trigger (with a deploy script wrapper around osql):
In SSMS I could see the trigger was disabled (downward-pointing red arrow). When I inserted some rows into the table, the trigger still fired (I have an audit table that logs when the trigger fires). I then enabled and disabled the trigger again, this time using the SSMS UI. When I inserted rows, the trigger didn't fire. What does the UI do differently from the T-SQL command? Did I need to issue a commit? |
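For comparison, the T-SQL form, plus a check of the state the server actually recorded (object names here are placeholders). If the osql wrapper connected to a different database, or the batch never committed, sys.triggers will show it:

```sql
DISABLE TRIGGER dbo.trg_Audit ON dbo.MyTable;

-- Verify what the server recorded: is_disabled = 1 means disabled
SELECT name, is_disabled
FROM sys.triggers
WHERE name = 'trg_Audit';

ENABLE TRIGGER dbo.trg_Audit ON dbo.MyTable;
```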
Is there a better way than the ones listed within to join data in an excel spreadsheet to a table in a SQL Server database? Posted: 01 May 2013 01:47 PM PDT Let's say I'm given a list of names and email addresses in an Excel spreadsheet. I am asked to find all customers in the database whose name and email are in that list. I currently know of two ways to achieve this... |
Is there a 'better' way to do this kind of thing (i.e. one that doesn't require me to place the data in the database before using it, and doesn't require me to create a monster query that has all the data in a 'where x in (data)' clause)? |
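A third option (a sketch; it requires the ACE OLE DB provider installed on the server and 'Ad Hoc Distributed Queries' enabled via sp_configure; the file path, sheet, and column names are placeholders) is to query the spreadsheet in place with OPENROWSET:

```sql
SELECT c.*
FROM Customers AS c
JOIN OPENROWSET(
        'Microsoft.ACE.OLEDB.12.0',
        'Excel 12.0;Database=C:\data\list.xlsx;HDR=YES',
        'SELECT Name, Email FROM [Sheet1$]'
     ) AS x
  ON x.Name  = c.Name
 AND x.Email = c.Email;
```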
SQL Server: Sync Indexes between two tables within Same database Posted: 01 May 2013 01:26 PM PDT I need to automate syncing only the indexes between two tables (Primary and Stage_Table) within the same database. I tried using the SSIS SQL Server Objects task, but it looks like it works only when syncing between two different instances. The indexes on the primary table keep changing frequently, and I need the empty Stage_Table to have all the updated indexes when I do the partition switch. Thanks, |
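As a starting point for automation (a sketch; table names taken from the question, comparison by index name only), the catalog views can report which indexes exist on the primary table but not on the stage table:

```sql
-- Indexes present on Primary_Table but missing from Stage_Table
SELECT i.name, i.type_desc
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.Primary_Table')
  AND i.name IS NOT NULL          -- skip the heap entry
  AND NOT EXISTS (
        SELECT 1
        FROM sys.indexes AS s
        WHERE s.object_id = OBJECT_ID('dbo.Stage_Table')
          AND s.name = i.name
  );
```

A full solution also has to compare key columns, included columns, and filters, since ALTER TABLE ... SWITCH requires the two tables to match exactly.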
Teradata performance statistics Posted: 01 May 2013 10:37 AM PDT I need some statistics regarding the performance of Teradata — something like: with given hardware, given nodes, and given AMPs, a table with thousands of rows can be read in 't' seconds. I need this information to analyze the feasibility of Teradata. |
MySQL Read Speed and Partitioning on Separate Drives Posted: 01 May 2013 10:41 AM PDT Introduction: A simple project, but with a heavy read load (~90% reads) over a table (~20 million rows, ever-growing) where I have to boost performance and ensure low response times for read queries. The table was partitioned on a fixed field (21 partitions, avg. 1 million rows/partition) like this: The root of the queries is always " This table can only be MyISAM, so I have installed MySQL Server 5.6.11
The setup consists of a Master/Slave topology with the same HW (I can distribute reads across the master and the slave). Possible solution: I have three choices for the hardware architecture (6 disk slots, 3.5''): Solution #1
Solution #2
Solution #3 (2 disks wasted for tmp!)
The first choice can help me spread the I/O over two disks using the: The second choice improves the fault-tolerance level, but consider that I have a Master/Slave configuration so, at any moment, I can promote the slave to master. What kind of architecture do you advise? Are there problems that at the moment I do not see? Any other kind of suggestion would be greatly appreciated. :) Thanks in advance crx |
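On the first choice: for MyISAM, individual partitions can be placed on separate disks at CREATE TABLE time (a sketch with hypothetical columns; the paths are placeholders and the server account needs write access to them):

```sql
CREATE TABLE readings (
    sensor_id INT NOT NULL,
    reading   DOUBLE
) ENGINE = MyISAM
PARTITION BY LIST (sensor_id) (
    PARTITION p0 VALUES IN (0)
        DATA DIRECTORY  = '/disk1/mysql'
        INDEX DIRECTORY = '/disk1/mysql',
    PARTITION p1 VALUES IN (1)
        DATA DIRECTORY  = '/disk2/mysql'
        INDEX DIRECTORY = '/disk2/mysql'
);
```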
Is it ever a good idea to denormalize for integrity? Posted: 01 May 2013 01:18 PM PDT I'm using Postgres 9.2 to develop a quiz app, in which I present the user with a series of problems and record their answers. These problems can take a variety of forms - they might be multiple choice (What's 2 + 2? A: 2. B: 3. C: 4), or they might require the user to calculate an answer of their own, in which case I need to constrain their input to be something like '440' or '1/2' or '.333'. Some of the problems might prompt the user to type in an essay. And, of course, I may need to add more types of problems later. The tables I'm envisioning, in a simplified form, are something like this: So, the problems table would have a variety of constraints to ensure that:
This is clean enough, and constrains the problems table just fine. I might use an enum instead of a lookup table for problem_type, since all of its possible values will already be baked into the schema anyway. My difficulty is: how do I constrain the questions table? Its constraints will be very similar (I don't want an answer_letter supplied for a question that references an essay problem, and so on). I can think of a few options:
I feel like there's a problem with my approach to modeling that's leading me to these difficulties, especially since I'm running into a couple of cases very similar to this elsewhere in my schema (this one was just the easiest to describe). Maybe it's just a domain that's difficult to model, but I feel like there must be a better way, and I'm just not normalizing correctly. Help? Thanks! |
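One normalization-friendly pattern for this kind of subtype constraint (a sketch; column names are placeholders, not the poster's actual schema): carry problem_type into a composite key so the questions table can reference it and enforce per-type rules locally:

```sql
CREATE TABLE problems (
    problem_id   integer PRIMARY KEY,
    problem_type text    NOT NULL,
    UNIQUE (problem_id, problem_type)   -- target for the composite FK below
);

CREATE TABLE questions (
    question_id   serial PRIMARY KEY,
    problem_id    integer NOT NULL,
    problem_type  text    NOT NULL,
    answer_letter char(1),
    essay_text    text,
    FOREIGN KEY (problem_id, problem_type)
        REFERENCES problems (problem_id, problem_type),
    -- Per-type rules become simple CHECKs on this table
    CHECK (problem_type <> 'essay' OR answer_letter IS NULL),
    CHECK (problem_type <> 'multiple_choice' OR essay_text IS NULL)
);
```

The duplicated problem_type column is redundant by design, but the composite foreign key keeps it consistent with the parent row — the integrity-preserving form of the denormalization the question asks about.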
Bitmask Flags with Lookup Tables Clarification Posted: 01 May 2013 07:05 PM PDT I've received a dataset from an outside source which contains several bitmask fields as varchars, ranging in length from 3 to 21 values. I need to be able to run SELECT queries on these fields using AND or OR logic. Using a calculated field where I convert the bits into an integer value, I can easily find rows that match an AND query with a simple WHERE rowvalue = requestvalue, but the OR logic requires using bitwise & to find matching records. Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel there would be a huge performance hit when doing bitwise & operations to filter my SELECT results. I came across this answer while searching and it looked like it might fit my needs, but I need some clarification on how it is implemented. Is it as simple as creating a lookup table that has all possible search conditions? Example for 3 bits using (a & b) (Edit: wrong bitwise op) The author mentions that it's counter-intuitive initially, but I can't help feeling I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows. Any clarification on the answer I linked above, or other suggestions that would preserve the existing database, are appreciated. Edit: A more concrete example using small data.
Any number of flags, from all to none, can be flipped, and results must be filtered where the selection matches all (using exact value comparison) or at least one (using bitwise &). Adding a single calculated column for each bitmask is OK, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data, is why I'm trying to find alternative solutions. |
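To make the two query modes concrete (a sketch; the `items` table and `flags` column are hypothetical, with the mask already stored as an integer):

```sql
-- AND semantics: every requested flag set (mask 5 = bits 1 and 4)
SELECT * FROM items WHERE flags = 5;        -- exact-match variant
SELECT * FROM items WHERE flags & 5 = 5;    -- superset variant

-- OR semantics: at least one requested flag set
SELECT * FROM items WHERE flags & 5 <> 0;

-- Lookup-table idea: precompute which stored values satisfy which
-- query masks, so the filter becomes an indexable join
CREATE TABLE mask_match (
    query_mask   INT NOT NULL,
    stored_value INT NOT NULL,
    PRIMARY KEY (query_mask, stored_value)
);
-- Populated once with every pair where query_mask & stored_value <> 0

SELECT i.*
FROM items AS i
JOIN mask_match AS m ON m.stored_value = i.flags
WHERE m.query_mask = 5;
```

The lookup table grows with the number of distinct stored values times the number of distinct query masks, not with the row count of items — which is why the approach stays tractable only for short masks.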
SQL Server 2012 catalog.executions to sysjobhistory - any way to join them? Posted: 01 May 2013 01:05 PM PDT I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job. |
BIT columns all "1" after a phpMyAdmin export/import Posted: 01 May 2013 03:05 PM PDT I have to import data into a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports. I exported my database from my local machine using phpMyAdmin, then imported the script file on my host. All the data in columns of type BIT was changed to '1'. The hosting database version is 5.5.29. Is this a phpMyAdmin problem or a MySQL version problem? How can I fix it? |
SQL Server replication subscriptions marked as inactive Posted: 01 May 2013 10:05 AM PDT Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever? It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time. Note, I'm not talking about the subscriptions being marked as expired...just as inactive. Thank you. |
Error trying to run SSIS Packages in SQL Server 2008 SQL Server Agent Posted: 01 May 2013 02:48 PM PDT I am trying to run a number of SQL Server 2008 stored procedures that are wrapped in SSIS packages consisting mainly of Execute SQL Tasks. They run fine in Visual Studio 2008 after deploying to a folder on the SQL Server server. The problem came when I created a job in SQL Server Agent and tried to run the job. That is when I get the error: Executing the query "EXEC ? = dbo.usp_Clear_All_JDAWG_Tables" failed with the following error: "The EXECUTE permission was denied on the object 'usp_Clear_All_JDAWG_Tables', database 'JDAWGD', schema 'dbo'.". Possible failure reasons: problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error DTExec: The package execution returned DTSER_FAILURE (1). 'usp_Clear_All_JDAWG_Tables' is the first of many stored procedures this job is trying to run. The stored procs themselves do not use parameters, but in the SQL Task I have commands like this to run the stored proc: EXEC ? = dbo.usp_Load_JDAWG_Tables_From_Persons_All_Agencies Is there a better/different way to run a stored procedure in the SQL Task? As a side note, I was able to create and successfully run an Agent job that directly executed the stored procedures that the packages run. Thanks for any assistance. |
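The error itself is a permissions failure, so independent of how the package calls the proc, the account the Agent job runs under needs EXECUTE rights (a sketch; the login name is a placeholder):

```sql
USE JDAWGD;

-- Per procedure
GRANT EXECUTE ON dbo.usp_Clear_All_JDAWG_Tables
    TO [DOMAIN\SqlAgentServiceAccount];

-- Or for everything in the schema at once
GRANT EXECUTE ON SCHEMA::dbo TO [DOMAIN\SqlAgentServiceAccount];
```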
Login failure when running a SSIS package from a SQL Server job Posted: 01 May 2013 10:05 AM PDT I have a SSIS package that migrates data from a Sybase database to SQL Server. The connection to the source database is made through an ODBC connection. If I run the package directly on my machine, it runs successfully; however, if I run it through a SQL Server job I have created on my local machine (where the destination database is), I get a login failure error: I know that the job runs as the 'SQL Server Agent Service Account', that is in fact I read some posts on different sites talking about setting up a proxy, etc., but I don't know if that is quite relevant to my issue. The connection to the source database was set up via ODBC, but as the step was failing, I added the user ID and password to it to force SQL Server to use the user ID I want rather than the SQL Server Agent user, but it didn't help. Do you know how we can fix this login issue? Thank you. |
error in Execute SQL Task with parameter Posted: 01 May 2013 10:06 AM PDT I have an Execute SQL Task object in the error handlers section of an SSIS package. What it does is insert a record into a table using an OLE DB connection. One value in the insert statement should be sourced from a user variable. When I try to parse the query I get the following error; I searched the internet a lot and tried different things, but still get the error message: "The query failed to parse. Parameter Information cannot be derived from SQL Statements. Set parameter information before preparing command." What I have set as the SQLStatement is: What I have in the Parameter Mapping section of the task: The SQLSourceType is "Direct input" and, as there is no result to this query, the ResultSet has been set to "None". The ErrorDescription column in the target table is varchar(max). Any idea how to fix this issue? Thanks in advance. |
Microsoft Visual Studio is Busy Posted: 01 May 2013 10:08 AM PDT I am working in Visual Studio 2008 on an SSIS package. It is really frustrating that I get the "Microsoft Visual Studio is Busy" message quite often. It happens almost every time I try to open a data source, destination, etc. From what I can see, it seems VS tries to save items. Is there any way to avoid this auto-save? Generally, how can we get rid of this message and these interruptions? |
If I SET ARRAYSIZE 1 in SQL*Plus, the records are still fetched in pairs. Why? Posted: 01 May 2013 06:39 PM PDT I thought about writing a simple And then I'd like to In order to fetch the rows one by one, I Is there an explanation for this and how can I get the records as soon as one is piped? |
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange