[how to] Can MySQL Cluster AQL (Push-Down Join) be triggered with a sharded table at the same time? |
- Can MySQL Cluster AQL (Push-Down Join) be triggered with a sharded table at the same time?
- How to create combinations of records (order does not matter, no repetition allowed) in MySQL tables
- How to partition an existing non-partitioned table
- Conversions between any data type and the binary data types are not guaranteed to be the same between versions of SQL Server
- Updating a big replicated Dimension (SQL Server PDW)
- having trouble using dbms_scheduler
- Are passwordless logins like ##MS_SQLResourceSigningCertificate## , ##MS_AgentSigningCertificate## a vulnerability?
- PostgreSQL postmaster will not start after configuring for SymmetricDS
- Create a new database in SQL Server 2008 R2, but it is not Blank
- Mathematical method for spotting irregularity in numbers flow
- SQL Server insert randomly takes 10+ seconds to complete?
- Defining a process for troubleshooting performance issues [on hold]
- Is there a recommended way to update many Postgres schemas
- ORA-01031 while creating a view as DBA
- Percona MySQL server slow queries
- Possible to run multiple Reporting Services (SSRS) instances on different machines with databases on same machine?
- Oracle won't start
- MariaDB Galera Cluster Replication error, no state received?
- Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro)
- optimizing MySQL for traffic analytics system
- LDAP in SQL Server 2012
- Primary replica set server goes secondary after secondary fails
- MySQL backup InnoDB
- Strange characters in mysqlbinlog output
- Connecting to a SQL Server database from a Flash program
- Oracle Express edition on Ubuntu - control file missing
- Is using multiple foreign keys separated by commas wrong, and if so, why?
Can MySQL Cluster AQL (Push-Down Join) be triggered with a sharded table at the same time? Posted: 12 Aug 2013 06:45 PM PDT From Jonas's slides, I learned that the "Push-Down Join" feature can make joins up to 70x faster. The most important factor is eliminating the round trips between the API node and the data nodes: when all the data lives on the same node, no inter-node communication is necessary. But what if a table is partitioned across a group of data nodes? How does AQL eliminate the data shipping then? Can push-down joins work in a sharded scenario? |
How to create combinations of records (order does not matter, no repetition allowed) in MySQL tables Posted: 12 Aug 2013 08:13 PM PDT I've got a table with hundreds of rows; each row is a recipe with nutritional information, for example:

recipe_table: id | calories | protein | carbs | fat
recipe1, 100, 20g, 10g, 2g
recipe2, 110, 10g, 12g, 12g
recipe3, 240, 20g, 1g, 23g
...

I needed to create a new table (recipe_index) that would show every possible combination of recipes from recipe_table as a set of 3, so it would look something like:

recipe_index: id1 | id2 | id3 | calories | protein | carbs | fat
recipe1, recipe2, recipe3, 450, 50g, 23g, 37g
...

Basically it allows me to query recipe_index and ask "which 3-recipe combinations come to a total between 440 and 460 calories?" My current code for doing this works for 3 meals, but I end up with about 450,000 records in recipe_index, and I need to do the same for 4, 5 and 6 meals as well, so I'm looking at billions of records by the end of this. Is there a more efficient way of doing this? Perhaps I need to look into partitioning a table for each range? My current SQL code: |
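A common way to tackle the combinations question above is to join the table to itself with a strict ordering on the id, so each unordered triple appears exactly once. A minimal sketch, assuming the column names implied by the question (id, calories, protein, carbs, fat) and numeric nutrient columns:

```sql
-- Each combination appears exactly once because we require id1 < id2 < id3
-- (order does not matter, no repetition).
INSERT INTO recipe_index (id1, id2, id3, calories, protein, carbs, fat)
SELECT r1.id, r2.id, r3.id,
       r1.calories + r2.calories + r3.calories,
       r1.protein  + r2.protein  + r3.protein,
       r1.carbs    + r2.carbs    + r3.carbs,
       r1.fat      + r2.fat      + r3.fat
FROM recipe_table r1
JOIN recipe_table r2 ON r2.id > r1.id
JOIN recipe_table r3 ON r3.id > r2.id;
```

An alternative to materializing billions of rows is to run the same self-join on demand with a filter such as `WHERE r1.calories + r2.calories + r3.calories BETWEEN 440 AND 460`, letting an index on the calories column prune the search instead of storing every combination.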
How to partition an existing non-partitioned table Posted: 12 Aug 2013 08:01 PM PDT I have an existing table with data: I need to change this table to be partitioned like this: How can I achieve this without dropping and recreating the table? |
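The question doesn't name the DBMS, but if this is SQL Server, one common approach is to rebuild the table's clustered index onto a partition scheme, which moves the data without dropping the table. A minimal sketch with placeholder names (pf_dates, ps_dates, dbo.MyTable, CreatedAt are all assumptions):

```sql
-- 1) Define boundary values for the partitions.
CREATE PARTITION FUNCTION pf_dates (datetime)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

-- 2) Map all partitions to a filegroup (one filegroup per partition also works).
CREATE PARTITION SCHEME ps_dates
    AS PARTITION pf_dates ALL TO ([PRIMARY]);

-- 3) Rebuild the clustered index on the partition scheme; this physically
-- redistributes the rows. Assumes the table already has a clustered index
-- with this name; DROP_EXISTING rebuilds it in place.
CREATE CLUSTERED INDEX cix_MyTable_CreatedAt
    ON dbo.MyTable (CreatedAt)
    WITH (DROP_EXISTING = ON)
    ON ps_dates (CreatedAt);
```

Note this rebuild needs sort space and locks the table while it runs, so on a large table it is usually scheduled in a maintenance window.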
Conversions between any data type and the binary data types are not guaranteed to be the same between versions of SQL Server Posted: 12 Aug 2013 04:06 PM PDT I just noticed that the SQL Server 2012 MSDN documentation for the binary and varbinary types mentions the following at the end of the article:
I noticed that this does not exist in the previous versions of the article for 2005, 2008, and 2008 R2. I searched around online but didn't find any exact explanation that satisfied my question of "WHY?" (which is the question I have here). The project I am working on needs to store data as varbinary(max), but based on this little note, I am afraid that if I need to move the binary data and then convert it on another SQL Server version, it will not match the original data converted to binary, so answering this question in more depth would be a great help. Thanks! |
Updating a big replicated Dimension (SQL Server PDW) Posted: 12 Aug 2013 04:10 PM PDT We use a SQL Server PDW appliance for our data warehouse. One of the tables in our warehouse is a replicated table with about 20 million rows. As part of our ETL process we need to expire old records from this dimension; however, we are seeing that updating a handful of records (<100) takes over 1 hour to complete. This is what I would like to improve if I can. Naturally, one option I thought about was changing this dimension from replicated to distributed. My testing shows that it would fix the long-running ETL step (from 1.5 hours down to 30 seconds), but all the joins against the distributed version of this dimension would be affected, since the joins are almost never based on the same distribution column. When I look at the execution plans of some of these queries, I usually see either a ShuffleMove or a BroadcastMove operation. So my question to the PDW gurus here is: is there anything else that can be done to improve the performance of updating records in the replicated version of this dimension? Again, moving to a distributed table doesn't seem to be the best solution, since it would affect hundreds of already-written SQL queries and reports developed by other people. |
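One workaround sometimes used on PDW, sketched here with entirely placeholder names (DimCustomer, CustomerKey, IsCurrent, #expired_keys), is to avoid the slow in-place UPDATE on the replicated table: rebuild the dimension with CTAS, which runs as a parallel bulk operation, and swap it in with metadata renames.

```sql
-- Hypothetical sketch: expire rows via CTAS instead of UPDATE.
-- #expired_keys is assumed to hold the keys of the <100 records to expire.
CREATE TABLE dbo.DimCustomer_new
WITH (DISTRIBUTION = REPLICATE)
AS
SELECT c.CustomerKey,
       c.Name,
       CASE WHEN e.CustomerKey IS NOT NULL THEN 'N'
            ELSE c.IsCurrent END AS IsCurrent
FROM dbo.DimCustomer c
LEFT JOIN #expired_keys e ON e.CustomerKey = c.CustomerKey;

-- Metadata-only swap, then drop the old copy.
RENAME OBJECT dbo.DimCustomer TO DimCustomer_old;
RENAME OBJECT dbo.DimCustomer_new TO DimCustomer;
DROP TABLE dbo.DimCustomer_old;
```

Whether this beats the hour-long UPDATE depends on how the appliance handles the full-table rewrite of 20 million replicated rows, so it is worth benchmarking against the existing step first.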
having trouble using dbms_scheduler Posted: 12 Aug 2013 01:40 PM PDT I am using Oracle 11g and have never used dbms_scheduler jobs. I have a query that I want to run every day at 5. BEGIN I put my query in the job_action. Is that how you do it? |
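For reference, a minimal DBMS_SCHEDULER sketch that runs a PL/SQL block daily. The job name, the procedure it calls, and the 17:00 hour are all assumptions (the question's "at 5" could mean 05:00); and yes, the statement to run goes in job_action, though wrapping it in a stored procedure that job_action calls is usually easier to maintain:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'DAILY_UPDATE_JOB',            -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN my_update_proc; END;',  -- hypothetical procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=17; BYMINUTE=0; BYSECOND=0',
    enabled         => TRUE);
END;
/
```

If the job doesn't fire, the views DBA_SCHEDULER_JOBS and DBA_SCHEDULER_JOB_RUN_DETAILS show its state and the errors from past runs.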
Are passwordless logins like ##MS_SQLResourceSigningCertificate##, ##MS_AgentSigningCertificate## a vulnerability? Posted: 12 Aug 2013 11:39 AM PDT I'm new to MS SQL Server. Colleagues working in IT security have run a scan showing some DB users with null passwords. Some of them are ( # included ) I suspect they are not a security threat but, since they don't use Windows Authentication either, I don't know how to support that assumption. What are these logins for? |
PostgreSQL postmaster will not start after configuring for SymmetricDS Posted: 12 Aug 2013 11:45 AM PDT I'm trying to configure SymmetricDS for use with PostgreSQL, a database platform widely supported by the former. When running through the quick-start tutorial (which leads you to some pre-setup documentation), I am instructed to append the following line to The next step of the tutorial requires the postmaster to be running (at least that's what I glean from the stack trace...), so I but this fails. Without this single line added (or if it's commented out), PostgreSQL starts as expected. I must be doing something wrong, so what is it?
|
Create a new database in SQL Server 2008 R2, but it is not Blank Posted: 12 Aug 2013 01:06 PM PDT When I try to create a new database in SQL Server 2008 R2, it is created completely, but it is not a blank, empty database: there are many tables that come from another database. I also tried specifying a different name for its Would you please help me figure out where the problem is? Also, is there any command that would reset my database to a blank one? |
Mathematical method for spotting irregularity in numbers flow Posted: 12 Aug 2013 01:08 PM PDT I am trying to build an engine which is going to predict orders for items inside a shop.
In the next three graphs I'll show three possible situations I am hunting for. 1) The ideal situation: I sell out of my items just before my next order arrives. 2) A bad situation: I sell out faster than my next order arrives. I want to react on the 3rd (orange) day, so I can warn the user that his items are selling fast.
Here is sample data (MSSQL). I want to construct a query which will be run every day and try to find these situations. |
SQL Server insert randomly takes 10+ seconds to complete? Posted: 12 Aug 2013 10:48 AM PDT I have an insert statement that is being generated by the .NET Entity Framework. In most cases, this particular insert will execute in 0 ms according to SQL Server Profiler. One out of every 30 or so inserts will jump to as high as 12 seconds in duration, causing the .NET client on the other end to show as "Not Responding" while it waits. Server load should not be an issue, as our server is very, very lightly loaded. Here is the table the insert is being performed against: And the query as seen by SQL Server Profiler (actual example): I will also be glad to post the query plan, but I'm not sure in what format I should post it (I'm mainly a Stack Overflow guy). EDIT: Here is the execution plan as a screenshot from SSMS. Any ideas on how to begin to track this down? |
Defining a process for troubleshooting performance issues [on hold] Posted: 12 Aug 2013 11:50 AM PDT The issue of poor database performance is one which presents itself from time to time and there are many techniques which can be employed to examine the root cause of slowness. Does anyone have a clearly defined process which they use to troubleshoot performance, like a tried and tested 'works for me' method for investigating issues? For me I might employ the following (not necessarily in this order)
What would others add to or delete from the above list? A repeatable process for troubleshooting all performance issues might be difficult to put together, but if it were possible it would certainly be a useful starting point :) |
Is there a recommended way to update many Postgres schemas Posted: 12 Aug 2013 11:46 AM PDT I have a database in Postgres which has 15 schemas; the schemas are identical, with the same tables, views and functions. The system uses one schema at a time, as if they were separate databases, so when I install an update I need to update all 15 schemas in the database. My problem comes when we update the database with changes to functions and triggers, because I need to modify, in each function, which schema it uses; example: Is there a more practical way to update the 15 schemas without having to modify the script to run it in each schema? |
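One way to avoid editing the script per schema is to keep the function bodies schema-unqualified and loop over the schemas, setting search_path before each re-creation so the unqualified names resolve per schema. A sketch with assumed schema names and a hypothetical trigger function:

```sql
-- Apply the same CREATE OR REPLACE FUNCTION to every schema by looping
-- and switching search_path; the function is created in, and its
-- unqualified table references resolve against, the current schema.
DO $$
DECLARE
  s text;
BEGIN
  FOREACH s IN ARRAY ARRAY['tenant1', 'tenant2', 'tenant3']  -- list all 15
  LOOP
    EXECUTE format('SET search_path TO %I', s);
    EXECUTE $f$
      CREATE OR REPLACE FUNCTION my_trigger_fn() RETURNS trigger AS $body$
      BEGIN
        UPDATE audit_log SET updated = now();  -- hypothetical body
        RETURN NEW;
      END;
      $body$ LANGUAGE plpgsql;
    $f$;
  END LOOP;
END $$;
```

The schema list could also be pulled from information_schema.schemata instead of being hard-coded, so new tenant schemas are picked up automatically.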
ORA-01031 while creating a view as DBA Posted: 12 Aug 2013 12:25 PM PDT I'm trying to create a view under my DBA schema on Oracle XE. I already did this on the production database, and it worked with a user with far fewer privileges than the DBA role. But now I always get And this is the output I get: I even tried it with the XE SYS account, but with the same result. Am I missing something important? |
Percona MySQL server slow queries Posted: 12 Aug 2013 09:04 PM PDT Recently I bought a new database server and installed Percona MySQL Server 5.5.32-31 and transferred my |
Possible to run multiple Reporting Services (SSRS) instances on different machines with databases on same machine? Posted: 12 Aug 2013 11:18 AM PDT With SQL Server 2012, can SSRS instances be set up on two different machines, with both instances using the same machine for their SQL Server databases? Machine 1
Machine 2
There's this document describing installing multiple instances in 2005, but on the same machine. Can you point me to a better resource for our design? Is this still supported in 2012? How are the SSRS databases (ReportServer, ReportServerTempDB) created for the 2nd instance? |
Oracle won't start Posted: 12 Aug 2013 12:24 PM PDT While it was working fine, I had to stop the server once. When trying to start Oracle using a script we have, I got the following error: Also, when trying to start SQL*Plus manually AS SYSDBA, I get: Using SQL*Plus with other users, I get: Any help appreciated ... |
MariaDB Galera Cluster Replication error, no state received? Posted: 12 Aug 2013 11:29 AM PDT I have a very minimal, fresh-out-of-the-box setup for MariaDB with Galera clustering. The master node of my cluster is able to run and reports that there is only one node attached to the cluster: the master itself. When I try attaching the second node to the cluster, I receive a "no state received" error and the process fails. The configuration on the master looks like this: This is in /etc/my.cnf.d/zabbix_cluster.cnf. The slave node's looks similar, except it has the name of the master node in it. When I run I'm not sure why this is happening or what it means. I can see it connect, but something is not transferring from the master to the slave. What should I look for/at, or do? I also have made sure that |
Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro) Posted: 12 Aug 2013 02:00 PM PDT I'm trying to load a very normal .csv file (created in Excel 2011 for Mac) into SequelPro (using MySQL) on my Mac, and I've recently started getting this error consistently. Can anybody tell me what it means and how to fix it? It is stopping me before I'm able to import the table. Thanks for the help! |
optimizing MySQL for traffic analytics system Posted: 12 Aug 2013 08:01 PM PDT Background: I've developed a URL shortener system like Bitly with the same features, so the system also tracks clicker info and presents it as analytics graphs to the person who shortened the link. Currently I'm using MySQL and have a table to store click info with this schema: and for now, just the The system should present click analytics for the time period the user wants, for example the past hour, the past 24 hours, the past month, ... For example, to generate graphs for the past month, I do the following queries: issues:
questions: 1- How can I improve and optimize the structure so that the analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps), with less CPU and RAM usage? Should I make an index on the fields 2- Is MySQL suitable for this application? Assume that at maximum my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example? 3- If MySQL is OK, is my database design and table structure proper and well designed for my application's needs, or do you have better recommendations and suggestions? UPDATE: I made an index on column I think making an index on these columns would not solve my problem; my idea is one of these: 1- If using MySQL, maybe generating statistics for high-traffic links with background processing is better than calculating them live at the user's request. 2- Using some caching solution like memcached to help MySQL with high-traffic links. 3- Using a NoSQL database such as MongoDB and solutions like map-reduce, which I am only vaguely familiar with and have never used. What do you think? |
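Ideas 1 and 2 above can be combined in plain MySQL. A sketch, assuming a clicks table with link_id and clicked_at columns (names are assumptions, the question's schema was not shown): a composite index for live range scans, plus a pre-aggregated daily summary maintained by a background job so graphs for hot links read a few hundred rows instead of millions.

```sql
-- 1) Composite index: per-link, per-period queries become index range scans.
ALTER TABLE clicks ADD INDEX idx_link_time (link_id, clicked_at);

-- 2) Daily pre-aggregation table, refreshed by a scheduled background job.
CREATE TABLE click_stats_daily (
  link_id BIGINT UNSIGNED NOT NULL,
  day     DATE            NOT NULL,
  clicks  INT UNSIGNED    NOT NULL DEFAULT 0,
  PRIMARY KEY (link_id, day)
);

-- Roll up yesterday's clicks; rerunning it is safe because of the upsert.
INSERT INTO click_stats_daily (link_id, day, clicks)
SELECT link_id, DATE(clicked_at), COUNT(*)
FROM clicks
WHERE clicked_at >= CURDATE() - INTERVAL 1 DAY
  AND clicked_at <  CURDATE()
GROUP BY link_id, DATE(clicked_at)
ON DUPLICATE KEY UPDATE clicks = VALUES(clicks);
```

The monthly graph then becomes a read of at most ~31 summary rows per link, which stays fast regardless of how many raw clicks the link has.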
LDAP in SQL Server 2012 Posted: 12 Aug 2013 06:01 PM PDT In our current system we query LDAP using SQL Server 2008 R2, but we are moving to SQL Server 2012, where we have hit a snag. According to http://msdn.microsoft.com/en-us/library/cc707782.aspx one of the discontinued features in SQL Server 2012 is the AD helper service. Does anyone know of a way to query AD from SQL Server 2012? |
Primary replica set server goes secondary after secondary fails Posted: 12 Aug 2013 05:01 PM PDT I have a 2-server replica set where, after the secondary fails, the primary goes into secondary mode while the secondary is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I'm getting errors trying to use the collection: Sometimes if I restart the mongod instances, the server rs2-1 is primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary): Is there an easy way to make the primary stay primary after the secondary fails? Am I doing something wrong? Thanks in advance! |
MySQL backup InnoDB Posted: 12 Aug 2013 01:00 PM PDT I have a VoIP server running 24x7. At off-peak hours at least 150+ users are connected. My server runs MySQL with the InnoDB engine on the Windows 2008 platform. I would like to take a full database backup at least twice a day without shutting down my service. According to Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good (read here if you are interested). As I'm not a DBA, I would like to know which would be the best solution for taking a database backup in my scenario. Thanks, |
Strange characters in mysqlbinlog output Posted: 12 Aug 2013 11:00 AM PDT Has anyone experienced this? Data replicates fine but when output in mysqlbinlog there are hidden characters that break the input?
Thanks! Julie |
Connecting to a SQL Server database from a Flash program Posted: 12 Aug 2013 12:00 PM PDT I currently have the ability to use Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7. If I link my database with Adobe Flash Builder, are there any additional steps I must take to make the database live, or, as long as my computer is running, will this database be accessible from any device that uses it? In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it? |
Oracle Express edition on Ubuntu - control file missing Posted: 12 Aug 2013 07:01 PM PDT I have installed Oracle Express Edition on Ubuntu as mentioned here. I am facing issues when I try to create a sample table. Started Oracle Started sqlplus Executed the CREATE command After some research on the web, I tried to shut down and restart Oracle: Shutdown command Started the Oracle instance I realized that the control file is missing at So I tried to create the control file as follows: Tried to create the sample table again So I tried to issue the following command What should be done next? I am clueless, as I am not a database guy. Note: Output of |
Is using multiple foreign keys separated by commas wrong, and if so, why? Posted: 12 Aug 2013 04:11 PM PDT There are two tables: So the proper way should be to make a table called However, our outsourcing team stored the multiple categories in the I feel that what they did is wrong, but I don't know how to clearly explain why it's not right. How should I explain to them that this is wrong? Or maybe I'm the one who's wrong and this is acceptable? |
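The usual argument against a comma-separated list is concrete: the database cannot enforce a foreign key into the middle of a string, and every lookup becomes a string-parsing table scan. A sketch of the normalized alternative, with assumed table and column names (item, category, id), since the question's actual tables were not shown:

```sql
-- Junction table: one row per (item, category) pair instead of a
-- comma-separated list. The database now enforces that every referenced
-- category actually exists, and duplicates are impossible.
CREATE TABLE item_category (
  item_id     INT NOT NULL,
  category_id INT NOT NULL,
  PRIMARY KEY (item_id, category_id),
  FOREIGN KEY (item_id)     REFERENCES item (id),
  FOREIGN KEY (category_id) REFERENCES category (id)
);

-- "All items in category 7" is now an indexed join rather than a
-- FIND_IN_SET() scan over every row:
SELECT i.*
FROM item i
JOIN item_category ic ON ic.item_id = i.id
WHERE ic.category_id = 7;
```

With the comma-separated design, none of this is available: deleting a category leaves dangling references, and the optimizer cannot use an index on a substring of a text column.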
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.