Friday, March 22, 2013

[how to] Special syntax sth.name() in the CREATE FUNCTION statement, what does it mean?

Special syntax sth.name() in the CREATE FUNCTION statement, what does it mean?

Posted: 22 Mar 2013 08:26 PM PDT

In this SO question about stored procedures in plpgsql, the stored procedure is named like sth.name(). I don't know what the prefix sth means.

For example:

create or replace function something.function_name()
returns setof record as
$$
-- code
$$ language plpgsql;

Looking in this book, in "Part IV: Programming with PostgreSQL", I have found no mention of this kind of declaration with a function name in two parts.

In the PostgreSQL documentation, in the CREATE FUNCTION section, the only similar thing is where they discuss argtype or rettype, which can be written in the form table_name.column_name%TYPE. But that is not related to the name of the function.

So, what is this syntax related to?
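For what it's worth, a name written as something.function_name is simply a schema-qualified name: the function lives in the schema something rather than in public. A minimal sketch, assuming a schema named something (purely illustrative):

-- the prefix is a schema name
CREATE SCHEMA something;

CREATE OR REPLACE FUNCTION something.function_name()
RETURNS setof record AS
$$
BEGIN
    -- code
END;
$$ LANGUAGE plpgsql;

-- the function is then called with the same qualified name; because it
-- returns setof record, the caller supplies a column definition list:
-- SELECT * FROM something.function_name() AS t(a int, b text);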

Linux 32b MySQL periodically crashing with signal 11 and OOM errors

Posted: 22 Mar 2013 08:33 PM PDT

I have a MySQL DB with mostly InnoDB tables that, based on what's happening, I'm fairly certain isn't tuned properly, but I don't have enough know-how to pinpoint what to change. The server is an 8 GB 32-bit Linux system with the following in my.cnf:

[mysql]
port                           = 3306
socket                         = /var/run/mysqld/mysqld.sock

[mysqld]
user                           = mysql
default_storage_engine         = InnoDB
socket                         = /var/run/mysqld/mysqld.sock
pid_file                       = /var/run/mysqld/mysqld.pid
key_buffer_size                = 64M
myisam_recover                 = FORCE,BACKUP
max_allowed_packet             = 16M
max_connect_errors             = 1000000
datadir                        = /var/lib/mysql/
tmpdir                         = /tmp
tmp_table_size                 = 64M
max_heap_table_size            = 64M
query_cache_type               = 0
query_cache_size               = 0
max_connections                = 200
thread_cache_size              = 50
open_files_limit               = 65535
table_definition_cache         = 8192
table_open_cache               = 8192
innodb_flush_method            = O_DIRECT
innodb_log_files_in_group      = 2
innodb_log_file_size           = 128M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 2G
log_error                      = /var/log/mysql/mysql-error.log
log_queries_not_using_indexes  = 0
slow_query_log_file            = /var/log/mysql/mysql-slow.log
slow_query_log                 = 1
long_query_time                = 2
general_log                    = 0
general_log_file               = /var/log/mysql/general.log

[isamchk]
key_buffer                     = 16M

[mysqldump]
quick
quote-names
max_allowed_packet             = 16M

Currently, here are the non-zero global status stats:

Aborted_clients 28  Aborted_connects    667  Bytes_received  283596894  Bytes_sent  3709581404  Com_admin_commands  24456  Com_change_db   10267  Com_delete  167  Com_insert  1355  Com_kill    1  Com_select  1018481  Com_set_option  19563  Com_show_processlist    74  Com_show_status 30  Com_show_table_status   1  Com_show_tables 22  Com_show_variables  5  Com_update  2208  Connections 11157  Created_tmp_disk_tables 5131  Created_tmp_files   6  Created_tmp_tables  11044  Flush_commands  1  Handler_commit  1019009  Handler_delete  160  Handler_read_first  29551  Handler_read_key    3051320  Handler_read_last   3  Handler_read_next   5038745  Handler_read_prev   251210  Handler_read_rnd    685831  Handler_read_rnd_next   22756239  Handler_rollback    38  Handler_update  1166988  Handler_write   557183  Innodb_adaptive_hash_cells  8850419  Innodb_adaptive_hash_heap_buffers   1630  Innodb_adaptive_hash_hash_searches  4990439  Innodb_adaptive_hash_non_hash_searches  4315600  Innodb_background_log_sync  4145  Innodb_buffer_pool_pages_data   129440  Innodb_buffer_pool_pages_dirty  4  Innodb_buffer_pool_pages_flushed    9952  Innodb_buffer_pool_pages_LRU_flushed    237  Innodb_buffer_pool_pages_made_young 273289  Innodb_buffer_pool_pages_misc   1631  Innodb_buffer_pool_pages_old    47761  Innodb_buffer_pool_pages_total  131071  Innodb_buffer_pool_read_ahead   607  Innodb_buffer_pool_read_ahead_evicted   1325  Innodb_buffer_pool_read_requests    35806735  Innodb_buffer_pool_reads    373297  Innodb_buffer_pool_write_requests   30891  Innodb_checkpoint_age   365  Innodb_checkpoint_max_age   216721613  Innodb_checkpoint_target_age    209949063  Innodb_data_fsyncs  5575  Innodb_data_read    1834913792  Innodb_data_reads   401613  Innodb_data_writes  17424  Innodb_data_written 332080128  Innodb_dblwr_pages_written  9952  Innodb_dblwr_writes 431  Innodb_dict_tables  27606  Innodb_history_list_length  1979  Innodb_ibuf_free_list   9  Innodb_ibuf_merged_delete_marks 13  Innodb_ibuf_merged_deletes  3  Innodb_ibuf_merged_inserts  201  Innodb_ibuf_merges  144  Innodb_ibuf_segment_size    11  Innodb_ibuf_size    1  Innodb_log_write_requests   5819  Innodb_log_writes   6591  Innodb_lsn_current  77982531271  Innodb_lsn_flushed  77982531271  Innodb_lsn_last_checkpoint  77982530906  Innodb_master_thread_1_second_loops 4131  Innodb_master_thread_10_second_loops    411  Innodb_master_thread_background_loops   15  Innodb_master_thread_main_flush_loops   15  Innodb_master_thread_sleeps 4130  Innodb_max_trx_id   576268483  Innodb_mem_adaptive_hash    62128140  Innodb_mem_dictionary   109012014  Innodb_mem_total    2179465216  Innodb_mutex_os_waits   779  Innodb_mutex_spin_rounds    36022  Innodb_mutex_spin_waits 5369  Innodb_oldest_view_low_limit_trx_id 576268482  Innodb_os_log_fsyncs    3498  Innodb_os_log_written   5761024  Innodb_page_size    16384  Innodb_pages_created    94  Innodb_pages_read   374004  Innodb_pages_written    9952  Innodb_purge_trx_id 576267172  Innodb_rows_deleted 160  Innodb_rows_inserted    1323  Innodb_rows_read    28554644  Innodb_rows_updated 2078  Innodb_s_lock_os_waits  1278  Innodb_s_lock_spin_rounds   40952  Innodb_s_lock_spin_waits    2563  Innodb_x_lock_os_waits  132  Innodb_x_lock_spin_rounds   4893  Innodb_x_lock_spin_waits    176  Key_blocks_unused   57983  Key_blocks_used 10  Key_read_requests   6724  Key_reads   7  Key_write_requests  1441  Max_used_connections    25  Open_files  11  Open_table_definitions  8192  Open_tables 8192  Opened_files    138868  Opened_table_definitions    
117810  Opened_tables   126475  Queries 1062631  Questions   1062631  Select_full_join    1211  Select_range    17271  Select_scan 27709  Slow_queries    4  Sort_range  215835  Sort_rows   723648  Sort_scan   20726  Table_locks_immediate   1055875  Threads_cached  21  Threads_connected   4  Threads_created 25  Threads_running 1  Uptime  4148  Uptime_since_flush_status   4148  

And finally, ulimit -a:

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 64613
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64613
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Does anything pop out that is clearly mis-configured?
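One thing worth checking directly, since my.cnf asks for open_files_limit = 65535 while the shell ulimit above reports only 1024 open files: what limit the running mysqld actually received. A quick, hedged check from the SQL side:

-- if open_files_limit comes back far below 65535, mysqld is being capped
-- by the OS limit, which can surface as "too many open files" style
-- failures when the table caches are sized this large
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open%files';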

Correct use of VOLATILE COST (and ROWS) indications in Postgresql stored procedure

Posted: 22 Mar 2013 08:21 PM PDT

While looking at several examples of PL/Python and PL/pgSQL functions, I have seen many - but not all - using VOLATILE and COST.

ie:

CREATE OR REPLACE FUNCTION my_function()
RETURNS setof record AS
$BODY$
-- code
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

Searching for more information about VOLATILE and COST, I have discovered (at first sight) that roughly 90% of web examples use VOLATILE COST 100, and sometimes VOLATILE COST 1 (for ROWS it's 1000).

As I understand it, this indication helps the query planner decide how to set priorities, e.g. the evaluation order of short-circuit boolean operations.

Is it premature optimization if I start to give an estimated COST or ROWS for each of my stored procedures? Should I only do it when I want to optimize a certain query? Is choosing a good value for COST an art?

I know about the EXPLAIN command, which I have not learnt yet. Is this command helpful for estimating COST?
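A hedged sketch of how EXPLAIN can feed into this: time an actual call to the function and compare that against the planner's cost units (COST is expressed in multiples of cpu_operator_cost). The function name, column list, and values below are placeholders, not recommendations:

-- see what the planner assumes and how long the call really takes
EXPLAIN ANALYZE SELECT * FROM my_function() AS t(a int, b text);

-- if the function turns out to be expensive, raise its cost estimate
ALTER FUNCTION my_function() COST 500;

-- ROWS only matters for set-returning functions used in FROM
ALTER FUNCTION my_function() ROWS 50;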

PostgreSQL CREATE EXTENSION file not found error on Mac OS

Posted: 22 Mar 2013 08:02 PM PDT

On PostgreSQL 9.2.3 when trying this:

CREATE EXTENSION btree_gist;  

I get this error:

ERROR: could not open extension control file "/opt/local/share/postgresql92/extension/btree_gist.control": No such file or directory

Running SELECT pg_available_extensions(); shows that it is indeed not available.

Unable to run union in parallel on SQL Server

Posted: 22 Mar 2013 03:40 PM PDT

I am unable to get this fairly simple query to parallelize the union operation:

select va.ObjectId, 0 as IsFlag
  from Oav.ValueArray va
 where va.PropertyId = @pPropertyId
   and va.value in (select value from #MatchValues)
 group by va.ObjectId
having count(distinct va.Value) = (select count(*) from #MatchValues)

union all

select distinct codv.ObjectId, 1 as IsFlag
  from Pub.OtherTable codv
 where PropertyId = 2551
   and Id in (select value from #Ids)
   and Flag = @pFlag
   and Value in (select value from #MatchValues)
 group by codv.ObjectId
having count(distinct codv.Value) = (select count(*) from #MatchValues)

Running with MAXDOP 1 gives an expected .8s (.5 + .3). I was hoping that increasing MAXDOP to 2 would capture the biggest gain by using one processor for each side, but that is not the case. MAXDOP 0 on a lightly loaded 12-CPU machine (all at ~4%) only results in parallel execution about 10% of the time.

Is there a way to weight the hints so that parallelization at the union point is the most important? Does the syntax support separate MAXDOP values for each side?

I have tried (concat/hash/merge union) with little change.

#MatchValues is usually a small table (~10 rows).
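One workaround sketch, not a direct answer to the hint question: MAXDOP applies to the whole statement, so one option is to materialize each side separately and then combine the results, which at least lets each half carry its own OPTION (MAXDOP n) hint or run from separate sessions. Table and column names are taken from the query above; the #Results temp table is hypothetical:

CREATE TABLE #Results (ObjectId int, IsFlag bit);

INSERT INTO #Results (ObjectId, IsFlag)
SELECT va.ObjectId, 0
  FROM Oav.ValueArray va
 WHERE va.PropertyId = @pPropertyId
   AND va.value IN (SELECT value FROM #MatchValues)
 GROUP BY va.ObjectId
HAVING COUNT(DISTINCT va.Value) = (SELECT COUNT(*) FROM #MatchValues)
OPTION (MAXDOP 1);   -- each INSERT can carry its own MAXDOP hint

INSERT INTO #Results (ObjectId, IsFlag)
SELECT codv.ObjectId, 1
  FROM Pub.OtherTable codv
 WHERE PropertyId = 2551
   AND Id IN (SELECT value FROM #Ids)
   AND Flag = @pFlag
   AND Value IN (SELECT value FROM #MatchValues)
 GROUP BY codv.ObjectId
HAVING COUNT(DISTINCT codv.Value) = (SELECT COUNT(*) FROM #MatchValues)
OPTION (MAXDOP 1);

SELECT ObjectId, IsFlag FROM #Results;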

Generate script to automate renaming of default constraints

Posted: 22 Mar 2013 03:03 PM PDT

Background: Some of our default column constraints were generated without explicit names, so we get fun names that vary from server to server like: DF__User__TimeZoneIn__5C4D869D

I would prefer to have them all manageable with a consistent naming like DF_Users_TimeZoneInfo so that we can ensure that the appropriate constraints exist on future target tables (like in RedGate compare, or even just visually)

I've got a script that mostly works for what I want:

select 'sp_rename N''[' + s.name + '].[' + d.name + ']'',
    N''[DF_' + t.name + '_' + c.name + ']'', ''OBJECT'';'
from sys.tables t
    join sys.default_constraints d
        on d.parent_object_id = t.object_id
    join sys.columns c
        on c.object_id = t.object_id
        and c.column_id = d.parent_column_id
    join sys.schemas s
        on t.schema_id = s.schema_id
WHERE d.NAME like 'DF[_][_]%'

But this just gives me a resultset, and not something I can actually pass into an exec or whatever.

How can I make this so I can just execute those sp_rename scripts without having to resort to copying out all the returned elements and pasting them into a new query window and running them again? Trying to save as many keystrokes as possible so I can correct this in many environments.
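A hedged sketch of one way to avoid the copy/paste step: concatenate the generated commands into a single string and execute it. This reuses the same sys.* query as above; the variable name is arbitrary, and note that (unlike the original) the new name is passed without brackets so they don't become part of the constraint name:

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + 'EXEC sp_rename N''[' + s.name + '].[' + d.name + ']'', '
            + 'N''DF_' + t.name + '_' + c.name + ''', ''OBJECT'';' + CHAR(13)
FROM sys.tables t
JOIN sys.default_constraints d ON d.parent_object_id = t.object_id
JOIN sys.columns c ON c.object_id = t.object_id AND c.column_id = d.parent_column_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE d.name LIKE 'DF[_][_]%';

-- inspect the generated text first (PRINT @sql), then:
EXEC sys.sp_executesql @sql;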


Limit connection rights on SQL Server

Posted: 22 Mar 2013 01:55 PM PDT

I have an app to deploy in production that uses 'honor system' security. That is, all users connect to the DB using a SQL user/password credential and the app manages permissions itself. The latter part doesn't bother me as much as the fact that the connection object contains embedded credentials and can be copied around freely. I'm trying to find some way to limit connections to a more limited set of clients. I can create firewall rules to limit by IP, of course. Is there any way to 'prequalify' SQL logins either by machine account or domain membership?
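One possibility worth mentioning: a server-level logon trigger can reject connections for that SQL login based on the reported host. A rough sketch, assuming a login named app_user and a hypothetical allow-list of machine names; note that HOST_NAME() is client-supplied and spoofable, so this is a speed bump rather than real security:

CREATE TRIGGER trg_limit_app_user_hosts
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = N'app_user'
       AND HOST_NAME() NOT IN (N'APPSRV01', N'APPSRV02')  -- hypothetical allow-list
    BEGIN
        ROLLBACK;  -- refuses the connection
    END
END;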

php cannot talk to mysql from apache [closed]

Posted: 22 Mar 2013 12:46 PM PDT

I have loaded a new CentOS system on which I have installed PHP 5.4.13, Apache 2.2.24, and MySQL 5.6.10. I get an error from mysqli_connect() that it cannot connect, and a 500 error is returned. However, I can connect from the same program run from the command line without error.

SELECT every second row without auto_increment

Posted: 22 Mar 2013 02:17 PM PDT

How can I select every second row in MySQL, without relying on any data in the table or an auto_increment column?
Something like

SELECT EVERY SECOND ROW FROM `table`;  

Thank you!
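A hedged sketch of one common approach: number the rows with a user variable in a derived table and keep the even-numbered ones. The ordering column is a placeholder; without an ORDER BY, "every second row" is not well defined:

SELECT d.*
FROM (
    SELECT t.*, @rownum := @rownum + 1 AS rn
    FROM `table` t
    CROSS JOIN (SELECT @rownum := 0) init
    ORDER BY t.some_column        -- placeholder ordering column
) d
WHERE d.rn % 2 = 0;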

Modeling a ticket system based on a set of disparate kinds of tickets?

Posted: 22 Mar 2013 11:42 AM PDT

I'm working on a project that allows for the creation of "support tickets." These are not confined to something that needs fixing, however, so in a way they could more accurately be called "jobs."

For the frontend of the project, the tickets are to be created out of predefined "templates." I've listed three examples below:

  1. Removing discontinued product(s) from our online storefronts. This template would require information on which vendor the discontinued product(s) belong to, which product(s) is/are discontinued, and the reason for discontinuing them.
  2. Resolving product upload errors. Information would need to be provided on the upload's batch id, the number of errors needing fixing, and which vendor the errors came from.
  3. Fixing a workstation. Information on the workstation number, what the problem is, and the urgency are fields that would be required.

Of course, each kind of ticket would share common attributes like the name of the ticket (a short summary of the issue/assignment), the issuer's user_id, the deadline, whether or not it has been resolved, etc.

I am trying to model this system relationally, but I am not sure if it's appropriate. The biggest struggle I'm having is how I can model and relate the different "kinds" of tickets. This seems perfect for some kind of inheritance, but should this be stored in a relational database?

It has felt natural to try some sort of EAV model, but I've heard that this should be avoided like the plague (I don't know whether or not this is an accurate evaluation).

Here is a diagram of my current attempt:

Current Attempt

I've included the rest of the relations in the image for context.

The accompanying (generic) SQL code as generated by SQLEditor:

CREATE TABLE actions  (    action_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    name VARCHAR NOT NULL,    PRIMARY KEY (action_id)  );    CREATE TABLE departments  (    department_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    name VARCHAR NOT NULL UNIQUE,    PRIMARY KEY (department_id)  );    CREATE TABLE entities  (    entity_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    entity_name VARCHAR NOT NULL UNIQUE,    PRIMARY KEY (entity_id)  );    CREATE TABLE activity_types  (    activity_type_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    entity_id INTEGER NOT NULL,    name VARCHAR NOT NULL,    PRIMARY KEY (activity_type_id)  );    CREATE TABLE objectives  (    objective_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    name VARCHAR NOT NULL,    PRIMARY KEY (objective_id)  );    CREATE TABLE statuses  (    status_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    name VARCHAR,    PRIMARY KEY (status_id)  );    CREATE TABLE notifications  (    notification_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    target_id INTEGER NOT NULL,    active INTEGER NOT NULL,    action_id INTEGER,    closed INTEGER NOT NULL,    activity_id INTEGER NOT NULL,    PRIMARY KEY (notification_id)  );    CREATE TABLE ticket_keys  (    ticket_key_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    name VARCHAR NOT NULL UNIQUE,    PRIMARY KEY (ticket_key_id)  );    CREATE TABLE tasks  (    task_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    ticket_id INTEGER NOT NULL,    name VARCHAR NOT NULL,    resolved INTEGER NOT NULL,    PRIMARY KEY (task_id)  );    CREATE TABLE ticket_vals  (    task_val_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    ticket_key_id INTEGER NOT NULL UNIQUE,    ticket_id INTEGER NOT NULL,    val VARCHAR NOT NULL,    PRIMARY KEY (task_val_id)  );    CREATE TABLE users  (    user_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    username VARCHAR NOT NULL UNIQUE,    department_id INTEGER NOT NULL,    first_name VARCHAR NOT NULL,    last_name VARCHAR NOT NULL,    hash VARCHAR NOT NULL,    salt VARCHAR NOT NULL UNIQUE,    PRIMARY KEY (user_id)  );    CREATE TABLE comments  (    comment_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    ticket_id INTEGER NOT NULL,    commenter_user_id INTEGER NOT NULL,    comment VARCHAR NOT NULL,    PRIMARY KEY (comment_id)  );    CREATE TABLE targets  (    target_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    ticket_id INTEGER,    task_id INTEGER,    objective_id INTEGER,    user_id INTEGER,    department_id INTEGER,    PRIMARY KEY (target_id)  );    CREATE TABLE sessions  (    session_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    user_id INTEGER NOT NULL,    time_in VARCHAR NOT NULL UNIQUE,    time_out VARCHAR UNIQUE,    duration INTEGER,    PRIMARY KEY (session_id)  );    CREATE TABLE tickets  (    ticket_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    date_created DATE NOT NULL,    name VARCHAR NOT NULL UNIQUE,    issuer_user_id INTEGER NOT NULL,    deadline VARCHAR NOT NULL,    resolved INTEGER NOT NULL,    recurring INTEGER NOT NULL,    recur_interval VARCHAR,    objective_id INTEGER,    status_id INTEGER NOT NULL,    PRIMARY KEY (ticket_id)  );    CREATE TABLE activities  (    activity_id INTEGER NOT NULL AUTO_INCREMENT  UNIQUE,    user_id INTEGER,    activity_type_id INTEGER NOT NULL,    source_id INTEGER,    PRIMARY KEY (activity_id)  );    ALTER TABLE activity_types ADD FOREIGN KEY (entity_id) REFERENCES entities (entity_id);    ALTER TABLE notifications ADD FOREIGN KEY (target_id) REFERENCES targets (target_id);    ALTER TABLE notifications ADD FOREIGN KEY (action_id) REFERENCES 
actions (action_id);    ALTER TABLE notifications ADD FOREIGN KEY (activity_id) REFERENCES activities (activity_id);    ALTER TABLE ticket_keys ADD FOREIGN KEY (ticket_key_id) REFERENCES ticket_vals (ticket_key_id);    ALTER TABLE tasks ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);    ALTER TABLE ticket_vals ADD FOREIGN KEY (ticket_key_id) REFERENCES ticket_keys (ticket_key_id);    ALTER TABLE ticket_vals ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);    ALTER TABLE users ADD FOREIGN KEY (department_id) REFERENCES departments (department_id);    ALTER TABLE comments ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);    ALTER TABLE comments ADD FOREIGN KEY (commenter_user_id) REFERENCES users (user_id);    ALTER TABLE targets ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);    ALTER TABLE targets ADD FOREIGN KEY (task_id) REFERENCES tasks (task_id);    ALTER TABLE targets ADD FOREIGN KEY (objective_id) REFERENCES objectives (objective_id);    ALTER TABLE targets ADD FOREIGN KEY (user_id) REFERENCES users (user_id);    ALTER TABLE targets ADD FOREIGN KEY (department_id) REFERENCES departments (department_id);    ALTER TABLE sessions ADD FOREIGN KEY (user_id) REFERENCES users (user_id);    ALTER TABLE tickets ADD FOREIGN KEY (issuer_user_id) REFERENCES users (user_id);    ALTER TABLE tickets ADD FOREIGN KEY (objective_id) REFERENCES objectives (objective_id);    ALTER TABLE tickets ADD FOREIGN KEY (status_id) REFERENCES statuses (status_id);    ALTER TABLE activities ADD FOREIGN KEY (user_id) REFERENCES users (user_id);    ALTER TABLE activities ADD FOREIGN KEY (activity_type_id) REFERENCES activity_types (activity_type_id);  

So the idea was to store all of the possible attributes in ticket_keys and relate the relevant attributes to a ticket through ticket_vals. But this seems to be more of a workaround than a solution: there is no concept of a "template" with this model. Perhaps that is not even necessary?

One of the reasons I felt it would be necessary to include the concept of a template is for privileges: only certain users can see certain templates. For instance, only admin can open a ticket for uploading products from a new vendor. But that feels like it'd be a pain given my current setup. Maybe I could just add a privilege_required column to ticket_keys, but again it does not seem quite right.

I would appreciate any advice regarding my naive attempt and whether or not the relational model is a good fit for this project. Is a document store more appropriate?
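For comparison, a minimal sketch of the class-table-inheritance alternative hinted at above: keep the common tickets table, and add one narrow table per template so that each template's required fields become real, typed, NOT NULL columns. The table and column names below are purely illustrative:

CREATE TABLE ticket_discontinued_products
(
  ticket_id INTEGER NOT NULL,          -- 1:1 with tickets
  vendor_id INTEGER NOT NULL,
  product_code VARCHAR NOT NULL,
  reason VARCHAR NOT NULL,
  PRIMARY KEY (ticket_id),
  FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id)
);

CREATE TABLE ticket_upload_errors
(
  ticket_id INTEGER NOT NULL,
  batch_id INTEGER NOT NULL,
  error_count INTEGER NOT NULL,
  vendor_id INTEGER NOT NULL,
  PRIMARY KEY (ticket_id),
  FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id)
);

The "template" then corresponds to which child table a ticket row has, and per-template privileges can hang off a small templates table instead of ticket_keys.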

Can the OUTPUT clause create a table?

Posted: 22 Mar 2013 11:49 AM PDT

I'm doing an update like this:

UPDATE dbo.Table1
SET BirthDate = b.BirthDate
FROM Table1 a
JOIN Table2 b
    ON a.ID = b.ID

And I want to use the OUTPUT clause to back up my changes.

UPDATE dbo.Table1
SET BirthDate = b.BirthDate
OUTPUT
    inserted.Id, inserted.BirthDate AS New_BirthDate,
    deleted.BirthDate AS Old_BirthDate
    INTO OutputTable
FROM Table1 a
JOIN Table2 b
    ON a.ID = b.ID

What I want to know is if there is a way for the OUTPUT clause to create the table OutputTable or do I have to make sure it already exists before running the statement?
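If it turns out the target has to exist first, a hedged workaround is to create an empty table of the right shape with SELECT ... INTO before running the update; the column list below simply mirrors the OUTPUT clause above:

-- creates OutputTable with matching column types but no rows
SELECT ID AS Id,
       BirthDate AS New_BirthDate,
       BirthDate AS Old_BirthDate
INTO   OutputTable
FROM   dbo.Table1
WHERE  1 = 0;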

Why can't we write ddl statement directly into the PL/SQL block

Posted: 22 Mar 2013 01:51 PM PDT

Why can't we write DDL statements directly in a PL/SQL block? For example, when I write:

CREATE OR REPLACE PROCEDURE test IS
BEGIN
    truncate table table_name; -- error
END test;
/

But,

CREATE OR REPLACE PROCEDURE test IS
BEGIN
    execute immediate 'truncate table table_name'; -- works fine
END test;
/

Why does the second one execute successfully?

sql replication conflicts after migration from 2000 to 2008

Posted: 22 Mar 2013 10:32 AM PDT

I got a suggestion over at Stack Overflow to post here. Grateful for any and all help.

Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Box - New Win Server 2008 R2 running SQL 2008
  • Mobile app - small updates, converted to Visual Studio 2008 and Windows Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created via the web app in the DB. A status flag is set, and HOST_NAME() is utilized to make sure only records for a given inspector with this particular status show up on their mobile device. The user can synch multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections down, or other updates such as lookup table data... typical merge replication here, and it has been working great for years. Once the field user changes the status of the inspection, it will travel from the mobile device to the SQL DB and be removed from their iPaq. The inspection has additional workflow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. The settings on the publication are, as far as I can tell, the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the DB, every record has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user synchs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/DB data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operations: new record created in the DB >> mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all data lost.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read please help. I am stuck after 3 days.

Materialized view log not updating

Posted: 22 Mar 2013 12:26 PM PDT

Under what conditions would a materialized view log not update?

I must be missing something. I've got a 10gR2 database. I can create the following MV logs:

CREATE MATERIALIZED VIEW LOG ON linked_t1
 WITH SEQUENCE, ROWID, PRIMARY KEY INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON t1
 WITH SEQUENCE, ROWID, PRIMARY KEY INCLUDING NEW VALUES;

Then if I do an insert:

INSERT INTO linked_t1 (id, link_id, link_analysis_id)
VALUES ('11111111','22222222','0000000001');

I see my insert in the MV Log:

SQL> select * from MLOG$_LINKED_T1 t;

LINK_ID  M_ROW$$            SEQUENCE$$ SNAPTIME$$  DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$
-------- ------------------ ---------- ----------- --------- --------- ---------------
11111111 AAHvaaAAHAABvDsACR         11 1/1/4000    I         N         FE

Not really sure what to make of the 1/1/4000 snaptime.

As soon as I create a materialized view:

CREATE MATERIALIZED VIEW mv$sub2 (c1, c2, m1, m2, m3, m4, m5)
USING INDEX TABLESPACE idx
REFRESH FAST WITH ROWID ON COMMIT DISABLE QUERY REWRITE AS
SELECT la.rowid c1, ar.rowid c2, ar.analysis_id m1, ar.id m2, ar.valid m3,
       la.analysis_id m4, la.id m5
  FROM linked_t1 la, t1 ar
 WHERE ar.analysis_id = la.analysis_id;

The materialized view log on LINKED_T1 is emptied, and will no longer track any DML! Why is that? I have not made any changes to T1 for this test.

I should mention that this MV was suggested by DBMS_ADVISOR.TUNE_MVIEW.

SQL Server Sandbox

Posted: 22 Mar 2013 02:34 PM PDT

I'm attempting to set up a sandbox for our report developers to do their work in. My current plan is to "reset" the database every evening, but I'm not sure how to go about doing so. What I mean by reset is that I want to essentially drop any user tables, views, stored procedures, etc. from all but one database on the server. I suppose another option would be to drop and recreate the database as well, but I'm pretty sure that'd mean regranting access to all of the appropriate AD groups/people too.

I really don't know what would be the best way to go about doing this so I'm hoping some of you will be able to provide some good ideas/suggestions. Thanks.

For clarity, we essentially want to do this with our database: http://try.discourse.org/t/this-site-is-a-sandbox-it-is-reset-every-day/57. The only difference is that we don't want to recreate our users every day.

Version: SQL Server 2008
Edition: Developer & Enterprise
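A hedged sketch of the nightly "reset" step, assuming a clean baseline backup of the sandbox database is kept somewhere (the database name and path are placeholders). As long as that baseline was taken after access was granted, restoring over the existing database brings back the database users along with everything else, so no regranting should be needed:

-- nightly reset job step (names and path are placeholders)
ALTER DATABASE SandboxDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE SandboxDB
  FROM DISK = N'D:\Backups\SandboxDB_baseline.bak'
  WITH REPLACE;

ALTER DATABASE SandboxDB SET MULTI_USER;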

Backup Access Database Daily/Weekly

Posted: 22 Mar 2013 08:36 PM PDT

We have a Microsoft Access database split into backend/frontend. What I am looking for is to be able to automatically back up these files on a daily or weekly basis - what is the best way to go about this? We don't want to have to worry about backups for the database, just know that they are occurring automatically on a schedule.

Thanks.

PostgreSQL EXCLUDE USING error: Data type integer has no default operator class

Posted: 22 Mar 2013 07:48 PM PDT

In PostgreSQL 9.2.3 I am trying to create this simplified table:

CREATE TABLE test (
    user_id INTEGER,
    startend TSTZRANGE,
    EXCLUDE USING gist (user_id WITH =, startend WITH &&)
);

But I get this error:

ERROR:  data type integer has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define
       a default operator class for the data type.

The PostgreSQL docs use this example, which does not work for me (http://www.postgresql.org/docs/9.2/static/rangetypes.html):

CREATE TABLE room_reservation (
    room text,
    during tsrange,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);

ERROR:  data type integer has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define
       a default operator class for the data type.

And this one, which does not work for me either (http://www.postgresql.org/docs/9.2/static/btree-gist.html):

CREATE TABLE zoo (
    cage   INTEGER,
    animal TEXT,
    EXCLUDE USING gist (cage WITH =, animal WITH <>)
);

ERROR:  data type text has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define
       a default operator class for the data type.

I am able to create this table without any problem:

CREATE TABLE test (
    user_id INTEGER,
    startend TSTZRANGE,
    EXCLUDE USING gist (startend WITH &&)
);

and this:

CREATE TABLE test (
    user_id INTEGER,
    startend TSTZRANGE,
    EXCLUDE USING btree (user_id WITH =)
);

I've spent quite a bit of time searching for hints about figuring out how to make this work, or figuring out why it won't work. Any ideas?
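For what it's worth, the usual cause of this error is that the btree_gist extension, which supplies GiST operator classes for plain scalar types like integer and text, is not installed in the database. A minimal check and fix, assuming the contrib files are actually present on the server:

-- is it available and already installed?
SELECT name, installed_version
FROM pg_available_extensions
WHERE name = 'btree_gist';

-- install it, then retry the CREATE TABLE with the exclusion constraint
CREATE EXTENSION IF NOT EXISTS btree_gist;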


IntelliSense alternative using SSMS 2012 in a contained database as a contained user?

Posted: 22 Mar 2013 11:59 AM PDT

According to this answer about contained database disadvantages:

If you connect to a contained database as a contained user, SSMS will not fully support IntelliSense. You'll get basic underlining for syntax errors, but no auto-complete lists or tooltips and all the fun stuff. I filed a bug about this issue, and it remains open.

So is there any way to get IntelliSense-style functionality? Can anyone vouch for a product that will work in this scenario?

Does the SQL Server Backup Database Command also backup views from a Database?

Posted: 22 Mar 2013 11:28 AM PDT

I have a backup job (a SQL Server Agent job) which iterates through all of our SQL Server databases in SSMS and invokes the BACKUP DATABASE command to create a .bak file on a different server. This other server has its own job iterating through all of the .baks and restoring them (with the RESTORE DATABASE command).

EDIT: The server running the RESTORE job restores the .baks into its own installation of SQL Server

The jobs report success in the SQL Agent job history, so I'm wondering whether these jobs should also back up and restore the views in the databases (they don't seem to have done so), or if this is an indication that the scripts aren't quite working.
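A full database backup includes every object in the database, views included, so a hedged sanity check after the restore is simply to list the views on the restored copy (the database name is a placeholder):

USE RestoredDatabase;   -- placeholder name
GO

SELECT s.name AS schema_name, v.name AS view_name
FROM sys.views v
JOIN sys.schemas s ON v.schema_id = s.schema_id
ORDER BY s.name, v.name;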

postgres backup / restore: restored database much smaller?

Posted: 22 Mar 2013 09:02 PM PDT

I am worried that my restored database is very different from the original:

#check size of postgres database
postgres@db1:/tmp$ psql -c "select pg_size_pretty(pg_database_size('test_db'));"
 pg_size_pretty
----------------
 2105 MB
(1 row)

#backup database
postgres@db1:/tmp$ pg_dump -Fc test_db > test_db_Fc.dump

#rename postgres database (i.e. park it nearby)
postgres@db1:/tmp$ psql -c "alter database test_db rename to test_db_20130322;"
ALTER DATABASE

#restore test_db
postgres@db1:/tmp$ pg_restore -Fc -C -d postgres test_db_Fc.dump

#check size of restored postgres database
postgres@db1:/tmp$ psql -c "select pg_size_pretty(pg_database_size('test_db'));"
 pg_size_pretty
----------------
 257 MB
(1 row)

The original DB is many times larger than the restored one. What is going on here? As far as I can tell, the website that test_db serves is still working perfectly well after a restore, but I need to know what's up before I use a backup in a live context.

FYI, if I run vacuumdb on each database there seems to be no change in database size.

[Addendum, added later] In the tradition of RTFM I have gone hunting in the manual for PostgreSQL. Here are some more pieces of the puzzle:

#is the autovacuum switched on?
postgres@db1:/tmp$ psql -c "SHOW autovacuum;"
 autovacuum
------------
 on
(1 row)

#The "track counts" parameter is needed by autovacuum, which uses statistics from database activity to know where to do its job
postgres@db1:/tmp$ psql -c "SHOW track_counts;"
 track_counts
--------------
 on
(1 row)

#is there an autovacuum daemon resident in memory?
postgres@db1:/tmp$ ps -ef | grep 'autovacuum'
postgres  1261  1021  0 Jan23 ?        00:08:27 postgres: autovacuum launcher process
postgres 18347 18149  0 00:33 pts/0    00:00:00 grep autovacuum

#what's been happening on the live server?
postgres@LIVEdb1:/tmp$ psql -c "SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze from pg_stat_all_tables;"
#result is a list of 65 tables (out of about 300), all empty (no dates at all)

Gratefully following up on @craig-ringer's advice to look into VACUUM FULL I turned to PostgreSQL documentation, (paraphrased) "... In the worst case where your disk is nearly full, VACUUM FULL may be the only workable alternative. ... (but) the usual goal of routine vacuuming is to avoid needing VACUUM FULL. The autovacuum daemon attempts to do standard VACUUMs often enough to maintain steady-state usage of disk space ..."

I will follow @chris-travers's advice and map out the number of rows in the tables from each version of the database. I think in my case it'd be fair to say that VACUUM FULL will relieve the pressure on disk space, and it'll make original_vs_restored look better, but there's still this red flag of excessive bloat. I think autovacuum isn't doing anything and that's a worry! Thanks for the guidance so far, it's fascinating.
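For that row-count comparison, a hedged sketch to run against each database (the renamed original and the restored copy) is below; it also shows dead-tuple counts, which give a rough picture of how much bloat autovacuum has left behind:

-- run in both test_db_20130322 (the original) and test_db (the restored copy)
SELECT relname,
       n_live_tup,
       n_dead_tup,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 20;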

Primary Key efficiency

Posted: 22 Mar 2013 02:55 PM PDT

If I need a table to hold Point of Sale transactions, and am told I need to store:

Country ID
Store Number
POS Terminal Number
Transaction Date
Item Code
Teller ID
Another Field
More Fields

Now, in this case, the uniqueness would be:

Country ID, Store Number, POS Terminal Number, Transaction Date, Item Code

I am always unsure whether it's best to have an identity column as the primary key - in this case, maybe TransactionID INT NOT NULL PRIMARY KEY - and then a unique constraint across the unique fields.

Or, should a primary key be created across all the unique fields?

The benefit of the TransactionId, I guess, would be joins: to join back to a transaction, you just use the single field. I can't see the benefit of a PK across a number of fields (saving the space of an extra column?).
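A hedged sketch of the surrogate-key variant being described, with the natural key kept honest by a unique constraint; the types are guesses, the names follow the field list above, and SQL Server syntax is assumed:

CREATE TABLE dbo.PosTransaction
(
    TransactionId     INT IDENTITY(1,1) NOT NULL,
    CountryId         INT          NOT NULL,
    StoreNumber       INT          NOT NULL,
    POSTerminalNumber INT          NOT NULL,
    TransactionDate   DATETIME     NOT NULL,
    ItemCode          VARCHAR(20)  NOT NULL,
    TellerId          INT          NOT NULL,
    -- more fields ...
    CONSTRAINT PK_PosTransaction PRIMARY KEY (TransactionId),
    CONSTRAINT UQ_PosTransaction_NaturalKey
        UNIQUE (CountryId, StoreNumber, POSTerminalNumber, TransactionDate, ItemCode)
);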

SQL Server split mdf into multiple files

Posted: 22 Mar 2013 02:05 PM PDT

I have a database called example.mdf with a total size of 1 GB which suffers from performance issues. I checked the allocated hardware and it is higher than required, and I double-checked the design and everything looks normal. When I look at the .mdf files in their physical location (C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\example.mdf) I found that the database is split into multiple files: example_1.mdf, example_2.mdf, example_3.mdf, up to example_7.mdf.

I have another database file on the same SQL Server that has the same issue.

Why does this happen? Does this affect the performance? How can I prevent or stop SQL Server from splitting my .mdf files? Moreover I need to combine back the already split files.
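SQL Server does not split data files on its own, so a hedged first step is to list what files the database actually contains and how full they are; this is standard catalog-view SQL run inside the example database:

USE example;
GO

SELECT name,
       physical_name,
       type_desc,
       size / 128 AS size_mb,                          -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;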

Even when user has 'bulkadmin' role, query says user does not have role - SQL Server 2012

Posted: 22 Mar 2013 11:02 AM PDT

I am facing a weird issue with SQL Server 2012.

I have a user (say RS\sqluser) which has the bulkadmin role.

Now when I run this query:

Select IS_SRVROLEMEMBER('bulkadmin', 'RS\sqluser')

the output is 0. But when I execute this query:

sp_helpsrvrolemember 'bulkadmin'

I can see RS\sqluser present in the list.

Can someone please help me understand why the output differs, or is it a bug in SQL Server 2012? (It can't be a bug, as the same query with a different user works fine.)
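A hedged cross-check that doesn't depend on how IS_SRVROLEMEMBER resolves a Windows login is to read the role membership straight from the catalog views (roughly what sp_helpsrvrolemember reports):

SELECT r.name AS role_name,
       m.name AS member_name,
       m.type_desc
FROM sys.server_role_members AS rm
JOIN sys.server_principals  AS r ON rm.role_principal_id   = r.principal_id
JOIN sys.server_principals  AS m ON rm.member_principal_id = m.principal_id
WHERE r.name = N'bulkadmin';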

Table Size Analysis on SQL Server 2000

Posted: 22 Mar 2013 05:02 PM PDT

Our SQL Server 2000 database .mdf file is 27 GB, which seems much larger than is plausible. Using the following query we tried to analyse table sizes:

select cast(object_name(id) as varchar(50)) AS name,
    sum(CASE WHEN indid<2 THEN rows END) AS rows,
    sum(reserved)*8 AS reserved,
    sum(dpages)*8 AS data,
    sum(used-dpages)*8 AS index_size,
    sum(reserved-used)*8 AS unused
from sysindexes with (nolock)
    where indid in(0,1,255) and id>100
    GROUP BY id with rollup
    ORDER BY sum(reserved)*8 desc

The results were as follows:

Name           Rows       Reserved  Data     Index_Size  Unused
NULL           15274279   26645456  5674592  17361464    3609400
BigTable         875966   16789712   471096  13349816    2968800
  1. How can we find out which objects are causing this massive NULL space usage?
  2. It seems that approx 26GB are "reserved" for NULL, 16GB for BigTable - is this basically a waste of space or are real records involved?

MySQL General log not working on Mac OS X

Posted: 22 Mar 2013 12:47 PM PDT

I want to log all queries to the database running on my MacBook Pro (Mac OS X 10.8.1). I created a file /etc/my.cnf with the following content:

[mysqld]
general_log=1
general_log_file=/Users/wdb/mysql.log

I restarted MySQL and tried some queries on the mysql command line, but nothing gets logged. The file is not getting created. I also created the file myself afterwards to test with touch, but the file remains empty.

When I check, MySQL did read my settings:

mysql> show variables like '%general_log%';
+------------------+----------------------+
| Variable_name    | Value                |
+------------------+----------------------+
| general_log      | ON                   |
| general_log_file | /Users/wdb/mysql.log |
+------------------+----------------------+
2 rows in set (0.01 sec)

Is there anything else I need to do? I suppose any query I type on the command line should get logged, right?

I am using MySQL 5.5.24
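One hedged thing worth checking from the same session: where MySQL is sending the general log. If log_output is set to TABLE rather than FILE, queries go to the mysql.general_log table and the file stays empty:

SHOW VARIABLES LIKE 'log_output';

-- if it says TABLE, either read the log from there ...
SELECT event_time, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 20;

-- ... or switch logging to the file at runtime
SET GLOBAL log_output = 'FILE';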

Handling growing number of Tenants in Multi-tenant Database Architecture

Posted: 22 Mar 2013 07:34 PM PDT

Handling a modest number of customers (tenants) in a common server with separate databases for each tenant's instance of the application is relatively straightforward and is normally the correct way to do this. Currently I am looking at the architecture for an application where each tenant has their own database instance.

However, the problem is that this application will have a large number of tenants (5,000-10,000) with a substantial number of users, perhaps 2,000 for a single tenant. We will need to support growing the system by several tenants every week.

In addition, all tenants and their users will be presented with a common login process (i.e. each tenant cannot have their own URL). To do this, I need a centralised login process and a means to dynamically add databases to the system and register users.

  • How could the registration and database creation process be automated robustly?

  • Is the process of creating and registering tenants' databases on the system likely to cause performance or locking issues? If you think this could be an issue, can anyone suggest ways to mitigate it?

  • How can I manage central authentication in a way where user credentials will be associated with a particular tenant's database but the user can log in through a common page (i.e. all through the same login URL, but their home application will be on some specific tenant's database)? The tenants will have to be able to maintain their own logins and permissions, but a central login system must be aware of these. Can anyone suggest a way to do this? (A rough sketch of one possible layout follows this list.)

  • If I need to 'scale out' by adding multiple database servers, can anyone suggest what issues I might have to deal with in managing user identities across servers (impersonation etc.) and some way to mitigate those issues?
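As referenced in the central-authentication question above, a rough sketch, purely as an assumption of one way to back the common login page: a small shared directory database that maps each user to a tenant and each tenant to the server/database holding its data. All names here are hypothetical:

CREATE TABLE tenants
(
    tenant_id     INTEGER      NOT NULL PRIMARY KEY,
    tenant_name   VARCHAR(200) NOT NULL UNIQUE,
    server_name   VARCHAR(200) NOT NULL,   -- which DB server hosts this tenant
    database_name VARCHAR(200) NOT NULL
);

CREATE TABLE directory_users
(
    user_id       INTEGER      NOT NULL PRIMARY KEY,
    tenant_id     INTEGER      NOT NULL REFERENCES tenants (tenant_id),
    login_name    VARCHAR(200) NOT NULL UNIQUE,   -- what the common login page checks
    password_hash VARCHAR(200) NOT NULL
);

-- login flow: authenticate against directory_users, look up the tenant,
-- then open a connection to tenants.server_name / tenants.database_name,
-- where the tenant-specific roles and permissions live.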

MySQL Replication Troubleshooting

Posted: 22 Mar 2013 03:23 PM PDT

Here's what I did. On the master, change /etc/my.cnf:

[mysqld]
server-id=1
log-bin=mysql-bin
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Save and restart mysql, then log in.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY '123';
mysql> FLUSH TABLES WITH READ LOCK;
-- Start copying the master's data (MyISAM) and transfer it to the slave.
mysql> SHOW MASTER STATUS;

On the slave, change /etc/my.cnf.

[mysqld]
server-id=2
log-bin=mysql-bin
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Save and restart MySQL. Create a database on the slave (a snapshot of the master). Move the data from the master to the slave.

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
         MASTER_HOST='server2',
         MASTER_USER='repl',
         MASTER_PASSWORD='password',
         MASTER_LOG_FILE='xxxx',
         MASTER_LOG_POS=xxx;

On the master:

mysql> UNLOCK TABLES;

On the slave:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G

My problem is: I forgot the output of the command below.

mysql> SHOW MASTER STATUS;

When I start the slave, the error is:

236 | Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file' |

What does the error mean, and how can I solve it?
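Error 1236 with that message means the coordinates given in CHANGE MASTER TO point at a binlog file the master no longer has (or never had) in its index. A hedged recovery sketch, assuming the slave's data is re-synced from a fresh copy taken while the master is read-locked; the File/Position values below are placeholders that must come from the master's own SHOW MASTER STATUS output:

-- on the master, while the tables are still locked and the data copy is taken
mysql> SHOW MASTER STATUS;
-- note the File (e.g. mysql-bin.000007) and Position (e.g. 1042) columns

-- on the slave, point replication at exactly those coordinates
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
         MASTER_HOST='server2',
         MASTER_USER='repl',
         MASTER_PASSWORD='password',
         MASTER_LOG_FILE='mysql-bin.000007',   -- placeholder from SHOW MASTER STATUS
         MASTER_LOG_POS=1042;                  -- placeholder from SHOW MASTER STATUS
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G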

SQL Server 2008 - Cross database performance on same physical machine and server instance

Posted: 22 Mar 2013 12:38 PM PDT

Is there any performance hit when doing a select across another DB on the same physical machine? So I have 2 databases on the same physical machine running within the same SQL 2008 instance.

For instance, in SomStoreProc on this_db I run: SELECT someFields FROM the_other_db.dbo.someTable

So far from what I have read on the internet, most people seem to indicate NO.
