Sunday, April 21, 2013


How to create a database for unknown kinds of data?

Posted: 21 Apr 2013 04:54 PM PDT

I am in the process of designing a database for a new PHP/MySql-based application.

My problem is that I cannot fully specify in advance what should be saved in the database, because the set of attributes is unlimited and keeps changing.

Here is an example of the problem: the application will be a shopping website that carries many kinds of products. All of them share some attributes, such as title and price, but some kinds have specific details: some have an expiry date, some have an ISBN, and some have neither.

This is just an example but I really have many kinds with many different attributes.

I could create a table for each kind, but I do not know all the available kinds; many kinds of items are unknown at this time.

Is there a way to accommodate this problem without overhead on the user's side?
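One common answer to this kind of question is an entity–attribute–value (EAV) layout: shared columns stay in the main table, and the open-ended attributes become rows. This is only a sketch under assumed table and column names (none come from the question), and EAV trades away typing and easy indexing for flexibility:

```sql
-- Hedged sketch (MySQL): shared attributes stay as real columns,
-- product-specific attributes become name/value rows.
CREATE TABLE product (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,     -- attributes every kind shares
    price DECIMAL(10,2) NOT NULL
);

CREATE TABLE product_attribute (
    product_id INT NOT NULL,
    name       VARCHAR(64) NOT NULL, -- e.g. 'expiry_date', 'isbn'
    value      VARCHAR(255),         -- everything stored as text
    PRIMARY KEY (product_id, name),
    FOREIGN KEY (product_id) REFERENCES product(id)
);
```

New kinds of products then need no schema change, only new attribute names; the cost is that values lose their native types and queries over attributes become joins against `product_attribute`.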

Misunderstanding statement in MySQL documentation

Posted: 21 Apr 2013 02:39 PM PDT

I can't understand this statement in Optimizing Data Size:

Declare columns with identical information in different tables with identical data types, to speed up joins based on the corresponding columns.

Can anyone explain this statement with an example?
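A small illustration of what the manual means (table and column names here are invented for the example): if the join columns have different types, MySQL may have to convert values row by row, which can prevent it from using an index on the join.

```sql
-- Hedged sketch (MySQL): declare join columns with identical types.
CREATE TABLE customer (
    id   INT UNSIGNED NOT NULL PRIMARY KEY,
    name VARCHAR(50)
);

CREATE TABLE orders (
    id          INT UNSIGNED NOT NULL PRIMARY KEY,
    customer_id INT UNSIGNED NOT NULL,  -- same type as customer.id
    KEY (customer_id)
);

-- The join compares values of the same type, so the index on
-- orders.customer_id can be used directly:
SELECT c.name, o.id
FROM customer AS c
JOIN orders   AS o ON o.customer_id = c.id;
```

Had `orders.customer_id` been declared `BIGINT`, or had two `VARCHAR` join columns used different character sets or collations, the server would need per-row conversions that slow the join down.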

What is meant by Clustering index and Multilevel indices? [closed]

Posted: 21 Apr 2013 11:41 AM PDT

What is meant by Clustering Index and Multilevel Indices ?

I could not find much when I googled it.

How to fix filestream SQL database

Posted: 21 Apr 2013 04:58 PM PDT

I have a SQL Server 2008 R2 database with FILESTREAM enabled. When I try to access any value stored in the FILESTREAM column, I get the following error:

Msg 233, Level 20, State 0, Line 0  A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)  

The following error is also thrown in the SQL ERRORLOG file:

SqlDumpExceptionHandler: Process 51 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.  

I read about that exception; Microsoft has a KB article for it (KB972936), but installing its hotfix did not help either.

If I SELECT the count of the records in that table using :

SELECT COUNT(1) FROM [Table_Name]  

I get a correct result.

Here are some details about the database files and filegroups:

The database has two files, the "Rows Data" file and the "Log" file; it should also contain a "Filestream Data" item, but that is missing.

The database has the following filegroups :

  • Rows : PRIMARY with 1 File
  • Filestream : [MyFileName] with 0 Files !

Here is a snapshot of the DB properties page.

And here is the full SQL ERRORLOG file.

This is my first question here; any help would be appreciated.
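A quick way to confirm what the filegroup listing above suggests, namely a FILESTREAM filegroup with no container attached, is to query the catalog views. This is a hedged sketch; the database name, logical file name, path, and the filegroup name `[MyFileName]` are placeholders taken from or standing in for the question:

```sql
-- List every file and the filegroup it belongs to; a FILESTREAM
-- filegroup with no matching row has no container.
SELECT f.name AS file_name,
       f.type_desc,
       fg.name AS filegroup_name
FROM sys.database_files AS f
LEFT JOIN sys.filegroups AS fg
       ON f.data_space_id = fg.data_space_id;

-- If the FILESTREAM filegroup really is empty, a container (a directory,
-- not a file) can be added to it:
ALTER DATABASE [MyDatabase]
ADD FILE (NAME = N'MyFileStreamData', FILENAME = N'D:\FSData')
TO FILEGROUP [MyFileName];
```

An access-violation dump on FILESTREAM reads would be consistent with the engine referencing a container that is missing on disk, so checking this before anything else costs little.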

Creating an admin user in Oracle

Posted: 21 Apr 2013 01:27 PM PDT

How can I create a "root" (like on MySQL) in Oracle?

I installed an Oracle database on my system for school (faculty) homework, but it did not give me the option to make a root user. At least I don't think so.

During preparation for the install it asked me to create a password, but when I tried connecting with that ID and password it failed (which is why I presume it did not create a root user).

I connected to the database with connect / as sysdba, but I don't know how to create a user with admin privileges.

Tried

create user "user" identified by "password" with grant option;  

(error)

create user "user" identified by "password";  

(success)

grant all on "user" with grant option;  

(error)

I want to have all privileges on all tables, etc.
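For what it's worth, the failing statements fail because `CREATE USER` takes no `WITH GRANT OPTION` clause in Oracle; privileges are granted separately, after the user exists. A hedged sketch, run while connected as SYSDBA (the user name `app_admin` is illustrative):

```sql
-- Create the user first; no grant clauses belong here.
CREATE USER app_admin IDENTIFIED BY "password";

-- Let the user connect at all:
GRANT CREATE SESSION TO app_admin;

-- The DBA role bundles most administrative privileges, which is the
-- closest thing to MySQL's root:
GRANT DBA TO app_admin;

-- Alternatively, system privileges can be granted wholesale,
-- re-grantable to others via ADMIN OPTION:
GRANT ALL PRIVILEGES TO app_admin WITH ADMIN OPTION;
```

Either of the last two grants should cover "all privileges on all tables, etc."; `GRANT DBA` is the more conventional choice for a homework setup.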

How to set a column to fetch the date now?

Posted: 21 Apr 2013 09:37 AM PDT

Hi, I have a sample table named DATENOW and used the following in FlameRobin to create a table with a column that automatically gets today's date when an entry is added:

CREATE TABLE "DATENOW" (
    ID Char(8) NOT NULL,
    "Start_Date" Char(10),
    "Client_Name" Varchar(30),
    BALANCE Integer,
    select cast('Now' as date) from rdb$database
    CONSTRAINT PK_datenow PRIMARY KEY (ID)
);

GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE
ON "datenow" TO SYSDBA WITH GRANT OPTION;

I also tried these SQL statements:

select date 'Now' from rdb$database
select time 'now' from rdb$database
select timestamp 'NOW' from rdb$database

but I cannot generate the table, and I can't determine where to put the SQL statement, or what else to do, in order to have a column that gets today's date automatically.
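The likely issue is that a SELECT cannot appear inside a CREATE TABLE; in Firebird the automatic date belongs in a column DEFAULT instead. A hedged sketch using the question's own table, with `"Start_Date"` changed to a DATE column so the default applies:

```sql
-- Hedged sketch (Firebird dialect 3): DEFAULT fills the date in
-- automatically on INSERT when the column is omitted.
CREATE TABLE "DATENOW" (
    ID            CHAR(8) NOT NULL,
    "Start_Date"  DATE DEFAULT CURRENT_DATE,  -- or DEFAULT 'NOW'
    "Client_Name" VARCHAR(30),
    BALANCE       INTEGER,
    CONSTRAINT PK_DATENOW PRIMARY KEY (ID)
);

-- An insert that omits "Start_Date" gets today's date:
INSERT INTO "DATENOW" (ID, "Client_Name", BALANCE)
VALUES ('00000001', 'Test Client', 0);
```

The standalone `select ... from rdb$database` statements in the question are fine for checking what the server thinks "now" is, but they never belong inside the table definition itself.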

What is a partitioned view?

Posted: 21 Apr 2013 06:58 AM PDT

What is a partitioned view? I googled but could not get a satisfactory result.
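In short, a partitioned view is a view that UNIONs several tables, each holding a disjoint slice of the same logical data; CHECK constraints on the slicing column let the optimizer skip tables that cannot contain matching rows. A hedged sketch in SQL Server syntax (table and column names are invented; updatable partitioned views have stricter requirements than shown here):

```sql
-- Each member table holds one year's data, enforced by a CHECK constraint.
CREATE TABLE sales_2012 (
    sale_date DATE NOT NULL
        CHECK (sale_date >= '20120101' AND sale_date < '20130101'),
    amount    DECIMAL(10,2) NOT NULL
);
CREATE TABLE sales_2013 (
    sale_date DATE NOT NULL
        CHECK (sale_date >= '20130101' AND sale_date < '20140101'),
    amount    DECIMAL(10,2) NOT NULL
);
GO

-- The view presents the union as one table.
CREATE VIEW sales AS
SELECT sale_date, amount FROM sales_2012
UNION ALL
SELECT sale_date, amount FROM sales_2013;
GO

-- A query restricted to 2013 should only touch sales_2013, because the
-- CHECK constraint proves sales_2012 can hold no qualifying rows.
SELECT SUM(amount) FROM sales WHERE sale_date >= '20130101';
```

The same idea exists in other engines under various names; native table partitioning has largely replaced it, but the mechanism is still worth understanding.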

How should I store multiple relationships (many posts to many categories)?

Posted: 21 Apr 2013 08:23 AM PDT

I have posts stored in a posts table and I want each post to have multiple categories. What is the most efficient way to store the categories and their relationships?

I thought of it this way first:

Categories Table:

+------------+---------------+------+-----+---------+----------------+
| Field      | Type          | Null | Key | Default | Extra          |
+------------+---------------+------+-----+---------+----------------+
| id         | int(11)       | NO   | PRI | NULL    | auto_increment |
| name       | varchar(32)   | NO   |     | NULL    |                |
| parent     | int(11)       | NO   |     | NULL    |                |
+------------+---------------+------+-----+---------+----------------+

Relationships Table:

+------------+---------------+------+-----+---------+----------------+
| Field      | Type          | Null | Key | Default | Extra          |
+------------+---------------+------+-----+---------+----------------+
| post_id    | int(11)       | NO   |     | NULL    |                |
| cat_id     | int(11)       | NO   |     | NULL    |                |
| order      | int(11)       | NO   |     | NULL    |                |
+------------+---------------+------+-----+---------+----------------+

With this structure I will have many rows in the relationships table; for example, if a post has 4 categories, there will be 4 rows just for that post. I won't have that many categories per post, but I want to use the same structure for tags too.

Am I doing it right? Is there a better structure to use?
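The design above is the standard many-to-many junction table; the main things it is missing are keys. A hedged sketch of the same table with them made explicit (it assumes the parent tables are named `posts` and `categories` with `id` primary keys, and renames `order`, which is a reserved word):

```sql
-- Hedged sketch (MySQL/InnoDB): junction table with explicit keys.
CREATE TABLE post_category (
    post_id    INT NOT NULL,
    cat_id     INT NOT NULL,
    sort_order INT NOT NULL DEFAULT 0,  -- "order" renamed: reserved word
    PRIMARY KEY (post_id, cat_id),      -- no duplicate links; fast by post
    KEY idx_cat (cat_id),               -- fast "all posts in a category"
    FOREIGN KEY (post_id) REFERENCES posts(id),
    FOREIGN KEY (cat_id)  REFERENCES categories(id)
) ENGINE=InnoDB;
```

Many narrow rows here are normal and cheap; the composite primary key plus the secondary index cover lookups in both directions, and an identical table works for tags.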

Postgres: Given some parent id's how can I SELECT X child records for each?

Posted: 21 Apr 2013 05:07 AM PDT

In Postgres, if I have :

Parent Table
id, name
1, Parent1
2, Parent2
.....
5, Parent5

Child Table
id, parent_id, name
1, 1, Child1
2, 1, Child2
.....
6, 2, Child6
.....
25, 5, Child25

so 5 children for each parent. How do I get, say, the first 3 children for parent ids 1, 3, and 4 in one query?
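One way to sketch this in PostgreSQL is a window function: number the children within each parent, then keep the low numbers. Table names `parent`/`child` follow the question; "first" is assumed to mean lowest `id`:

```sql
-- Rank children within each parent, then keep the first three per parent.
SELECT id, parent_id, name
FROM (
    SELECT c.*,
           ROW_NUMBER() OVER (PARTITION BY parent_id ORDER BY id) AS rn
    FROM child AS c
    WHERE parent_id IN (1, 3, 4)
) AS ranked
WHERE rn <= 3;
```

On recent PostgreSQL versions a `LATERAL` join with `LIMIT 3` per parent is an alternative with the same result.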

How can I audit all logins to SQL Server made over an ODBC connection?

Posted: 21 Apr 2013 07:06 AM PDT

I built reports with an Access form on a SQL Server database.

I use an ODBC connection to log in and update the data in the database.

Now I want to audit all logins, so I can tell who made updates today, or who is making updates right now.

Can anyone help me?

Is there a way to stop MySQL Replication on the master?

Posted: 21 Apr 2013 03:01 PM PDT

I want the master to stop replicating data to the slave. I know I can do that on the slave with STOP SLAVE;, but I wonder if there is a way to do it on the master.

One possible solution might be to change the server_id to 0, but in that case I would have to restart mysqld on the master for the change to take effect.

What I'm looking for is a statement like STOP MASTER;.

Is there a way to recover this database (MySQL)

Posted: 21 Apr 2013 09:51 AM PDT

I did a "cold" backup (db offline, copying all filesystem files) of a MySQL db and wrote it to an ISO image. Unfortunately, Joliet filenames are not the default; I didn't realize that until I attempted to restore the backup and noticed that the entire set of files is in DOS 8.3 format. Many of MySQL's filesystem filenames do not fit 8.3.

Anyone have tips or information on whether this is a lost cause?

If it's not a lost cause, are there any tips or directions that can be offered?

Much appreciated.

SSIS Data Flow Task Slows Down

Posted: 21 Apr 2013 08:33 AM PDT

I am trying to transfer data from one table to another in the same database using SSIS. It seems to go fine (I can see the record count going up) until it gets to about 30 million records, then it starts pausing. It will resume again but there are really long pauses in between. I've made sure to set my initial database size large enough so that it won't have to grow during the process. I am using OLE DB as a source and destination with fast load. Any ideas why it runs great and then slows right down?

How do I stop a query from running if the row is locked?

Posted: 21 Apr 2013 02:07 PM PDT

I have a section of code that locks the database, updates it and then confirms.

This is all working fine, if another user attempts to update the same row they cannot and their changes are discarded.

My problem is that a variable is updated each time the query runs, and it increases whether or not the query succeeds. So what I need is for the query NOT to run in the first place if the row is locked. How do I do this?

James
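The question doesn't say which database is in use, so here is a hedged sketch of the "fail fast instead of waiting" pattern in two common dialects; the `accounts` table and `id = 42` are placeholders. If the probe errors out, the application can skip both the update and the counter increment:

```sql
-- Oracle / PostgreSQL: raise an error immediately if the row is locked,
-- instead of blocking until the lock is released.
SELECT * FROM accounts WHERE id = 42 FOR UPDATE NOWAIT;

-- SQL Server equivalent: wait zero milliseconds for locks, so a locked
-- row raises error 1222 ("Lock request time out period exceeded").
SET LOCK_TIMEOUT 0;
SELECT * FROM accounts WITH (UPDLOCK) WHERE id = 42;
```

Either way the lock check happens before the real work, so the variable only changes when the row was actually obtainable.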

Is there a generic SQL-92 ODBC 64-bit Mac driver for Python?

Posted: 21 Apr 2013 02:13 AM PDT

I have a 4D SQL database which implements SQL-92.

4D's ODBC driver does not work on my system because it is compiled for 32 bit (and I am on 64 bit) and it is closed source.

I wonder if there is some generic 'SQL 92' driver (or ODBC driver) that I could use to make this connection?

SQL Server best practice for tempdb log file

Posted: 21 Apr 2013 09:27 PM PDT

I have read many blogs here and there about how to configure tempdb data files, but I didn't find any information about the tempdb log file.

Here's the strategy I'm presently using for my tempdb:

  • I have used the recommendations of Paul Randal on how to split my tempdb data files
  • I have set the size of my tempdb data files to their maximum and disabled autogrowth. For example, I have 100 GB of free disk space and set the size of my 8 tempdb data files to 10 GB each. This prevents fragmentation on the disk, as recommended by Brent Ozar, and leaves 20 GB free for the log file.

But as I said, nobody talks about the tempdb log file. What should I do with it? In my setup, this file is in the same place as the tempdb data files. What size and autogrowth value should I use for the tempdb log file?
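The mechanics of applying whatever sizing is chosen look like this; the common advice pattern is to pre-size the log so autogrowth is rare, and to use a fixed growth increment rather than a percentage. The sizes below are placeholders, not recommendations for this particular server; `templog` is the default logical name of the tempdb log file:

```sql
-- Hedged sketch (T-SQL): pre-size the tempdb log and set fixed growth.
-- Takes effect at the next instance restart for the SIZE to be rebuilt.
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog,
             SIZE = 8GB,
             FILEGROWTH = 512MB);  -- fixed increment, never a percentage
```

Unlike the data files, tempdb has a single log file, so the "one file per core" splitting advice does not apply to it.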

Exporting and importing a database

Posted: 21 Apr 2013 04:25 AM PDT

I have setup an Oracle 11g Database and have a question about exporting and importing a database.

I am using Enterprise Manager and have browsed to Data Movement -> Export to Export Files. At the bottom of the screen I need to supply Host Credentials. When I try the username SYS, the error message says to log out and log in using a different role.

What role should I use? Should I create a specific role for this task, and if so, what privileges should the role have?

Linux 32b MySQL periodically crashing with signal 11 and OOM errors

Posted: 21 Apr 2013 09:03 PM PDT

I have a MySQL DB with mostly InnoDB tables that I'm fairly certain isn't tuned properly, based on what's happening, but I don't have enough know-how to pinpoint what to change. The server is an 8 GB, 32-bit Linux system with the following in my.cnf:

[mysql]
port                           = 3306
socket                         = /var/run/mysqld/mysqld.sock

[mysqld]
user                           = mysql
default_storage_engine         = InnoDB
socket                         = /var/run/mysqld/mysqld.sock
pid_file                       = /var/run/mysqld/mysqld.pid
key_buffer_size                = 64M
myisam_recover                 = FORCE,BACKUP
max_allowed_packet             = 16M
max_connect_errors             = 1000000
datadir                        = /var/lib/mysql/
tmpdir                         = /tmp
tmp_table_size                 = 64M
max_heap_table_size            = 64M
query_cache_type               = 0
query_cache_size               = 0
max_connections                = 200
thread_cache_size              = 50
open_files_limit               = 65535
table_definition_cache         = 8192
table_open_cache               = 8192
innodb_flush_method            = O_DIRECT
innodb_log_files_in_group      = 2
innodb_log_file_size           = 128M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 2G
log_error                      = /var/log/mysql/mysql-error.log
log_queries_not_using_indexes  = 0
slow_query_log_file            = /var/log/mysql/mysql-slow.log
slow_query_log                 = 1
long_query_time                = 2
general_log                    = 0
general_log_file               = /var/log/mysql/general.log

[isamchk]
key_buffer                     = 16M

[mysqldump]
quick
quote-names
max_allowed_packet             = 16M

Currently, here are the non-zero global status stats:

Aborted_clients                         28
Aborted_connects                        667
Bytes_received                          283596894
Bytes_sent                              3709581404
Com_admin_commands                      24456
Com_change_db                           10267
Com_delete                              167
Com_insert                              1355
Com_kill                                1
Com_select                              1018481
Com_set_option                          19563
Com_show_processlist                    74
Com_show_status                         30
Com_show_table_status                   1
Com_show_tables                         22
Com_show_variables                      5
Com_update                              2208
Connections                             11157
Created_tmp_disk_tables                 5131
Created_tmp_files                       6
Created_tmp_tables                      11044
Flush_commands                          1
Handler_commit                          1019009
Handler_delete                          160
Handler_read_first                      29551
Handler_read_key                        3051320
Handler_read_last                       3
Handler_read_next                       5038745
Handler_read_prev                       251210
Handler_read_rnd                        685831
Handler_read_rnd_next                   22756239
Handler_rollback                        38
Handler_update                          1166988
Handler_write                           557183
Innodb_adaptive_hash_cells              8850419
Innodb_adaptive_hash_heap_buffers       1630
Innodb_adaptive_hash_hash_searches      4990439
Innodb_adaptive_hash_non_hash_searches  4315600
Innodb_background_log_sync              4145
Innodb_buffer_pool_pages_data           129440
Innodb_buffer_pool_pages_dirty          4
Innodb_buffer_pool_pages_flushed        9952
Innodb_buffer_pool_pages_LRU_flushed    237
Innodb_buffer_pool_pages_made_young     273289
Innodb_buffer_pool_pages_misc           1631
Innodb_buffer_pool_pages_old            47761
Innodb_buffer_pool_pages_total          131071
Innodb_buffer_pool_read_ahead           607
Innodb_buffer_pool_read_ahead_evicted   1325
Innodb_buffer_pool_read_requests        35806735
Innodb_buffer_pool_reads                373297
Innodb_buffer_pool_write_requests       30891
Innodb_checkpoint_age                   365
Innodb_checkpoint_max_age               216721613
Innodb_checkpoint_target_age            209949063
Innodb_data_fsyncs                      5575
Innodb_data_read                        1834913792
Innodb_data_reads                       401613
Innodb_data_writes                      17424
Innodb_data_written                     332080128
Innodb_dblwr_pages_written              9952
Innodb_dblwr_writes                     431
Innodb_dict_tables                      27606
Innodb_history_list_length              1979
Innodb_ibuf_free_list                   9
Innodb_ibuf_merged_delete_marks         13
Innodb_ibuf_merged_deletes              3
Innodb_ibuf_merged_inserts              201
Innodb_ibuf_merges                      144
Innodb_ibuf_segment_size                11
Innodb_ibuf_size                        1
Innodb_log_write_requests               5819
Innodb_log_writes                       6591
Innodb_lsn_current                      77982531271
Innodb_lsn_flushed                      77982531271
Innodb_lsn_last_checkpoint              77982530906
Innodb_master_thread_1_second_loops     4131
Innodb_master_thread_10_second_loops    411
Innodb_master_thread_background_loops   15
Innodb_master_thread_main_flush_loops   15
Innodb_master_thread_sleeps             4130
Innodb_max_trx_id                       576268483
Innodb_mem_adaptive_hash                62128140
Innodb_mem_dictionary                   109012014
Innodb_mem_total                        2179465216
Innodb_mutex_os_waits                   779
Innodb_mutex_spin_rounds                36022
Innodb_mutex_spin_waits                 5369
Innodb_oldest_view_low_limit_trx_id     576268482
Innodb_os_log_fsyncs                    3498
Innodb_os_log_written                   5761024
Innodb_page_size                        16384
Innodb_pages_created                    94
Innodb_pages_read                       374004
Innodb_pages_written                    9952
Innodb_purge_trx_id                     576267172
Innodb_rows_deleted                     160
Innodb_rows_inserted                    1323
Innodb_rows_read                        28554644
Innodb_rows_updated                     2078
Innodb_s_lock_os_waits                  1278
Innodb_s_lock_spin_rounds               40952
Innodb_s_lock_spin_waits                2563
Innodb_x_lock_os_waits                  132
Innodb_x_lock_spin_rounds               4893
Innodb_x_lock_spin_waits                176
Key_blocks_unused                       57983
Key_blocks_used                         10
Key_read_requests                       6724
Key_reads                               7
Key_write_requests                      1441
Max_used_connections                    25
Open_files                              11
Open_table_definitions                  8192
Open_tables                             8192
Opened_files                            138868
Opened_table_definitions                117810
Opened_tables                           126475
Queries                                 1062631
Questions                               1062631
Select_full_join                        1211
Select_range                            17271
Select_scan                             27709
Slow_queries                            4
Sort_range                              215835
Sort_rows                               723648
Sort_scan                               20726
Table_locks_immediate                   1055875
Threads_cached                          21
Threads_connected                       4
Threads_created                         25
Threads_running                         1
Uptime                                  4148
Uptime_since_flush_status               4148

And finally, ulimit -a:

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 64613
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64613
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Does anything pop out that is clearly mis-configured?

Why does the same query with different constants execute faster the second time?

Posted: 21 Apr 2013 02:03 AM PDT

Can someone explain, or point me to, how execution against indexes happens with different constants at intervals in MySQL? I notice that only the first execution against the table takes a long time; after that, with different constants, the query executes very quickly. I would like the query to take the same amount of time every time it executes with different constants. Is there a parameter I can set on or off?

Query executed time : 9 mins.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
    -> FROM billingreport AS br
    -> WHERE br.`addeddate` BETWEEN '2013-02-01 00:00:00' AND '2013-02-28 23:59:59'
    -> AND br.`status` = 'success'
    -> AND br.`idvendor` = 10
    -> GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 2887152
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.00 sec)

Query executed time : 18 Secs.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`)
    -> FROM billingreport AS br
    -> WHERE br.`addeddate` BETWEEN '2013-01-01 00:00:00' AND '2013-01-31 23:59:59'
    -> AND br.`status` = 'success'
    -> AND br.`idvendor` = 10
    -> GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 3004089
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.01 sec)

MySQL General log not working on Mac OS X

Posted: 21 Apr 2013 01:03 PM PDT

I want to log all queries to the database running on my MacBook Pro (Mac OS X 10.8.1). I created a file /etc/my.cnf with the following content:

[mysqld]
general_log=1
general_log_file=/Users/wdb/mysql.log

I restarted MySQL and tried some queries on the mysql command line, but nothing gets logged and the file is not created. I also created the file myself afterwards with touch to test, but it remains empty.

When I check, MySQL did read my settings:

mysql> show variables like '%general_log%';
+------------------+----------------------+
| Variable_name    | Value                |
+------------------+----------------------+
| general_log      | ON                   |
| general_log_file | /Users/wdb/mysql.log |
+------------------+----------------------+
2 rows in set (0.01 sec)

Is there anything else I need to do? I suppose any query I type on the command line should get logged, right?

I am using MySQL 5.5.24

Full Text Search With PostgreSQL

Posted: 21 Apr 2013 04:12 AM PDT

I have a table with these rows:

Stickers
------------------------------------------------------
ID | Title                   | Keywords (ts_vector)
------------------------------------------------------
01 | Sticker Case 580H       | '580h':3 'cas':2 'stick':1
02 | Sticker Case 580L       | '580l':3 'cas':2 'stick':1
03 | Sticker Case 580        | '580':3 'cas':2 'stick':1
04 | Sticker Case Plus 3000  | '3000':4 'cas':2 'plus':3 'stick':1

When I search using this script, only row 03 is returned. How do I also return rows 01 and 02?

SELECT *
FROM stickers
WHERE keywords @@@ to_tsquery('case & 580');
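The reason rows 01 and 02 don't match is that `to_tsquery('case & 580')` looks for the exact lexeme `580`, while those rows contain `580h` and `580l`. A prefix match should cover all three; a hedged sketch, assuming the `keywords` tsvectors were built with the same text search configuration the query uses:

```sql
-- ':*' makes '580' a prefix match, so '580', '580h' and '580l' all qualify.
SELECT *
FROM stickers
WHERE keywords @@ to_tsquery('case & 580:*');
```

(`case` is stemmed to `cas` by `to_tsquery` itself under an English-style configuration, which is why it already matched; only the number needed the prefix operator.)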

In MySQL Multiple instance, default instance is running, second instance is not

Posted: 21 Apr 2013 12:03 AM PDT

I set up multiple MySQL instances on an Ubuntu machine. When I start the instances, mysqld1 runs but mysqld2 does not.

root@ubuntu:/var/lib# mysqld_multi report
Reporting MySQL servers
MySQL server from group: mysqld1 is running
MySQL server from group: mysqld2 is not running

Below is my my.cnf file:

[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
basedir         = /usr
tmpdir          = /tmp
skip-external-locking
bind-address            = 127.0.0.1
key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
myisam-recover          = BACKUP
query_cache_limit       = 1M
query_cache_size        = 16M
expire_logs_days        = 10
max_binlog_size         = 100M

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
log        = /var/log/mysqld_multi.log
user       = multi_admin
password   = admin123

[mysqld1]
port       = 3306
datadir    = /var/lib/mysql
pid-file   = /var/lib/mysql/mysqld.pid
socket     = /var/lib/mysql/mysql.sock
user       = mysql
log-error  = /var/log/mysql1.err

[mysqld2]
port       = 3307
datadir    = /var/lib/mysql-databases/mysqld2
pid-file   = /var/lib/mysql-databases/mysqld2/mysql.pid
socket     = /var/lib/mysql-databases/mysqld2/mysql.sock
user       = mysql
log-error  = /var/log/mysql2.err

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

[mysql]

[isamchk]
key_buffer              = 16M

!includedir /etc/mysql/conf.d/

When I check the error log file mysql2.err, the error is:

130120 18:41:59 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql-databases/mysqld2
130120 18:41:59 [Warning] Can't create test file /var/lib/mysql-databases/mysqld2/ubuntu.lower-test
130120 18:41:59 [Warning] Can't create test file /var/lib/mysql-databases/mysqld2/ubuntu.lower-test
130120 18:41:59 [Note] Plugin 'FEDERATED' is disabled.
/usr/sbin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
130120 18:41:59 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
130120 18:41:59  InnoDB: Initializing buffer pool, size = 8.0M
130120 18:41:59  InnoDB: Completed initialization of buffer pool
130120 18:41:59  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.
130120 18:41:59 mysqld_safe mysqld from pid file /var/lib/mysql-databases/mysqld2/mysql.pid ended

Below is my apparmor file.

#include <tunables/global>

/usr/sbin/mysqld {
  capability dac_override,
  capability sys_resource,
  capability setgid,
  capability setuid,
  network tcp,
  /etc/hosts.allow r,
  /etc/hosts.deny r,
  /etc/mysql/*.pem r,
  /etc/mysql/conf.d/ r,
  /etc/mysql/conf.d/* r,
  /etc/mysql/*.cnf r,
  /usr/lib/mysql/plugin/ r,
  /usr/lib/mysql/plugin/*.so* mr,
  /usr/sbin/mysqld mr,
  /usr/share/mysql/** r,
  /var/log/mysql.log rw,
  /var/log/mysql.err rw,
  /var/lib/mysql/ r,
  /var/lib/mysql/** rwk,
  /var/log/mysql/ r,
  /var/log/mysql/* rw,
  /{,var/}run/mysqld/mysqld.pid w,
  /{,var/}run/mysqld/mysqld.sock w,
  /sys/devices/system/cpu/ r,
}

My second instance's data directory path is /var/lib/mysql-databases/mysqld2/mysql. Please let me know how to fix this issue. Thanks in advance.
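One plausible reading of the log: `errno: 13` is permission denied, and the AppArmor profile above only grants mysqld access under `/var/lib/mysql/`, not under the second instance's `/var/lib/mysql-databases/`. A hedged sketch of the additional profile lines that would cover the new datadir (this assumes AppArmor really is the blocker; plain filesystem ownership on the directory is worth checking too):

```
# Added to the /usr/sbin/mysqld profile, alongside the existing
# /var/lib/mysql/ entries:
  /var/lib/mysql-databases/ r,
  /var/lib/mysql-databases/** rwk,
```

After editing the profile it needs to be reloaded, e.g. with `apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld`, before retrying `mysqld_multi start`.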

SQL Server 2012 installation failure An error occurred for a dependency of the feature causing the setup process for the feature to fail

Posted: 21 Apr 2013 01:03 AM PDT

I am installing SQL Server 2012 Developer (from ISO image written on DVD), my machine has Windows 7 (Edit: Windows 7 with service pack 1) and it already has SQL Express and SQL 2008 R2 Express installed in it.

Running the 2012 setup goes fine from Setup Support Rules through to Installation Progress. Partway through Installation Progress, the installation fails.

Windows shows that the following components could not be installed:

  • Management Tools Complete: Failed
  • Client Tools Connectivity: Failed
  • Client Tools SDK: Failed
  • Client Tools Backwards Compatibility: Failed
  • Management Tools Basic: Failed
  • SQL Server Data Tools: Failed
  • Reporting Services - Native: Failed
  • Database Engine Services: Failed
  • Data Quality Services: Failed
  • Full-Text and Semantic Extractions for Search: Failed
  • SQL Server Replication: Failed
  • Integration Services: Failed
  • Analysis Services: Failed
  • SQL Client Connectivity SDK: Failed

The failure detail for every component gives the same information:

Action required: Use the following information to resolve the error, and then try the setup process again. Feature failure reason: An error occurred for a dependency of the feature causing the setup process for the feature to fail.

The summary log file provides the following details: (see on pastebin.com)

postgis problem with shortest distance calculation

Posted: 21 Apr 2013 08:03 AM PDT

While working with the PostGIS pgRouting extension to calculate the distance between two roads (lines), I found the shortest_path function.

But the logic is based on a start point (start_id) and an end point (end_id), while in my data a linestring contains many internal points, e.g. 'linestring(1 1,2 2,3 3,4 4,5 5)'.

It takes (1 1) as the start point and (5 5) as the end point.

If another line starts at (5 5), e.g. 'linestring(5 5,6 6)', it is shown as a route.

But a line that crosses one of the linestring's internal points, like (2 2, 3 3, 4 4), is not reported as connected. Example:

table roads:
id  name  way
1   A     linestring(1 1,2 2,3 3,4 4,5 5)
2   B     linestring(5 5,6 6)
3   c     linestring(2 1,2 2,2 3)

If I apply the shortest_path function from point (1 1) to (6 6) it shows the way, but from (1 1) to (2 3) it shows nothing, even though there is a route (1 1, 2 2, 2 3).

Can anyone please help me find a solution?

Regards, Deepak M
