Re: Which config for large concurrent additions
Posted by: Inbox Roel
Date: March 16, 2016 01:11PM

Hi Rick,

thanks for your reply.

I don't have a CREATE or SELECT yet. I first want to get a clear picture of how to organize the data before starting.

Summarizing:
1) Not using BLOBs (because I will indeed be populating and manipulating the results for my dashboards/analysis).
2) Not doing:
->Profiles_table (with all profiles and an ID per profile)
->Profile_1_table (table per profile with the scraped results)
->Profile_2_table (table per profile with the scraped results)
etc.

Is it then maybe a good idea to:
->Iteration_table (info on when each scraping iteration was run, with an ID per iteration)
->Profiles_table (with all profiles and an ID per profile)
->Iteration_1_table (table with the scraped results for all profiles in the iteration)
->Iteration_2_table (table with the scraped results for all profiles in the iteration)
etc.
Looking at this option per cycle: with an estimated maximum of 10k profiles and a maximum of 200 results per profile, each iteration table would hold about 2 million rows, with about 50 columns. Is this within the limits of a table?
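To make it concrete, this is roughly the layout I have in mind (all column names below are just placeholders, not my actual schema):

```sql
-- One row per scraping run
CREATE TABLE Iteration_table (
    iteration_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    scraped_at   DATETIME NOT NULL
) ENGINE=InnoDB;

-- One row per profile
CREATE TABLE Profiles_table (
    profile_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    profile_name VARCHAR(255) NOT NULL
) ENGINE=InnoDB;

-- One table per iteration (Iteration_1_table, Iteration_2_table, ...),
-- or alternatively a single results table with an iteration_id column,
-- which would hold the same data without creating tables on the fly:
CREATE TABLE Results_table (
    iteration_id INT UNSIGNED NOT NULL,
    profile_id   INT UNSIGNED NOT NULL,
    -- ... the ~50 result columns here ...
    KEY (iteration_id),
    KEY (profile_id),
    FOREIGN KEY (iteration_id) REFERENCES Iteration_table (iteration_id),
    FOREIGN KEY (profile_id)   REFERENCES Profiles_table (profile_id)
) ENGINE=InnoDB;
```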

Looking at the INSERT: I've uploaded 20k profiles (with 20 columns) from a CSV via the import function in MySQL Workbench. This took about 2.5 hours. I'm worried that the 2 million rows from one iteration may take ages. What is your view on this?
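Would something like LOAD DATA INFILE be the faster route for this kind of bulk load, instead of the Workbench import wizard? For example (the file path, table name, and delimiters are placeholders for my setup):

```sql
LOAD DATA LOCAL INFILE '/path/to/profiles.csv'
INTO TABLE Profiles_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
```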
