The most important question to ask about your system: Which engine? (Probably it is InnoDB.)
The read/write split is not much of a factor in tuning.
1000 "users" is vague. My bank has a million "users", but they don't have to worry about a million simultaneous logins. An online multi-user game with 1000 users does need to handle hundreds at once -- but that is at the front end, not at the database.
Do you have a "database layer"? I hope you do not have 1000 different logins to MySQL. MySQL could handle that, but it is not wise to expose the database _directly_ to users. Instead, there should be a layer, written in PHP (or Java, or ...), that talks to the users on one side and to the database on the other.
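As a sketch of that layering (the class names, the stub connection, and the pool size of 5 are invented for illustration; a real layer would use an actual MySQL driver and connection pool):

```python
import queue

class StubConnection:
    """Stand-in for a real MySQL connection; it only counts
    how many connections were ever created."""
    _created = 0

    def __init__(self):
        StubConnection._created += 1

    def query(self, sql):
        return f"result of: {sql}"

class DatabaseLayer:
    """One small shared pool serves all users; no user ever
    gets its own MySQL login."""

    def __init__(self, pool_size=5):
        self._pool = queue.Queue()
        for _ in range(pool_size):
            self._pool.put(StubConnection())

    def run(self, sql):
        conn = self._pool.get()      # blocks if all connections are busy
        try:
            return conn.query(sql)
        finally:
            self._pool.put(conn)     # hand the connection back for reuse

layer = DatabaseLayer(pool_size=5)
# Simulate 1000 "users" issuing queries through the layer.
for user_id in range(1000):
    layer.run(f"SELECT balance FROM accounts WHERE user_id = {user_id}")

print(StubConnection._created)  # 5 -- five connections served 1000 users
```

The point: 1000 users at the front end map to a handful of MySQL connections at the back end.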
> select queries - 80 to 90%
But how often (queries/sec)? How complex?
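If you do not yet know the rate, a rough average since server startup can be read from the status counters:

```sql
SHOW GLOBAL STATUS LIKE 'Questions';  -- total statements received
SHOW GLOBAL STATUS LIKE 'Uptime';     -- seconds since startup
-- queries/sec, on average, is roughly Questions / Uptime
```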
> Disk space is 500 GB (can assume avg. data size will be 350 GB)
This does not leave much room for maintenance; an ALTER TABLE, for example, may temporarily need a full extra copy of the table it is rebuilding.
> (can assume avg. data size will be 350 GB)
That's possibly in the 99th percentile of systems discussed here. You need to look carefully at many aspects as you design the schema. A critical thing to do early is to pick datatypes that are as small as can safely be used. That and many other tips can be found here:
http://mysql.rjweb.org/doc.php/ricksrots
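To see why shrinking datatypes matters at this scale, here is a back-of-the-envelope sketch. The 1-billion-row count and the column are made up for illustration; the byte sizes (TINYINT = 1, BIGINT = 8) are MySQL's fixed storage sizes for those types:

```python
ROWS = 1_000_000_000  # hypothetical big table

# Suppose a "status" column only ever holds values 0-9.
bigint_bytes  = 8 * ROWS   # careless choice: BIGINT (8 bytes/row)
tinyint_bytes = 1 * ROWS   # smallest safe choice: TINYINT (1 byte/row)

saved_gb = (bigint_bytes - tinyint_bytes) / 1e9
print(f"{saved_gb:.0f} GB saved on one column")  # 7 GB saved on one column
```

And that is before counting the secondary indexes that copy the column, where the same savings apply again.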
I will explain things further if needed.
> Insert queries - approx 5 to 6%
At only 5-6% inserts, won't it take a long time to fill up 350GB?
Back to your question. This will explain how to tune the critical values in my.cnf:
http://mysql.rjweb.org/doc.php/memory
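For example (assuming a machine dedicated to MySQL with 16GB of RAM; your numbers will differ), the single most important setting is the InnoDB buffer pool, usually set to roughly 70% of available RAM:

```ini
[mysqld]
# ~70% of 16GB on a dedicated MySQL server
innodb_buffer_pool_size = 11G
```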
Beyond that, performance is a function of schema and application design.