Could concurrent user-triggered data fetches and inserts lead to deadlocks in a multi-user ASP.NET Core + MSSQL application?

I'm facing a tricky situation that may stem from a design that wasn't fully thought through, and I'm hoping to understand whether a deadlock is a realistic cause, and if so, how to prevent similar issues in future designs.
System Overview:
The software runs on a customer’s server.
It's a multi-tenant, multi-user system.
Users can press a button at any time to trigger a fetch cycle, which simply runs a SELECT query against an MSSQL database table.
At the same time, other users can place orders via an ASP.NET Core API endpoint.
The orders are inserted into the same or a related table using Entity Framework Core (INSERT statements).
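For context, the insert path looks roughly like this. This is a simplified sketch, not our real code: the endpoint shape, `AppDbContext`, and `Order` entity are illustrative stand-ins.

```csharp
// Minimal ASP.NET Core endpoint sketch (names are illustrative).
// Each request gets its own scoped DbContext; SaveChangesAsync issues
// the INSERT inside an implicit transaction at the default READ COMMITTED level.
app.MapPost("/orders", async (OrderDto dto, AppDbContext db) =>
{
    db.Orders.Add(new Order
    {
        TenantId = dto.TenantId,
        CreatedAt = DateTime.UtcNow
    });

    // This is the call that fails with the 30s command timeout.
    await db.SaveChangesAsync();

    return Results.Created();
});
```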
Observed Problem:
Recently, we've had multiple cases where data inserts failed due to SQL command timeouts (the 30-second default).
According to available monitoring data, CPU and RAM usage were normal during that time.
Unfortunately, the events occurred about a week ago, and I lack full trace logs or SQL diagnostics.
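One thing I've since learned: even with no tracing configured, SQL Server's always-on system_health Extended Events session retains recent deadlock graphs in its ring buffer, so a query like the following (standard DMV/XEvent pattern, nothing specific to our schema) should tell me whether deadlocks actually occurred, if the events are recent enough to still be in the buffer:

```sql
-- Pull recent deadlock graphs from the built-in system_health session
SELECT
    xed.value('@timestamp', 'datetime2') AS deadlock_time,
    xed.query('.')                       AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions       AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS tab
CROSS APPLY tab.target_data.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]'
) AS q(xed);
```

Note that pure blocking (an INSERT waiting on locks until the command timeout fires) would not show up here, since SQL Server only reports a deadlock when it detects a cycle and kills a victim; a plain timeout on the client side leaves no deadlock graph.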
What I'm wondering:
Could this be caused by deadlocks or locking/blocking due to multiple simultaneous reads and writes?
Even if the SELECT queries are relatively simple, could they still interfere with inserts if many users trigger fetch cycles at once?
Would changing isolation levels, restructuring the table(s), or adjusting the way inserts and reads happen help prevent this?
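On the isolation-level question, one concrete change I'm weighing: at the default READ COMMITTED level, plain SELECTs take shared locks that can queue behind writers (and vice versa), whereas enabling read committed snapshot isolation (RCSI) makes readers work from row versions instead of blocking. My understanding of the switch is below; the database name is a placeholder, and WITH ROLLBACK IMMEDIATE is illustrative since the ALTER needs exclusive use of the database:

```sql
-- After this, READ COMMITTED readers consume row versions (kept in tempdb)
-- instead of taking shared locks, so SELECTs no longer block INSERTs.
ALTER DATABASE MyAppDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```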
Goal:
I’m trying to understand:
whether this is a design flaw (and what principles I might have missed), and how to improve the resilience of systems like this in the future (e.g., batching, decoupling reads from writes, CQRS)
Any insights would be greatly appreciated! (And before you ask: yes, I drafted this with ChatGPT, and I think it did well.)