[SOLVED] Can having a lot of DBSets in the DbContext file affect performance?


We have a large number of tables (around 1800) in our MySQL database, and most of the time each developer creates their own project and adds a context with just the necessary tables as DbSets. One of my colleagues suggested creating a single project that contains every single table in the database and sharing it among all projects. Could this affect performance? For instance, would it slow down the program at startup or at runtime?


Large DbContexts can be slower to initialize, which is a one-off cost when the DbContext is first used. This is also when a DbContext might fail due to an invalid schema configuration. You can "warm" the DbContext by issuing a simple query when your service/application starts up. For instance:

// Check application DB version and warm the DbContext.
using (var context = new AppDbContext())
{
    var dbVersion = context.AppConfig.Select(x => x.DbVersion).Single();
    if (dbVersion < MinimumDbVersion)
        throw new ApplicationException("The application database version is too low.");
}

This assumes the database has an AppConfig table containing configuration settings, including a DbVersion. The warming query can be anything: loading configuration settings, or simply doing a Count of Users or such. As long as a query executes, the DbContext's one-time model initialization will run.
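In an ASP.NET Core application, one natural place to run such a warming query is during startup, before the host begins serving requests. A minimal sketch, assuming the hypothetical AppDbContext and AppConfig types from the snippet above and an illustrative MinimumDbVersion constant:

```csharp
// Program.cs (sketch): pay the model-initialization cost once, before serving traffic.
const int MinimumDbVersion = 1; // hypothetical minimum schema version

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<AppDbContext>();
var app = builder.Build();

using (var scope = app.Services.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();

    // The first query triggers model initialization and surfaces any
    // schema-configuration errors here, rather than on a user request.
    var dbVersion = context.AppConfig.Select(x => x.DbVersion).Single();
    if (dbVersion < MinimumDbVersion)
        throw new ApplicationException("The application database version is too low.");
}

app.Run();
```

Failing fast at startup this way means a misconfigured mapping stops deployment immediately instead of surfacing as an error on the first user request.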

The time needed to initialize isn't really linked to the number of DbSet declarations so much as to the total number of entities the DbContext needs to be aware of. Even if you have 100 "top-level" entities with a DbSet defined for each, if those entities fan out through relationships to another 1,700 entities, that is effectively the same amount of work as 1,800 DbSets.
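To illustrate with hypothetical Order and Customer entities: a context that declares only a single DbSet still pulls every entity type reachable through navigation properties into its model, and all of them are configured during first-use initialization:

```csharp
// Hypothetical entities: Order references Customer via a navigation property.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; } // navigation property to Customer
}

public class OrderContext : DbContext
{
    // Only one DbSet is declared here...
    public DbSet<Order> Orders { get; set; }

    // ...but EF discovers Customer through Order's navigation property,
    // so the model contains both entity types, and both are mapped and
    // validated when the context is first used.
}
```

With 1,800 interrelated tables, the reachable entity graph, not the DbSet count, is what determines the initialization cost.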

Bounded contexts are a pattern where you split entities into "contexts", each DbContext serving a key area of the system. This reduces the number of entities each DbContext needs to know about, speeding up initialization, and it also helps organize your code around the entities, and even the representations of the underlying data, that are relevant to that area.

Entities can be shared between DbContexts, though you should ensure that only one DbContext is responsible for writing each entity. For instance, you might define separate DbContexts for managing Customers and managing Orders. Orders need to know about their customers, but it would be the CustomerDbContext's responsibility to create/update customer information, not the OrderDbContext's. One option is to define something like a CustomerSummary entity within the OrderDbContext that maps just the essential details of a customer relevant to an order. Ideally, on SaveChanges each DbContext should inspect which entity types appear in the change tracker as inserted/updated/deleted and throw an exception if anything other than the entity types it is responsible for shows up.
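A minimal sketch of that SaveChanges guard, using the hypothetical OrderDbContext, Order, and CustomerSummary types mentioned above:

```csharp
// Hypothetical bounded context for orders. CustomerSummary is a read-only
// projection of the Customers table; only Order entities may be written here.
public class OrderDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<CustomerSummary> CustomerSummaries { get; set; }

    // The only entity types this context is allowed to write.
    private static readonly Type[] WritableTypes = { typeof(Order) };

    public override int SaveChanges()
    {
        // Find any tracked insert/update/delete for a type we don't own.
        var offender = ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added
                     || e.State == EntityState.Modified
                     || e.State == EntityState.Deleted)
            .FirstOrDefault(e => !WritableTypes.Contains(e.Entity.GetType()));

        if (offender != null)
            throw new InvalidOperationException(
                $"OrderDbContext is not responsible for writing {offender.Entity.GetType().Name}.");

        return base.SaveChanges();
    }
}
```

Accidentally modifying a CustomerSummary through this context then fails loudly at save time instead of silently writing to a table another context owns.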

The performance killer usually blamed on a DbContext's size is really letting DbContext instances live too long and track too many entities. The more entity types a DbContext knows about, the easier it is for an instance to end up tracking far more entities than the developers realize while it stays open. There is an assumption that a bigger context is "more expensive" to create, so it gets opened and passed around/persisted longer to avoid that perceived cost, and performance then suffers as the instance gradually gets bloated with tracked entities.
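The antidote is to keep contexts short-lived and to skip tracking for read-only queries. A sketch, assuming hypothetical OrderDbContext, Order, and OrderSummary types:

```csharp
// Sketch: a short-lived context plus AsNoTracking for reads, so the
// change tracker never accumulates entities that will never be saved.
public IList<OrderSummary> GetRecentOrders()
{
    using (var context = new OrderDbContext())
    {
        return context.Orders
            .AsNoTracking() // read-only query: entities are not tracked
            .Where(o => o.CreatedOn >= DateTime.UtcNow.AddDays(-7))
            .Select(o => new OrderSummary { Id = o.Id, Total = o.Total })
            .ToList();
    }
    // The context is disposed here; nothing lingers in a long-lived tracker.
}
```

Creating a DbContext per unit of work is cheap once the model has been initialized, so there is little to gain from keeping one instance alive across operations.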

Answered By – Steve Py

Answer Checked By – Cary Denson (BugsFixing Admin)
