ASP.NET Core 2 and Vue.js

By Stuart Ratcliffe
Overview of this book

This book will walk you through the process of developing an e-commerce application from start to finish, utilizing an ASP.NET Core web API and a Vue.js Single-Page Application (SPA) frontend. We will build the application using a feature-slice approach, whereby in each chapter we will add the frontend and backend changes required to complete an entire feature. In the early chapters, we'll keep things fairly simple to get you started, but by the end of the book, you'll be utilizing some advanced concepts, such as server-side rendering and continuous integration and deployment. You will learn how to set up and configure a modern development environment for building ASP.NET Core web APIs and Vue.js SPA frontends. You will also learn how ASP.NET Core differs from its predecessors, and how we can use those changes to our benefit. Finally, you will learn the fundamentals of building modern frontend applications using Vue.js, as well as some of the more advanced concepts that can help make you more productive in your own applications in the future.

EF Core – what's new?

These days, it is rare to see an ASP.NET application that doesn't make use of some kind of ORM, and even rarer to see one that uses anything other than EF. There are certainly other options, such as the much lighter Dapper, or Marten, a library that uses the JSONB capabilities of PostgreSQL to turn it into a full-featured NoSQL document store. However, SQL Server is where most .NET developers' comfort lies, so we'll stick with what we know for the examples in this book.

Configuring relationships

In older versions of EF, you could get away with leaving it to do its thing without manually intervening with the way it builds out the relationships between tables in the database. It could handle one-to-one, one-to-many, and many-to-many relationships out of the box, meaning that unless you had a super complicated domain model, you didn't need to do much to get a working database.

In EF Core, only one-to-one and one-to-many relationships can be inferred without manual configuration. I don't see this as a huge problem, as it only takes a few extra lines of code to tell the fluent model builder how to configure a many-to-many relationship:

protected override void OnModelCreating(ModelBuilder builder)
{
    // OrderItem is the join entity linking orders and products; giving it a
    // composite primary key is all EF Core needs to map the many-to-many join.
    builder.Entity<OrderItem>()
        .HasKey(x => new { x.OrderId, x.ProductId });

    base.OnModelCreating(builder);
}

Notice how we only have to instruct EF what to do with the join table. From these few lines of code, it can now go away and build the database for us without any problems.
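For reference, a minimal set of entity classes that this configuration assumes might look something like the following; the exact property names are illustrative rather than lifted from the book's domain model:

public class Order
{
    public int Id { get; set; }
    // one Order can contain many OrderItems
    public List<OrderItem> Items { get; set; }
}

public class Product
{
    public int Id { get; set; }
    // one Product can appear on many OrderItems
    public List<OrderItem> OrderItems { get; set; }
}

// The join entity; EF Core 2 has no implicit many-to-many support, so the
// relationship is modeled as two one-to-many relationships meeting in this class.
public class OrderItem
{
    public int OrderId { get; set; }
    public Order Order { get; set; }

    public int ProductId { get; set; }
    public Product Product { get; set; }
}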

Global query filters

One of the features that other ORMs had that EF didn't was the concept of global query filters. These filters are a means of telling EF to automatically apply a LINQ predicate to every query executed against the entity type specified in the filter. A common use case for this kind of filter is when an application uses the concept of soft deletes. Rather than actually deleting the data, it is marked with a Boolean flag instead.

The following code shows how we can register a global query filter on a DbContext entity to only include records where the IsDeleted flag is set to false:

protected override void OnModelCreating(ModelBuilder builder)
{
    // Every query against the Orders set automatically excludes
    // soft-deleted records unless IgnoreQueryFilters() is used.
    builder.Entity<Order>()
        .HasQueryFilter(x => !x.IsDeleted);

    base.OnModelCreating(builder);
}

We could also use these global filters in multitenant applications, where each tenant should only be able to access the data associated with their tenancy. This is a much better solution than applying these filters manually on every query, which is exceedingly error-prone, as it is all too easy to forget one.
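As a rough sketch of that idea, a query filter can reference a field on the DbContext itself, so every query is automatically scoped to the current tenant. The TenantDbContext class, the _tenantId field, and the TenantId property on Order below are hypothetical and purely for illustration:

public class TenantDbContext : DbContext
{
    // Assumed to be populated per request, for example from the
    // authenticated user's claims; purely illustrative.
    private readonly int _tenantId;

    public TenantDbContext(DbContextOptions<TenantDbContext> options, int tenantId)
        : base(options)
    {
        _tenantId = tenantId;
    }

    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        // Every query against Orders is automatically limited to the current tenant.
        builder.Entity<Order>()
            .HasQueryFilter(x => x.TenantId == _tenantId);

        base.OnModelCreating(builder);
    }
}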

Compiled queries

EF now supports the concept of explicitly compiled queries. These provide a number of benefits, most notably improved query performance, but they also make it easy to run the same query in multiple places within the code.

The idea is pretty simple; if we have a query that is run many times within our application, then we can instruct EF to compile it. It is compiled only once, but we can run it as many times as we like, with different parameters each time. The following code shows an example of how we can define a compiled query and then execute it:

public static class CompiledQueries
{
    // Compiled once, then reused; the delegate can be invoked with any
    // DbContext instance and order id.
    public static readonly Func<ApplicationDbContext, int, Order> OrderById =
        EF.CompileQuery((ApplicationDbContext db, int id) =>
            db.Orders.Single(c => c.Id == id));
}

[HttpGet]
public IActionResult CompiledQuery()
{
    var order = CompiledQueries.OrderById(_context, 147);
    return Ok(order);
}
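Compiled queries aren't limited to returning a single entity; an overload of EF.CompileQuery accepts an expression that returns an IQueryable and hands back a delegate yielding an IEnumerable. The OrdersSince delegate and the OrderDate property in the sketch below are assumptions for illustration only, and could sit alongside OrderById:

public static class CompiledQueries
{
    // Returns every order placed on or after the supplied date,
    // executed against whichever context is passed in.
    public static readonly Func<ApplicationDbContext, DateTime, IEnumerable<Order>> OrdersSince =
        EF.CompileQuery((ApplicationDbContext db, DateTime from) =>
            db.Orders.Where(o => o.OrderDate >= from));
}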

In-memory provider for testing

It has always been exceptionally difficult to write tests around code that depends on an EF DbContext. To make testing easier, developers often resorted to implementing variations of the repository pattern so that the business layer could depend on a repository interface instead. This had the desired effect of making testing easier, but a repository pattern layered over the top of EF is quite simply unnecessary, as the DbContext is already an implementation of both the repository and unit of work patterns.

EF Core has addressed this issue by providing us with an in-memory version that we can use for our tests. It is now a fairly simple task to create an in-memory database and seed it with test data before each test is run. This ensures that the database is in a known state for each test, without the complexity of attempting to mock the DbContext!

The following example shows how we can configure a test DbContext backed by an in-memory SQLite database and seed it with test data:

public static ApplicationDbContext GetDbContext(params object[] seedData)
{
    // An in-memory SQLite database lives only as long as this connection stays open.
    var connection = new SqliteConnection("DataSource=:memory:");
    connection.Open();

    var options = new DbContextOptionsBuilder<ApplicationDbContext>()
        .UseSqlite(connection)
        .Options;

    var context = new ApplicationDbContext(options);

    // Create the schema, then seed any test data we were given.
    context.Database.EnsureCreated();
    if (seedData != null && seedData.Length > 0)
    {
        context.AddRange(seedData);
        context.SaveChanges();
    }

    return context;
}

We can then pass this DbContext to a dependent controller within the scope of our unit tests:

[Fact]
public async Task Test()
{
    // In a real test we would pass seed data into GetDbContext so that
    // the assertion below has products to find.
    using (var context = GetDbContext())
    {
        // arrange
        var controller = new ProductsController(context);

        // act
        var result = await controller.GetProducts();

        // assert
        Assert.NotEmpty(result);
    }
}

The only thing to note when using the in-memory provider is that it isn't a full relational database and doesn't try to mimic one. I've noticed a few odd behaviors when using it in my applications, and have found an in-memory SQLite provider to be far more stable and predictable. There is plenty of documentation on both options in Microsoft's own ASP.NET Core documentation.
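For completeness, this is roughly how the alternative looks if you opt for the dedicated in-memory provider from the Microsoft.EntityFrameworkCore.InMemory package instead; the database name passed to UseInMemoryDatabase is arbitrary:

var options = new DbContextOptionsBuilder<ApplicationDbContext>()
    .UseInMemoryDatabase("TestDb")
    .Options;

// No connection to open and no schema to create; entities are simply
// held in memory under the database name given above.
var context = new ApplicationDbContext(options);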