Astronomical Calculations: Delta T

Historically, Greenwich Mean Time (GMT) — solar noon at the Royal Observatory in Greenwich, England — was used as a universal reference point for trains and ships across the British Empire. It is a timekeeping system based on the Earth's rotation. And it is a "mean" (average) because the exact moment the Sun crosses the Observatory's meridian varies throughout the year. (This is a function of the Earth's elliptical orbit and its axial tilt, which I won't go into here; see the equation of time for a full explanation.)

Eventually we needed something more universal than a time standard kept in sync with one spot on Earth. Enter the atomic clock. An atomic clock can keep an exact 24 hours, day after day, century after century, regardless of when the Sun crosses the meridian of any spot on Earth. Using the cesium-133 atom as its reference, an atomic clock has an error of about one second in 1.4 million years! We refer to this time as International Atomic Time (TAI).

We now have a solid, universal fixed-length period. Is that it? Can we now just use TAI for our civil time? No, because the Earth and the natural forces that affect it are not precise. Things speed up and slow down due to tidal friction and other smaller perturbations. In a nutshell, the Earth cannot keep in sync with the precision of an atomic clock. Its rotation is slowing down, just like a toy top that a child spins up. The top spins very fast in the first few seconds and then over time loses its angular momentum and gradually spins slower. Our Earth is like a top that is slowing down. But not to worry: long before it wobbles to a stop it will have been swallowed up by an expanding Sun!

So on the one hand we have a very precise TAI. And on the other hand we have an uncooperative Earth that is slowing down gradually over time. So in 1972, an international standards body adopted the current form of Coordinated Universal Time (UTC), a time standard like GMT that is still tied to the Earth's rotation, but with adjustments based on the precision of the underlying TAI.

The idea is to allow UTC to follow the irregularity of the Earth’s rotational period but also to adjust it at regular intervals according to TAI. These adjustments are made by an international committee charged with tracking this deviation. Twice a year they decide whether to add leap seconds to UTC. It’s all incredibly nerdy and I’m going to leave that right there. But as I write this on 1 Feb 2020, I took note of the time as it reached 16:39:00 TAI. At that exact moment it was also 16:38:23 UTC. So TAI is currently 37 seconds ahead of UTC. (This is because TAI was 10 seconds ahead of UTC when it was adopted in 1972 and there have been 27 leap seconds since then.)

That's all well and good for civil time. But in astronomy, in order to calculate eclipses or transits you need to know the exact orbital positions of the Sun, Moon, and planets at a given moment in time. So we need yet another time scale called Terrestrial Time (TT). This is another uniform time scale, one that holds up over very long periods of time. Along with TT comes a current baseline called the J2000.0 epoch (1 Jan 2000 at 12:00 TT). Both TT and the current epoch will come into play shortly.

Finally that brings me to Delta T. From the wiki:

In precise timekeeping, ΔT (Delta T, delta-T, deltaT, or DT) is a measure of the cumulative effect of the departure of the Earth's rotation period from the fixed-length day of atomic time. Formally it is the time difference obtained by subtracting Universal Time (UT, defined by the Earth's rotation) from Terrestrial Time (TT, independent of the Earth's rotation): ΔT = TT − UT. The value of ΔT for the start of 1902 is approximately zero; for 2002 it is about 64 seconds. So the Earth's rotations over that century took about 64 seconds longer than would be required for days of atomic time.

In his book Astronomical Algorithms, Meeus lists a general algorithm you can follow to calculate an approximate value of ΔT. Here we let t be the time measured from the J2000.0 epoch in centuries (which is why we divide by 100):

$$t = \frac{year - 2000}{100}$$

For years after 2000 Meeus introduces another calculation, from a paper published in Paris by Chapront, Chapront-Touzé & Francou (1997):

$$ΔT = 102 + 102 t + 25.3 t^2$$

I do not have access to their paper so I can’t explain the use of these “magic numbers” but you can see them in use on this Delta T calculator on a retro 90s-era (but awesome) web site maintained by Professor van Gent of the University of Utrecht.

Next we add the following correction for years between 2000 and 2100:

$$0.37 \times (year - 2100)$$

That’s it. Let’s implement this algorithm for the year 2020. I’m using C# language syntax and Visual Studio Code as my editor:

using System;

class Program
{
    static void Main(string[] args)
    {
        var year = 2020;

        // centuries from the J2000.0 epoch; divide by 100.0 so the
        // compiler doesn't truncate the result with integer division
        var t = (year - 2000) / 100.0;

        var deltaT = 102 + (102 * t) + (25.3 * Math.Pow(t, 2));
        var correction = 0.37 * (year - 2100);
        var correctedDeltaT = deltaT + correction;

        Console.WriteLine("Corrected DT for the year {0} is: {1}", 
            year, 
            correctedDeltaT);
    }
}

And my output is +93.812 seconds. (Delta T is positive in our era, though it was slightly negative for a few decades before 1902, which is why the wiki quote above says it was approximately zero at the start of 1902.) So by this general formula the accumulated difference between uniform time and Earth-rotation time at the start of 2020 is roughly 93.8 seconds. Hold that thought, because as we'll see below it's an overshoot.

Now you'll recall in my discussion of TAI and UTC above that when TAI was at 16:39:00 UTC was at 16:38:23, a difference of 37 seconds. What's the relationship here? Well, TT is always 32.184 seconds ahead of TAI. Why 32.184 seconds? In short: it's the offset that made TT continuous with Ephemeris Time, the older standard it replaced. With that bit of hand-waving out of the way here are our relationships when TAI was at 16:39:00:

TT == 16:39:32.184
TAI == 16:39:00
UTC == 16:38:23

So TT is about 69 seconds ahead of UTC. (Strictly speaking ΔT = TT − UT1, but the leap seconds keep UTC within 0.9 seconds of UT1, so TT − UTC is a good approximation of ΔT.) But if ΔT is about 69 seconds, what about our algorithm above that produced 93.8? Meeus's general expression is a long-term fit intended to span many centuries, and in the early 21st century it overshoots the observed value considerably; that is why NASA publishes a tighter fit for current dates, which we'll look at next. Remember the leap second additions, too: every time a leap second is added to UTC the gap between TT and UTC widens. Five leap seconds have been added since the J2000.0 epoch began (2005, 2008, 2012, 2015, and 2016).

NASA has published another algorithm for deriving ΔT that is specific to years between 2005 and 2050. If I implement their algorithm in C# we get this:

private static double ApproximateDeltaT(int year)
{
    // NASA's polynomial fit, valid for years 2005 through 2050
    var t = year - 2000;
    return 62.92 + 0.32217 * t + 0.005589 * t * t;
}

This function returns 71.599 for the year 2020, much closer to the observed TT − UTC difference of about 69 seconds. These are approximations, so we mustn't worry that the two algorithms don't produce exactly the same results. The point to all of this is to know exactly when in TT an event like the next solar eclipse will occur so that you can convert it to UTC in order to observe it!
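
If you want a rough sketch of that final conversion: TT leads UTC by the fixed 32.184 seconds plus the current TAI − UTC offset (37 seconds as I write this, growing with every leap second), so converting is just subtraction. Here's a minimal sketch, assuming you track the TAI − UTC offset yourself (the class and method names are mine):

using System;

public static class TimeScales
{
    /// <summary>
    /// Converts a Terrestrial Time (TT) instant to UTC. The 32.184s
    /// TT - TAI offset is fixed; the TAI - UTC offset (37s in early 2020)
    /// changes whenever the IERS adds a leap second.
    /// </summary>
    public static DateTime TerrestrialToUtc(DateTime tt, int taiMinusUtcSeconds = 37)
    {
        const double ttMinusTai = 32.184;
        return tt.AddSeconds(-(ttMinusTai + taiMinusUtcSeconds));
    }
}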

ASP.NET Core Multi-Tenant API

Introduction

My father was a cabinet maker. A customer would hire him to build cabinets for their kitchen. And every kitchen was different. Some had several windows, others just one over the sink, and yet others no windows at all. Some ceilings were higher than others. And so on. He would measure each kitchen and build custom cabinets that fit the space. Obviously this was expensive and only customers with some serious money could afford to hire him. Now it's common to build generic cabinets that are one-size-fits-all. A cabinet maker can crank these out by the hundreds and installers can mount them in most standard-sized kitchens. If there is a need, custom cabinets can still be built and installed alongside the generic cabinets.

Generic Building Blocks

For years and years our bread and butter has been like that of the custom cabinet maker. Each app we wrote was lovingly hand-crafted just for a particular customer. We gave no thought to reuse. A few years ago Ben Busse wrote an excellent article noting that "building one-off APIs and a custom backend for each and every new application is untenable." Over time the dozens or hundreds of different APIs in an enterprise become a maintenance nightmare. Data governance? Security? Documentation? Discoverability? It can get pretty crazy, pretty fast. Busse makes the case for building reusable (generic) REST APIs for common data.

In my shop we’ve been having this same discussion. It seems that every time a new app comes along there’s always the need to store people with first, middle, and last names. You just know that someone is going to write a PERSON.SQL script. Does the next app six months later have the same requirement? Let’s create yet another PERSON table! Maybe we copy and paste from another team’s project. Maybe we start from scratch. Who knows.

What if there were a better way? What if you took generic elements like PERSON that are common across business domains and encapsulated that data in a reusable API? Then development teams could leverage the reusable APIs and focus on creating new APIs that are unique to their business domains. It is this reusability (via multi-tenancy) that I want to address.

Multi-Tenancy

Broadly speaking, multi-tenancy can be of two types: logical isolation and physical isolation. It is the latter in which I’m interested. Suppose each development team wants to have its own copy of the database. Or suppose the customer requires it. You would have this architecture (see Multi-tenant SaaS patterns):

Shared API with physically isolated databases

The catalog is a data store that records, for each tenant, which database that tenant is assigned to. It could be a SQL Azure instance, or Azure Table Storage, or even an appsettings.json file. If all tenant databases are on the same SQL Azure server in the same resource group you could group them into an elastic pool. The article I referenced above has guidance on when you might want to do that.
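
A catalog entry itself needs very little. Here is a minimal Tenant class matching the shape that the CatalogService below expects:

public class Tenant
{
    public string TenantId { get; set; }
    public string DatabaseServerName { get; set; }
    public string DatabaseName { get; set; }
}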

Implementation

The complete source code for this project is available on GitHub in this repo: https://github.com/jamesstill/MultiTenantWidgetApi.

I’ll use Visual Studio 2019 Community Edition with the latest (as of this writing) .NET Core SDK v2.2.401. This will probably be the last time I use Core 2.2 in a web project since 3.0 will be out soon!

I’ll begin by creating a new ASP.NET Core 2.2 API project. In addition to the out-of-the-box NuGet packages I’m going to add EF Core:

  • Microsoft.EntityFrameworkCore.InMemory
  • Microsoft.EntityFrameworkCore.SqlServer
  • Microsoft.EntityFrameworkCore.Tools

In a real production app I would want to use HMAC auth or an OpenID Connect (OIDC) layer such as IdentityServer4, Auth0, or Okta in my API to authenticate JSON web tokens. But to keep things really simple I'm going to use Basic Auth with the SquareWidget.BasicAuth.Core NuGet package I wrote.

My strategy here is to use the authenticated security principal (User.Identity.Name) as the tenant ID in the app. So obviously this only works if tenant names are unique. Typically, the API owner issues the credential/tenant name to the caller so this should be no problem.
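
To make that concrete, here's a sketch of how the controller might bundle the principal name with the base connection string into the TenantSettings object consumed by the CatalogService below. (The _httpContextAccessor and _configuration field names are my own placeholders, not necessarily what the repo uses.)

public class TenantSettings
{
    public string TenantId { get; set; }
    public string DefaultConnectionString { get; set; }
}

// inside the controller: the authenticated principal name is the tenant ID
var settings = new TenantSettings
{
    TenantId = _httpContextAccessor.HttpContext.User.Identity.Name,
    DefaultConnectionString = _configuration.GetConnectionString("DefaultConnection")
};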

Configuration

I’m going to add a default (base) connection string to my appsettings.json file:

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=tcp:sample.database.windows.net,1433;Database=sample;..."
  },
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*"
}

And in my Startup.cs I want to inject IHttpContextAccessor into my controller so that I can access the user principal. I’m also setting up basic auth just to use a simple authentication scheme for this example. As I said earlier, in a real production app I would want to use HMAC or JWT with claims. Here’s my ConfigureServices method in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
        .SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    services.AddHttpContextAccessor();

    // basic auth as an example authentication scheme
    services
        .AddAuthentication(BasicAuthenticationDefaults.AuthenticationScheme)
        .AddBasicAuthentication<BasicAuthenticationService>();
}

I also need to add authentication to the HTTP request pipeline:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // omitted for brevity

    app.UseAuthentication();    // <---------
    app.UseHttpsRedirection();
    app.UseMvc();
}

And of course my controller is decorated with the [Authorize] attribute. This completes the configuration needed for the API.
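
For reference, the controller skeleton looks something like this (the class name and route are my assumptions; check the repo for the real thing):

[Authorize]
[Route("api/[controller]")]
[ApiController]
public class WidgetsController : ControllerBase
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public WidgetsController(IHttpContextAccessor httpContextAccessor)
    {
        // the authenticated principal name doubles as the tenant ID
        _httpContextAccessor = httpContextAccessor;
    }
}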

Catalog Service

In the diagram above tenant information is stored in a catalog data store. I’m going to hard-code that store in a CatalogService class. In a production app you would call out to a database, table storage, or another API. Here’s the class:

public class CatalogService
{
    private readonly TenantSettings _tenantSettings;

    public CatalogService(TenantSettings tenantSettings)
    {
        _tenantSettings = tenantSettings ??
            throw new ArgumentNullException(nameof(tenantSettings));
    }

    public async Task<string> GetConnectionString()
    {
        // fetch the tenant from the catalog
        var tenant = await GetTenantFromCatalog(_tenantSettings.TenantId);
        if (tenant == null)
        {
            throw new Exception("Tenant not found in catalog!");
        }

        var builder = new SqlConnectionStringBuilder(_tenantSettings.DefaultConnectionString)
        {
            DataSource = tenant.DatabaseServerName,
            InitialCatalog = tenant.DatabaseName
        };

        return builder.ConnectionString;
    }

    /// <summary>
    /// Stub to simulate a call out to an API, Azure Table Storage, DB, 
    /// or the location of the catalog you implement in your project.
    /// </summary>
    /// <param name="tenantId"></param>
    /// <returns></returns>
    private async Task<Tenant> GetTenantFromCatalog(string tenantId)
    {
        var list = new List<Tenant>()
        {
            new Tenant {
                TenantId = "tenant1",
                DatabaseServerName = "tcp:tenant1.database.windows.net,1433",
                DatabaseName = "tenant1db"
            },
            new Tenant {
                TenantId = "tenant2",
                DatabaseServerName = "tcp:tenant2.database.windows.net,1433",
                DatabaseName = "tenant2db"
            },
            new Tenant {
                TenantId = "tenant3",
                DatabaseServerName = "tcp:tenant3.database.windows.net,1433",
                DatabaseName = "tenant3db"
            }
        };

        await Task.CompletedTask;

        // return null (not an empty Tenant) so the caller's null check works
        return list
            .SingleOrDefault(t => t.TenantId == tenantId);
    }
}

The main logic in the service is in the GetConnectionString method. Once we have the authenticated principal name (the tenantId) we can use a SqlConnectionStringBuilder to transform the default connection string into the tenant’s connection string. The controller can new up a DbContext instance with this connection string:

// build a DbContext for this tenant
var service = new CatalogService(settings);
var cn = await service.GetConnectionString(); // await it; .Result can deadlock
var optionsBuilder = new DbContextOptionsBuilder<WidgetDbContext>()
    .UseSqlServer(cn);

WidgetDbContext context = new WidgetDbContext(optionsBuilder.Options);

In this way you can achieve multi-tenancy with physical database isolation pretty easily. With the DbContext instantiated, your controller Get method is the same as with any API:

[HttpGet]
[ProducesResponseType(typeof(IEnumerable<Widget>), (int)HttpStatusCode.OK)]
[ProducesResponseType(404)]
[ProducesResponseType(500)]
public async Task<IActionResult> Get()
{
    if (_context == null)
    {
        return StatusCode(500);
    }

    var list = await _context.Widgets
        .AsNoTracking()
        .ToListAsync();

    return Ok(list);
}

Test with Postman

Clone the source code, then build and run the API. You can test in Postman with basic auth using either "tenant1" or "tenant2" as the username and any password. Authenticate as tenant1, send the request, and you get back tenant1's list of widgets.

Try it with tenant2 and you’ll get only those widgets for that tenant.

Logical Isolation

Suppose you only need logical isolation? Then all widgets for all tenants could be stored in the same table with an extra TenantId column on the table. You’d want to put an index on TenantId. There would be no CatalogService. And returning all of a tenant’s widgets would be done with an additional WHERE clause:

var list = await _context.Widgets
    .Where(w => w.TenantId == _tenantId) // <-----
    .AsNoTracking()
    .ToListAsync();
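
If you'd rather not trust every developer to remember that WHERE clause, EF Core's global query filters can append the predicate automatically. Here's a sketch, assuming you pass the tenant ID into the WidgetDbContext:

using Microsoft.EntityFrameworkCore;

public class WidgetDbContext : DbContext
{
    private readonly string _tenantId;

    public WidgetDbContext(DbContextOptions<WidgetDbContext> options, string tenantId)
        : base(options)
    {
        _tenantId = tenantId;
    }

    public DbSet<Widget> Widgets { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // applied automatically to every query against Widgets
        modelBuilder.Entity<Widget>()
            .HasQueryFilter(w => w.TenantId == _tenantId);
    }
}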

If the risk is low, logical isolation is good enough. But I'd be careful if your business depends on tenants never accidentally seeing other tenants' data. If Salesforce can get this wrong then you probably will too. Physical isolation is worth the extra hassle if you want to be sure that customer data is never co-mingled.


Deep Dive: Azure DevOps Version Control

Introduction

Let's talk about version control in Azure DevOps. If you want to automate your builds in Azure DevOps (or any DevOps process) then you must use version control. So if you need to get up to speed check out these resources (here, here, and here).

Now let’s dive deep into the whole enchilada: files, commits, pushes, branches, tags, pull requests, and of course integration of all this goodness from within the Visual Studio IDE. If you want to see the source code for the sample project I used for this post go to my repo here: https://github.com/jamesstill/WidgetApi.

When you provision a new project you have two choices within the tool for native version control: Git or Team Foundation Version Control (TFVC). Personally, I haven’t used TFVC since I worked with an on-prem TFS Server several years ago. There’s nothing wrong with it. It’s just that Git has quickly become the de facto standard. So if you’ve got a brand new project just choose Git and be done with it.

If you’re not familiar with Azure DevOps you should probably go read my four-part series before continuing. I’m going to assume you already have an account and know your way around. Continue reading “Deep Dive: Azure DevOps Version Control”

Kepler’s Equation

Introduction

Since at least the time of Eudoxus and Hipparchus the ancient world believed that the Sun, Moon, planets, and stars moved around the Earth in circular orbits. Aristotle's Physics put the heavenly bodies on perfect crystal spheres. This theory was further formalized in the second century by Ptolemy in his Almagest, which served the basic needs of astronomers for the next 1,500 years. Over that time, Aristotelian physics became an article of faith not to be questioned.

But there were unexplained irregularities that never quite fit the theory. The ancients knew that the seasons were of different lengths. For example, Winter is about 89 days in length while Summer is about 94 days. Why did the Sun sometimes speed up and slow down like that? They also noticed that Mars would move East across the night sky for a few years, slow down, and then reverse course and move West for a few months before looping back to its original course. (What we now call apparent retrograde motion.)

Continue reading “Kepler’s Equation”

Astronomical Calculations: Solar Coordinates

In the last post I showed how to calculate the Julian Day (JD) and the Ephemeris Time (T). Now I want to build on that to calculate the geocentric (or apparent) coordinates of the Sun for any given moment in time. This algorithm is taken from Astronomical Algorithms by Jean Meeus (2nd Edition) with modifications from the U.S. Naval Observatory. All page number references are to Meeus’ book.

The goal here is to input any date and time (say the launch of Sputnik 1 on 4 Oct 1957 at 19:29 UTC) and get back the coordinates of the Sun as it appears to us on the celestial sphere at that moment. I recommend reading the introductory article Position of the Sun before continuing. For the complete software solution implemented in C# see my GitHub repo. Note that you will need to download and install the free Microsoft Visual Studio Community Edition 2017 to build and run the code.

Continue reading “Astronomical Calculations: Solar Coordinates”

Astronomical Calculations: The Julian Day

Later I want to post an implementation of how to calculate geocentric solar coordinates for a given date and time based on algorithms published in Astronomical Algorithms by Jean Meeus. But that is very involved and going to be a long post. So I thought it would be wise first to talk about dates and times used by astronomers. This will “prepare the ground” so to speak.

In the West we use the Gregorian calendar, which was adopted in October 1582 to replace the Julian calendar. To fix the drift that had accumulated over the centuries, 4 Oct was followed by 15 Oct. As you can imagine this led to a lot of confusion, since some people still used the old calendar while others referred to the new one. (Think of Imperial and metric units in our own time.) Also, not all cultures adopted the new calendar right away. An astronomer in Munich might still be using the Julian calendar while another in Venice used dates on the new Gregorian calendar. So in 1583 a scholar named Joseph Scaliger proposed an abstraction which he called the Julian Period. The period is 7,980 years and runs from 1 Jan 4713 BCE (which is Year 1) to 31 Dec 3268 CE. We have a similar system today with POSIX (or Unix) time, in which the epoch begins on 1 Jan 1970 UTC and counts the number of elapsed seconds since that moment.

From the Julian Period astronomers refer to the Julian Day (JD) when making calculations in celestial mechanics. A JD value is the number of days since the beginning of the Julian Period. Meeus writes:

The Julian Day number or, more simply, the Julian Day … is a continuous count of days and fractions thereof from the beginning of the year -4712.

Why -4712? Astronomers use a year numbering that includes a year 0 (1 BCE is year 0, 2 BCE is year -1, and so on), so 4713 BCE, Year 1 of the Julian Period, is the astronomical year -4712. Civil dates begin at midnight UTC. This is inconvenient for astronomers since celestial events can span from one day into the next. For example, if you observe a comet at 23:00 hours and report on its movement until 02:00 hours you'd have to use two different dates. So the Julian Day begins at noon UTC, a 12 hour offset from the start of the civil date.

So with all of that esoterica out of the way how do we calculate the JD? Here I’ll modify Meeus to show the pseudocode:

Let Y = Year
Let M = Month Number // 1 is Jan, 2 is Feb, etc.
Let D = Day Number // where D is a double (64-bit floating-point value)

If M == 1 OR M == 2 Then:

    Y = Y - 1
    M = M + 12

// That is if the date is Jan or Feb then the date is the 13th or
// 14th month of the preceding year for calculation purposes.

If the date is on the Gregorian calendar (after 14 Oct 1582) then:

Let A = (int)(Y / 100)
Let B = 2 - A + (int)(A / 4)

If the date is on the Julian calendar (on or before 4 Oct 1582) then:

Let B = 0

So far we've set up the variables for the main calculation. D is a double because it could represent a fraction of a day. For example, 4.812 is the 4th day of the month at 19:29 (7:29 PM).
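
Checking that example: the 4th day plus 19 hours plus 29 minutes works out to

$$D = 4 + \frac{19}{24} + \frac{29}{1440} \approx 4.812$$

Here's the algorithm for the Julian Day itself: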

JD = (int)(365.25 * (Y + 4716)) + (int)(30.6001 * (M + 1)) + D + B - 1524.5

I can implement this in C# as a struct:

/// <summary>
/// A moment is a specific point in time down to the millisecond.
/// </summary>
public struct Moment
{
    public int Year { get; }
    public int Month { get; }
    public int Day { get; }
    public int Hour { get; }
    public int Minute { get; }
    public int Second { get; }
    public int Millisecond { get; }

    /// <summary>
    /// Creates a Moment with known values down to the millisecond.
    /// </summary>
    public Moment(int y, int mo, int d, int h, int mi, int s = 0, int ms = 0)
    {
        Year = y;
        Month = mo;
        Day = d;
        Hour = h;
        Minute = mi;
        Second = s;
        Millisecond = ms;
    }

    /// <summary>
    /// A Julian Day (JD) is a continuous count of days and fractions thereof
    /// starting at 1 Jan -4712 at noon UTC to a given point in time thereafter.
    /// </summary>
    public double JulianDay
    {
        get
        {
            int Y = Year;
            int M = Month;
            int B = 0; // Julian calendar default

            // if the date is Jan or Feb then it is considered to be in the 
            // 13th or 14th month of the preceding year.
            switch (M)
            {
                case 1:
                case 2:
                    Y = Y - 1;
                    M = M + 12;
                    break;

                default:
                    break;
            }

            if (!IsJulianDate()) // Gregorian date: apply the century correction
            {
                var A = Y / 100;
                B = 2 - A + (A / 4);
            }

            return 
                (int)(365.25 * (Y + 4716)) + 
                (int)(30.6001 * (M + 1)) + DayOfMonth + B - 1524.5;
        }
    }


    /// <summary>
    /// Pope Gregory introduced the Gregorian calendar in October 1582 when the 
    /// calendar had drifted 10 days. Dates on or before 4 Oct 1582 are Julian dates
    /// and dates on or after 15 Oct 1582 are Gregorian dates. Any date in the gap is
    /// invalid on the Gregorian calendar.
    /// </summary>
    /// <returns></returns>
    public bool IsJulianDate()
    {
        if (Year > 1582)
            return false;

        if (Year < 1582)
            return true;

        // year is 1582 so check month
        if (Month > 10)
            return false;

        if (Month < 10)
            return true;

        // month is 10 so check days
        if (Day > 14)
            return false;

        return true;
    }

    public double DayOfMonth
    {
        get
        {
            return 
                Day + 
                (Hour / 24.0) + 
                (Minute / 1440.0) + 
                (Second + Millisecond / 1000.0) / 86400.0;
        }
    }
}

Now I can write a console app to test this implementation:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Calculation of the Julian Day");
        Console.WriteLine(Environment.NewLine);

        var moment = new Moment(1957, 10, 4, 19, 29, 0);  // Sputnik 1 launched (UTC)

        Console.WriteLine("Sputnik 1 launched on 4 Oct 1957 at 19:29 UTC");
        Console.WriteLine("Sputnik 1 JD: " + moment.JulianDay);
        Console.ReadLine();
    }
}

And the output is:

Calculation of the Julian Day

Sputnik 1 launched on 4 Oct 1957 at 19:29 UTC
Sputnik 1 JD: 2436116.31180556
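
As a sanity check, the Unix epoch (1 Jan 1970 00:00 UTC) falls at JD 2440587.5, so we can verify the result independently with .NET's own date math:

// cross-check: JD = (Unix seconds / 86400) + JD of the Unix epoch
var dt = new DateTimeOffset(1957, 10, 4, 19, 29, 0, TimeSpan.Zero);
var jd = (dt.ToUnixTimeSeconds() / 86400.0) + 2440587.5;
Console.WriteLine(jd); // ≈ 2436116.3118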

I will be using this struct in a future blog post when I dive headlong into calculating geocentric solar coordinates. That is to say, given any date and time (like the moment Sputnik 1 launched) I can calculate the position of the sun in the sky.

NuGet Package Feeds on Azure DevOps

One of my favorite features of Azure DevOps is its package management feature where you can publish NuGet packages to your own organization’s feed. Packages in your feed can be referenced by other projects in the CI/CD pipeline or through Visual Studio. In this walkthrough I’ll give a simple example of how to use it. I’ll assume you’re comfortable with Azure DevOps. If you’re not familiar with the tool see my four-part series on the subject.

That creamy NuGet center!

Continue reading “NuGet Package Feeds on Azure DevOps”

Now Available: ASP.NET Core 2 HMAC Middleware

I won’t repeat the project home page except to say that if you need good strong security for clients (MVC or otherwise) calling services (micro or otherwise) then this is for you!

Basic authentication middleware is no longer available in Core 2; I've blogged about that before and wrote the SquareWidget.BasicAuth.Core NuGet package. Even with TLS you should probably not use basic auth unless you have no choice. The password goes over the wire in base64 encoding rather than ciphertext, it sits there in the request header for the whole session, the user can cache it permanently in the browser, and anyone on the network can sniff it out before it gets to the web server.

So why do people use basic auth so much? One word: convenience. Developers fall back on the tried and true rather than take the time to do the right thing. So my aim with this middleware is to encapsulate all the goodness of HMAC and keep it dead simple so that the developer has no excuse for not using a more secure algorithm.
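
To give you the flavor of what the middleware automates, here is a bare sketch of the general HMAC signing idea (my own simplified example, not the package's actual wire format): client and server share a secret key, the client signs each request with it, and the secret itself never crosses the wire.

using System;
using System.Security.Cryptography;
using System.Text;

public static class HmacSigner
{
    // Sign a canonical representation of the request; the server
    // recomputes the signature with its copy of the secret and compares.
    public static string Sign(string secret, string method, string path, string timestamp)
    {
        var payload = $"{method}\n{path}\n{timestamp}";
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        {
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            return Convert.ToBase64String(hash);
        }
    }
}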

Part 4: Adding a Database to the Project

This is Part 4 in a series on Azure DevOps.

In Part 1 I created a simple web app called WidgetApi. I then put it under source control and pushed it up to an Azure DevOps repo. In Part 2 I configured a build pipeline and made a code change to trigger that build with continuous integration. In Part 3 I set up a release pipeline and deployed our build artifacts to Azure. In this part I'm going to add a database to WidgetApi and use a DACPAC file to bundle database changes for deployment in the release pipeline. Finally, I'll configure a production environment with an approval process.

Continue reading “Part 4: Adding a Database to the Project”

Part 2: Setting up a Build Pipeline in Azure DevOps

This is Part 2 in a series on Azure DevOps.

In Part 1 I created a simple web app called WidgetApi. I then put it under source control and pushed it up to an Azure DevOps repo. In this part we're going to set up a build and then change our code to trigger a continuous integration build. Open the browser and go to your Azure DevOps portal. You should see all your pushed commits there from Part 1. Awesome. Now there are a couple of housekeeping things to do before we set up the build.

Shout out to https://devrant.com/rants/1535091/ci-cd-in-a-nutshell

Continue reading “Part 2: Setting up a Build Pipeline in Azure DevOps”