
Modernizing a 30-year-old stack with GraphQL, Hasura & .NET

Talk

Transcript

Andy

00:05

Thank you very much. Good afternoon, everyone. My name is Andrew Doyle, with the Office of the Clerk, U.S. House of Representatives, and we're going to talk a bit about our experience delivering applications with Hasura and .NET.

Andy

00:21

So as I mentioned, we're going to walk you through a few slides, and I'm going to try to do this quickly. If anything catches your eye along the way, we'll keep the slides up while we have the discussion in the second half, so please make reference to them and we can go back and look. As I'm sure many people know, Hasura gives you multiple integration options for your applications. We took a look at all of them and settled on Hasura actions. We wanted to build our business logic layer in .NET: we have quite a bit of Microsoft tooling and infrastructure in our organization, and it provides us a solid, enterprise-ready environment and language to work with.

Andy

01:16

So it turned out that actions and ASP.NET web APIs were a natural match. We're going to give you a little bit of an overview of our application and its architecture, show you a couple of the integration points with Hasura, and then discuss. Thank you so much to Rajoshi and the Hasura team for inviting us here to speak.

Andy

01:47

So, to start back where we started: our goals. We wanted to be able to centralize and manage our business logic in one place; we have a rules layer, a repository layer, and a domain layer in our application. This is an application that we're modernizing, but we're building it more or less as a greenfield system, with integration with the legacy application at the data layer. So we have components that communicate with the legacy application and can exchange data, and effectively exchange business logic that way as well.

Andy

02:28

We also wanted to not spend a lot of time on API development. If you saw Tanmai's talk yesterday, being able to minimize or simplify that API development process was one of our goals, and Hasura has certainly helped us there. And then we wanted to be able to focus on our data, because our data is key. We manage most of the data from the House of Representatives related to the legislative process, so bills, floor activities, those kinds of things, and the integrity and timeliness of that data is our main mission.

Andy

03:09

So we wanted a CQRS-style architecture, since everything would flow through the business logic layer; in other words, no changes directly to the database, everything had to touch the business logic. We wanted GraphQL to be the API; we didn't want a mix-and-match environment. We looked at GraphQL in the first place because all of our data is highly interdependent. When we looked at modernization, one of the classic approaches, taking slices of features and separating them out, really wasn't possible for us, because all of the data is connected to all of the other data at some level. GraphQL was a natural way to expose that. Mutations go through Hasura actions, as I mentioned, and we are using OAuth2/JWT for authentication with Active Directory integration.
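The talk doesn't show the authentication wiring, but as a minimal sketch of what OAuth2/JWT validation against Azure Active Directory can look like in an ASP.NET Core service: the tenant and audience values below are placeholders, not values from the talk, and Hasura would be configured separately, via its own JWT settings, to accept the same tokens.

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

// Validate bearer tokens issued by Azure AD. The authority and audience
// are placeholders; real values would come from configuration.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
        options.Audience = "api://legislative-api"; // placeholder app ID URI
    });

builder.Services.AddControllers();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();

app.Run();
```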

Andy

04:10

Excuse me; this is what the application looks like at a high level. We do have a gateway, so to the client it appears to be a single origin. Hasura is the API and React is the UI; notionally React uses the API, though of course the client is really doing that. All queries go directly to Postgres from Hasura, the traditional setup, and all mutations go through an action to ASP.NET. I was actually glad to hear a number of people describing a similar architecture for their applications; that was good validation for the approach we took. Looking at those interactions in a little more detail: we actually give Hasura a read-only connection to the data. That's partially for security, and partially a natural feature of the way we've done this. As I mentioned, mutations are actions, and each action has its own ASP.NET endpoint.

Andy

05:19

Each method in each controller is an endpoint that maps to a Hasura action. Each of those actions is a business-level transaction, and we track them that way, doing the processing on an endpoint-by-endpoint basis. The ASP.NET application is the only thing that has a read-write connection to Postgres, and next we'll see how we separate the schema out.

Andy

05:53

So we actually use the public schema, with views, to do a tiny bit of reshaping and relabeling of data so that it makes more sense for our front-end clients. That sits on top of a core schema, which contains the real application tables. We've already had a number of cases where we've made somewhat significant changes to the core schema without much impact on the views or the API for the client, and we expect to keep leveraging that approach going forward. So, as I mentioned, Hasura can only select data from these public views, and the ASP.NET application, which accepts all of the mutations, is the only thing that can write data to the core application schema.
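As a sketch of how that separation can look from the .NET side (the type, table, and view names here are illustrative, not from the talk), Entity Framework Core lets you map writable entities to tables in the core schema while treating the public views as keyless, read-only models:

```csharp
using Microsoft.EntityFrameworkCore;

// Writable entity living in the core schema.
public sealed class Bill
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
}

// Read model over a public view; Hasura's read-only connection selects from this.
public sealed class BillView
{
    public int BillId { get; set; }
    public string DisplayTitle { get; set; } = "";
}

public sealed class LegislativeContext : DbContext
{
    public LegislativeContext(DbContextOptions<LegislativeContext> options)
        : base(options) { }

    public DbSet<Bill> Bills => Set<Bill>();
    public DbSet<BillView> BillViews => Set<BillView>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Real table in the core schema; only the ASP.NET service's
        // read-write connection touches it.
        modelBuilder.Entity<Bill>().ToTable("bill", schema: "core");

        // Keyless mapping onto the public view that reshapes and relabels
        // core data for front-end clients.
        modelBuilder.Entity<BillView>()
            .HasNoKey()
            .ToView("bill_view", schema: "public");
    }
}
```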

Andy

06:52

So gluing the actions and the controllers together turned out to be easier than we expected, and we put in a little bit of code to make it even simpler to leverage the Hasura pieces without a lot of boilerplate. We don't generate code; we've been writing our own, but it's fairly simple stuff, to be honest. We have a little wrapper type for the Hasura payload information that wraps the input and output types, and we have a filter for reshaping exception data into a form that Hasura can consume. Our input and output types map pretty directly onto .NET types. The only place we've seen any issues with data type mapping, and I know there's been some mention of that over the past couple of days, is with dates and times, because dates are a little bit different in Postgres than they are in some of the GraphQL implementations.
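The filter itself isn't shown in the talk, but as a hedged sketch of the idea (the names here are ours, not the production code): Hasura expects an action handler to signal errors with a non-2xx status and a JSON body of the form { "message": ..., "extensions": ... }, which an ASP.NET exception filter can produce in one place:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

// Reshape unhandled exceptions into the error shape Hasura actions expect:
// a 4xx response whose body carries "message" and optional "extensions".
public sealed class HasuraExceptionFilter : IExceptionFilter
{
    public void OnException(ExceptionContext context)
    {
        context.Result = new ObjectResult(new
        {
            message = context.Exception.Message,
            extensions = new { code = "business-rule-violation" } // illustrative code
        })
        {
            StatusCode = StatusCodes.Status400BadRequest
        };
        context.ExceptionHandled = true;
    }
}
```

Registered globally, for example with services.AddControllers(o => o.Filters.Add&lt;HasuraExceptionFilter&gt;()), every endpoint gets consistent error reshaping for free.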

Andy

08:01

And then they're a little bit different from the way dates work in .NET, and times have their own oddities in .NET with time zones and that sort of thing. But we haven't had a showstopper there; it's just something we've had to be aware of as we've built things.
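As one hedged example of the kind of date pinning this involves (the converter is our illustration, not the team's code), a small System.Text.Json converter can fix the wire format of a .NET DateOnly to the ISO yyyy-MM-dd form that Postgres date columns and GraphQL date scalars typically use:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

// Pin DateOnly to the ISO "yyyy-MM-dd" wire format so the GraphQL layer,
// Postgres, and .NET all agree on what a date looks like.
public sealed class IsoDateOnlyConverter : JsonConverter<DateOnly>
{
    private const string Format = "yyyy-MM-dd";

    public override DateOnly Read(
        ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => DateOnly.ParseExact(reader.GetString()!, Format);

    public override void Write(
        Utf8JsonWriter writer, DateOnly value, JsonSerializerOptions options)
        => writer.WriteStringValue(value.ToString(Format));
}
```

Adding it to JsonSerializerOptions.Converters (or the MVC JSON options) applies it to every endpoint.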

Andy

08:20

So this is that wrapper type I was talking about. If you've ever built an action in Hasura, you'll recognize the input, the session variables, and the action. This gives us most of the serialization we need for the .NET endpoints, and the input type is parameterized so that we can inject our own application-level types. If we look at a sample mutation, setting the enrolled title for a bill, we have an input and an output: the input structure has a single field called title, holding an enrolled-title input with those parameters, and the output is similar.
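A hedged sketch of what such a wrapper and the sample types can look like (the names are illustrative; the production code wasn't published with the talk). Hasura posts action payloads as { "action": ..., "input": ..., "session_variables": ... }, so one generic envelope covers every endpoint:

```csharp
using System.Collections.Generic;
using System.Text.Json.Serialization;

// Generic envelope for the payload Hasura POSTs to an action handler.
public sealed class HasuraActionPayload<TInput>
{
    [JsonPropertyName("action")]
    public HasuraActionName Action { get; set; } = new();

    [JsonPropertyName("input")]
    public TInput Input { get; set; } = default!;

    [JsonPropertyName("session_variables")]
    public Dictionary<string, string> SessionVariables { get; set; } = new();
}

public sealed class HasuraActionName
{
    [JsonPropertyName("name")]
    public string Name { get; set; } = "";
}

// Illustrative types for the "set enrolled title" mutation: the input has a
// single field, title, holding the enrolled-title parameters.
public sealed class SetEnrolledTitleInput
{
    [JsonPropertyName("title")]
    public EnrolledTitleInput Title { get; set; } = default!;
}

public sealed class EnrolledTitleInput
{
    [JsonPropertyName("billId")] // field names beyond "title" are assumptions
    public int BillId { get; set; }

    [JsonPropertyName("title")]
    public string Title { get; set; } = "";
}

public sealed class EnrolledTitleOutput
{
    [JsonPropertyName("billId")]
    public int BillId { get; set; }

    [JsonPropertyName("title")]
    public string Title { get; set; } = "";
}
```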

Andy

09:10

And the big advantage this gives us is that it lets us write strongly typed controller endpoints. This is the signature for a method in one of our controllers. It returns an enrolled-title output type, which is pretty much exactly what you saw on the previous screen, and it takes an input type identical to the one on the previous screen, the title argument and parameter, with the wrapper around it. The .NET serialization handles this for us automatically, and if anything changes or gets out of sync, we know about it as soon as we try to use it. That's a big advantage for us. And then our controllers simply delegate to the domain objects.
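Reusing the types from the previous sketch, a strongly typed endpoint along the lines described might look like this (the controller and service names are ours, not the production code):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical domain service the controller delegates to; the business
// rules for enrolled titles live behind this interface.
public interface IBillDomainService
{
    Task<EnrolledTitleOutput> SetEnrolledTitleAsync(
        EnrolledTitleInput input, IReadOnlyDictionary<string, string> session);
}

[ApiController]
[Route("actions")]
public sealed class EnrolledTitleController : ControllerBase
{
    private readonly IBillDomainService _bills;

    public EnrolledTitleController(IBillDomainService bills) => _bills = bills;

    // One Hasura action maps to one endpoint; model binding deserializes the
    // wrapped payload, so a schema mismatch surfaces the moment it's used.
    [HttpPost("setEnrolledTitle")]
    public async Task<ActionResult<EnrolledTitleOutput>> SetEnrolledTitle(
        [FromBody] HasuraActionPayload<SetEnrolledTitleInput> payload)
    {
        var result = await _bills.SetEnrolledTitleAsync(
            payload.Input.Title, payload.SessionVariables);
        return Ok(result);
    }
}
```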

Andy

10:01

So, just to reiterate, and then we can start the discussion over this: we want business rules for all mutations. If you looked at the mutation in the example, setting the enrolled title, you might think, well, you're just setting a value. It turns out there is a large number of business rules behind that: the bill has to be in the right state, it has to be from the right body; everything has a lot of business rules associated with it. So it was not only useful, it was necessary for us to do this. Actions made it easy for us to do the mapping, .NET gave us a good environment to build things, and ASP.NET is very manageable for us in terms of both developing and hosting the endpoints. We now have a pretty simple, very robust integration platform, and we look forward to using it to build out our systems for the foreseeable future. So with that, Rajoshi, if you'd like to kick things off or ask some questions.

Rajoshi

11:23

Absolutely.

Andy

11:25

That's all I have.

Rajoshi

11:27

Thank you so much, Andy, thanks a lot. We'll take a quick step back and do a quick round of introductions before we go into questions. I'd encourage anyone watching to drop your questions in the Q&A so we can keep taking them, but while we're doing that, I'd love to do a quick round of intros.

Andy

11:46

Of course. So I'm Andy Doyle, Director of Legislative Applications. Adam? Glenn?

Adam

11:57

I'm Adam Turoff, a Senior Software Engineer working on the backend system and the Hasura integration.

Glenn

12:04

I'm Glenn Rueff, a Software Engineer working on DevOps and cloud deployments.

Rajoshi

12:11

Thank you all so much for joining us today. Just to start off and paint the picture a little: you mentioned that you're modernizing a stack, and I know you've told me this is a 30-year-old system. We'd love to hear a little about the system you're modernizing from, and what the motivation was for this modernization and migration effort.

Andy

12:37

Sure. Yeah, it is a 30-year-old system, and we didn't describe it much in the slides, but it's the system used to manage the core legislative activities of the House of Representatives: introducing bills, managing floor actions, managing activities on bills, enrolling bills, exchanging information with the Senate, and exchanging information with our other partners, such as the Government Publishing Office and the Library of Congress. This is the Grand Central Station for most of the legislative data that the House manages. As I mentioned, it is 30 years old; it was born on the mainframe. It now runs on Linux, and we can access it over RPCs. The main motivation for the modernization is to move onto modern tools. The database is a pre-relational database called Adabas, and the core development environment is still green screens, using a language called Natural. If you've never heard of those things, maybe you haven't been in the industry for 40 or 50 years. But that was the core goal.

Rajoshi

13:58

Got it. Awesome. And we have a question that's come in: given the amount of regulation in government, have there been any unexpected surprises as you've gone through the process of picking the tech stack you're currently using?

Andy

14:11

Not really. That is something we're still working on to some extent, and Glenn can probably talk to some of the DevOps side, but the security posture for many of the tools we're using is actually pretty good. Like I said, we have a lot of Microsoft infrastructure and we're using Azure, and because Azure is FedRAMP High everywhere, that checks most of the boxes we need.

Rajoshi

14:42

Got it. And Glenn, you were mentioning how you're handling the DevOps portion, so I'd love to hear a little about that. How is the project structured, and what is your CI/CD pipeline, your DevOps pipeline, like?

Glenn

15:00

Sure. So our repo is probably structured in a pretty typical way: we have a directory that corresponds to each of our application components. We have only one persistent Git branch, the main branch, so our pipelines run off that. A pipeline goes through a general progression of scanning, then testing, then building the Docker containers, and finally a Helm deployment into Azure Kubernetes Service. From a Hasura perspective, I think the interesting part of deployment is that, since Entity Framework is managing the schema and we wanted to keep Hasura in sync with it, we've got a couple of scripts that automate pulling out schema updates and bringing them into the Hasura container build during the pipeline, so the whole process is fairly streamlined.

Rajoshi

16:18

Got it, thank you. The other question I had is about modernizing away from the legacy system: how are you doing this incrementally? Are you keeping the two systems in sync, and what does that really look like? Adam, I'd love to hear from you on that.

Adam

16:23

Sure. So it's a reasonably large system; as Andy mentioned, there is a lot of data, and all of the data is interrelated. Our approach is to tackle the migration module by module, and at any given point in time there's only one source of truth for a module: either the new system or the old system. So we're building the new system to run in two modes. In the first mode, it manages the Postgres data directly and acts as the source of truth. In the other mode, it takes the same request but sends an RPC to the legacy system, which makes the change and sends a response, and the business logic layer incorporates that response into the Postgres system, sort of as a satellite repository for the same data. We can manage that easily because all of the modifications in the new system go through a single choke point, our new business logic layer, which figures out whether to make a change directly or indirectly through the legacy system.
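As a hedged sketch of that choke point (all names here are hypothetical), the two modes can be modeled as interchangeable write strategies behind one interface, chosen per module depending on which system currently owns the data:

```csharp
using System.Threading.Tasks;

public sealed record BillChange(int BillId, string Field, string Value);
public sealed record Bill(int BillId, string Title);

public interface IBillRepository
{
    Task<Bill> SaveAsync(BillChange change);   // direct write to Postgres
    Task MirrorAsync(Bill bill);               // satellite copy of legacy data
}

public interface ILegacyRpcClient
{
    Task<Bill> SendAsync(BillChange change);   // RPC into the legacy system
}

// One interface, two modes; the business logic layer picks per module.
public interface IBillWriteStrategy
{
    Task<Bill> ApplyAsync(BillChange change);
}

// Mode 1: the new system is the source of truth and writes Postgres directly.
public sealed class PostgresWriteStrategy : IBillWriteStrategy
{
    private readonly IBillRepository _repo;
    public PostgresWriteStrategy(IBillRepository repo) => _repo = repo;

    public Task<Bill> ApplyAsync(BillChange change) => _repo.SaveAsync(change);
}

// Mode 2: the legacy system is the source of truth; forward the change over
// RPC, then mirror the authoritative response back into Postgres.
public sealed class LegacyWriteStrategy : IBillWriteStrategy
{
    private readonly ILegacyRpcClient _legacy;
    private readonly IBillRepository _repo;

    public LegacyWriteStrategy(ILegacyRpcClient legacy, IBillRepository repo)
        => (_legacy, _repo) = (legacy, repo);

    public async Task<Bill> ApplyAsync(BillChange change)
    {
        var authoritative = await _legacy.SendAsync(change);
        await _repo.MirrorAsync(authoritative);
        return authoritative;
    }
}
```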

Rajoshi

17:49

Got it. And how did this process start? How did you sequence the migration?

Andy

17:56

Yeah, that's... oh, go ahead.

Adam

18:02

No, please.

Andy

18:03

Yeah, that was us working very closely with our users to decide. There are a number of business processes and House processes that we had to analyze to figure out the dependencies, essentially, and also what was manageable in terms of both training and timing for different groups. So we've timed it based on business need and availability.

Rajoshi

18:34

Got it. And how about architecturally? You're moving the entire system, right, all the way from the database to your application layer and your front-end app. So what was the sequence, architecturally?

Andy

18:51

Well, we looked at the application from a number of different directions. We looked at who's using it, we looked at the different modules in the application, we looked at where it fits in the legislative process, and then we looked at the data. Ultimately it was the data that decided it: the dependencies between the data, and how people use certain parts of that data, became the main driver. So that focus on data integration and data synchronization became the fulcrum for the architecture.

Rajoshi

19:35

Got it. So basically, right now you have some of the business logic in your legacy system, but you're also rewriting some of that business logic in .NET, is that right?

Andy

19:51

Yeah, that's correct. And the first system we did was actually a hybrid. Adam, I don't know if you want to speak to that at all?

Adam

20:03

Sure. So we took one system that was managing part of the process for executive actions, where there was a small bookkeeping app that wasn't integrated into the legacy system. We migrated that, and it's an example of an area where our new system is going to be the single source of truth. In testing, we're using the new system as the single source of truth for those operations, but in deployment we'll be using the legacy system to manage half of the process, the half it was already managing. So we're augmenting the legacy system with some new features as we build the new system out.

Rajoshi

20:50

Got it. And since the topic-table discussion is a lot about the experience with .NET as well, I just want to spend a couple more minutes on that. Adam, what has the experience been using the combination of .NET, GraphQL, and Hasura? What has that been like for the team in terms of productivity and developer experience?

Adam

21:18

Sure. So as Andy mentioned, we were already a Microsoft shop, so it was natural that we would look at .NET. We could have picked any language, but realistically .NET covered all of the points we needed: it's widely used, it's stable, it's well known within the group, and it's relatively easy to find new staff with the skills we need. The platform is rich and robust, with the support we need for things like web APIs and ORMs. Mostly, we've been able to use it in a way where our business logic is strongly typed, so we can get a lot of the small details out of the way, let the compiler and the IDE deal with them, and spend our time focusing on business logic. Using Hasura, as Andy mentioned, also got us out of the business of writing data access APIs. So we're really focusing on what the business rules are, not on technical minutiae or data access, to get the application up and running.

Rajoshi

22:27

Got it. And how has that been in terms of team structure and skill sets? Have you seen those changing between the legacy stack and the stack you're using today, in terms of how you're hiring, or have you staffed up the same way? If you could spend a couple of minutes on how the team is structured, given this migration you're undergoing.

Andy

22:59

Yes. So we're about a dozen people working on the new system. We also work closely with the small team that primarily supports the legacy system, and they of course work on a number of the integration pieces. It's a fairly standard setup: we're about half on the front end, half on the back end, with a couple of people doing UX and business analysis. So it's a pretty common structure, I think, for any sort of product or system development. And we have been looking for people with new skills.

Andy

23:44

I mean, React is very different from the tools we had been using, and GraphQL is a new approach and a new way of looking at data. But of all the things we've dealt with, all the adoptions we've needed to make, GraphQL has probably been the smallest. The front-end React piece is much bigger, and using Postgres, and some of the ways we're using web APIs, is a fairly big change. But we've got good tooling on the front end to get to the data, we're using Apollo Client, and we've got good tools for integrating the data and the business logic. So that piece in the middle, Hasura gluing it all together, is the least of my worries most days, I guess is the way I'd put it.

Rajoshi

24:45

That's really good to hear. I see another question coming in: what is the scale of usage that you're seeing today?

Andy

24:54

Yeah, so we're interesting in that our data is very complex and our data integrity requirements are very high, but we don't really live in a high-scale environment. Our user community is in the dozens, and within a Congress there will be maybe around 10,000 bills introduced. So we don't live in that world of ramping up a user base to hundreds of thousands or millions of users. We're really focused on the internal piece of this right now, but we are looking at how we want to deliver data to the public. We do deliver quite a bit of our data to the public, in fact; all of the business of the House of Representatives, all of the floor business, is public, and our goal is to get the right data to the public as quickly as possible. So in the future we'll be looking at ways to deliver that, and we'll be more in the world of scaling at that point, I think.

Rajoshi

26:07

Got it. And today, that portion is not being delivered on the GraphQL and Hasura stack?

Andy

26:12

Right. Yeah, it's really more about the data, the complexity and richness of the data, and how we integrate it. That's where the complexity is.

Rajoshi

26:23

Got it. I know we're running out of time, so I just wanted to hear your thoughts on what's next in this process of migration. What's coming up, and what are you all excited about? It's new for all of you as well. So, what's next, and what are you excited about?

Andy

26:42

I think we're on a glide path now: adding new features, new business rules, new capabilities, and building on the foundation that we've built.

Rajoshi

27:00

Adam, Glenn, would you like to add anything from your day-to-day work? What are the things you're excited about, in terms of what's coming up next?

Glenn

27:13

Well, we haven't yet deployed the system into a production environment, so we've been planning out our final cloud deployments. It'll be fun to get to the part where we're actually deploying everything into a production cloud and having users use it in that environment.

Rajoshi

27:37

Awesome.

Adam

27:39

So what we've been talking about today is our cornerstone system. As a service organization within the House of Representatives, we manage a lot of other applications, and I'm looking forward to taking this architecture and development approach forward for some of the other applications outside of managing the legislative process directly.

Rajoshi

28:04

Awesome. Well, thank you all so much. I know we're up on time, so thanks a lot for joining us today at Hasura Converse. We're super happy that you could come and share your work with us. Thanks a lot.

Andy

28:19

Thank you very much for having us.

Glenn

28:19

Thank you.

Adam

28:19

Thank you.

Andy

28:19

Appreciate it.


End of transcript

Description

Andrew Doyle, Adam Turoff & Glenn Rueff from the U.S. House of Representatives will be hosting a topic table, moderated by Hasura founder & COO Rajoshi, on their experience of using Hasura to modernize an existing application. If you have any questions on building with .NET & Hasura, do drop by!
