MagicOnion — Unified Realtime/API Engine for .NET Core and Unity

It has been a while. Since my last post, I have been working on and creating many things. Now that it is 2019, I would like to start writing about my work periodically again.

One big change for me is that I established a new company, Cysharp, together with Cygames (The Idolmaster Cinderella Girls: Starlight Stage, Shadowverse, Dragalia Lost, etc…). Cysharp specializes in C#, covering both Unity and .NET Core.

Today, I am announcing that Cysharp has released an open-source library that integrates real-time communication and API communication for .NET Core and Unity.

https://github.com/Cysharp/MagicOnion

MagicOnion was first released two years ago and has already been used in a mobile game that shipped, and for this official release we have further enhanced its real-time communication features.

Its core function is streaming RPC between a server and a client. Both the server side and the client side are implemented in C#, the message format is LZ4-compressed MessagePack, and communication runs over HTTP/2 using gRPC. It also functions as an API server, so it can act like a regular web framework as well.

MagicOnion was developed to deliver the best possible performance and an interface that feels natural to C# developers.

MagicOnion covers microservices (communication between .NET Core servers, as with Orleans, Service Fabric, or AMBROSIA), API services (for WinForms/WPF clients, as with WCF or ASP.NET Core MVC), native clients’ APIs (for Xamarin and Unity), and real-time servers as a replacement for Socket.io, SignalR, Photon, UNet, etc.

Interface that is strongly typed by C#

By sharing a plain C# interface between the server and the client, error-free communication can be established simply by implementing that interface on both sides.

In this way, there is no need to generate code from an intermediate language; methods can be called over the network just by calling them (even with multiple arguments or primitive-type parameters), in a way that is consistent with ordinary C# syntax. Of course, autocompletion works.

An actual implementation is outlined below. The server implements an interface defined as IGamingHub.

  • everything is asynchronous (return values are relayed through Tasks).
  • values can be returned, and if an exception is thrown it is relayed to the client as an exception.
  • grouping clients by Group makes it possible to send to every client in a group using Broadcast(group).
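To make this concrete, here is a rough sketch of what the shared interfaces and the server-side hub could look like. It follows the shape of MagicOnion’s StreamingHub API (IStreamingHub, StreamingHubBase, Group, Broadcast), but the Player type, the method set, and the room handling are illustrative assumptions, not the exact sample code.

// Shared between the server and the client; namespaces (MagicOnion, MessagePack, System.Threading.Tasks, ...) omitted for brevity.
[MessagePackObject]
public class Player
{
    [Key(0)] public string Name { get; set; }
    [Key(1)] public Vector3 Position { get; set; }
    [Key(2)] public Quaternion Rotation { get; set; }
}

public interface IGamingHubReceiver
{
    void OnJoin(Player player);
    void OnLeave(Player player);
    void OnMove(Player player);
}

public interface IGamingHub : IStreamingHub<IGamingHub, IGamingHubReceiver>
{
    Task<Player[]> JoinAsync(string roomName, string userName, Vector3 position, Quaternion rotation);
    Task LeaveAsync();
    Task MoveAsync(Vector3 position, Quaternion rotation);
}

// Server side: methods are asynchronous, can return values (or relay exceptions), and can broadcast to a Group.
public class GamingHub : StreamingHubBase<IGamingHub, IGamingHubReceiver>, IGamingHub
{
    IGroup room;
    Player self;

    public async Task<Player[]> JoinAsync(string roomName, string userName, Vector3 position, Quaternion rotation)
    {
        self = new Player { Name = userName, Position = position, Rotation = rotation };
        room = await Group.AddAsync(roomName);   // join (or create) the group for this room
        Broadcast(room).OnJoin(self);            // push to every client in the group
        return new[] { self };                   // the return value travels back to the caller
    }

    public Task MoveAsync(Vector3 position, Quaternion rotation)
    {
        self.Position = position;
        self.Rotation = rotation;
        Broadcast(room).OnMove(self);
        return Task.CompletedTask;
    }

    public async Task LeaveAsync()
    {
        await room.RemoveAsync(Context);
        Broadcast(room).OnLeave(self);
    }
}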

The client can receive data broadcast from the server by implementing an interface defined as IGamingHubReceiver, while IGamingHub itself becomes the network client: its implementation is generated automatically, so calling one of its methods invokes the server.

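On the client side, connecting might look roughly like the following. StreamingHubClient.Connect and the Grpc.Core Channel are the real entry points; the GamePlayer class, host, and port are placeholders for illustration.

// Client side (Unity or .NET Core); namespaces omitted for brevity.
public class GamePlayer : IGamingHubReceiver
{
    IGamingHub client;

    public async Task ConnectAsync()
    {
        // gRPC channel to the server; host and port are placeholders.
        var channel = new Channel("localhost", 12345, ChannelCredentials.Insecure);

        // The IGamingHub proxy is generated automatically; `this` receives the broadcasts.
        client = StreamingHubClient.Connect<IGamingHub, IGamingHubReceiver>(channel, this);

        var players = await client.JoinAsync("room1", "me", Vector3.zero, Quaternion.identity);
    }

    // Called when the server executes Broadcast(room).OnJoin(...), and so on.
    void IGamingHubReceiver.OnJoin(Player player) { /* spawn the remote player */ }
    void IGamingHubReceiver.OnLeave(Player player) { /* remove it */ }
    void IGamingHubReceiver.OnMove(Player player) { /* update its transform */ }
}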

Since everything is strongly typed in C#,

  • IDE refactoring (renaming a method or changing its parameters) is tracked across both the server side and the client side.
  • an incomplete implementation results in a compile error, so you can spot it and fix it immediately.
  • string-free communication improves efficiency. (Method names are automatically converted to ID numbers, so no strings are sent over the wire.)
  • primitive-type values can be sent in a natural manner. (There is no need to wrap them in a dedicated request class.)

When using Protocol Buffers, you need to manage .proto files (an IDL: Interface Definition Language), worry about how to generate code from them, and so on; as long as everything is written in C#, none of this is necessary.

Zero deserialization mapping

Also, taking advantage of the fact that both the client and the server run C# and their in-memory data can be expected to share the same layout, I added an option to map value types through a memory copy, without any serialization/deserialization.

Nothing needs to be processed here, so it offers the best transmission performance theoretically possible. However, since structs are copied by value, I recommend passing large struct types by ref as a rule, or the copies themselves might slow things down.

I believe that this can be easily and effectively applied to sending a large number of Transforms, such as an array of Vector3 variables.
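As a purely conceptual illustration (this is not MagicOnion’s actual option or its API), the following shows what blit-style mapping boils down to in .NET: reinterpreting an array of a simple value type as raw bytes with MemoryMarshal, i.e. a straight memory copy with no per-field serialization.

using System;
using System.Runtime.InteropServices;

public struct Vec3
{
    public float X, Y, Z;   // plain value-type layout, no references inside
}

public static class BlitExample
{
    public static byte[] ToBytes(Vec3[] source)
    {
        // View the struct array as bytes and copy it in a single block.
        return MemoryMarshal.AsBytes(source.AsSpan()).ToArray();
    }

    public static Vec3[] FromBytes(byte[] payload)
    {
        // View the bytes as Vec3 values again; no field-by-field deserialization happens.
        return MemoryMarshal.Cast<byte, Vec3>(payload).ToArray();
    }
}

The real option wires this idea into the serialization layer, but the principle is the same: for a struct of plain fields, the bytes on the wire are essentially the bytes in memory.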

Why gRPC’s Bidirectional Streaming is not enough

gRPC itself provides full-duplex communication in the form of Bidirectional Streaming, defined in .proto like this:

// Bidirectional Streaming definition by proto
rpc BidiHello(stream HelloRequest) returns (stream HelloResponse);

However, it is difficult to use Bidirectional Streaming as an RPC for real-time communication, for many reasons. The biggest one is that at this level it is not an RPC: after a connection is established, a Request/Response defined with oneof (one type that wraps multiple message types) has to be manually dispatched to the method that should handle it. That may be feasible, but there are still many hurdles. For example,

  • the client cannot wait for the server to complete an operation (once the request is sent, the next line of code executes immediately).
  • not being able to wait for the response means the client cannot receive return values or exceptions.
  • there is currently no way to bundle multiple connections.

Even if you build a system that works around these issues, you can never escape from the Bidirectional Streaming boilerplate generated from .proto, so it clutters the code. MagicOnion’s StreamingHub uses Bidirectional Streaming to establish the connection, but inside that frame it speaks its own lightweight protocol, realizing an RPC for real-time communication that feels natural to C# developers.

Why I chose a distributed model and gRPC

Distributing the load across servers with a TCP load balancer, while delegating Group broadcasting to Redis, makes it possible to send data to clients connected to different servers. This function comes standard with MagicOnion as MagicOnion.Redis and is well suited to implementing chat, notifications, and the like.

Also, much like gRPC itself, MagicOnion is suited to implementing what are called microservices, so you can open server-to-server connections and build a server-to-server RPC structure.

Now, MagicOnion is built on gRPC, yet it completely ignores gRPC’s most notable characteristic: language-independent RPC defined through .proto. Moreover, the fact that communication is limited to HTTP/2 (TCP) does not necessarily make it ideal for games. Still, there are good reasons why I chose gRPC.

One reason is the maturity of the library. Few communication libraries support both server and client implementations, Unity included, and the core part (gRPC C Core, which is shared across all languages) is used by a huge number of developers, including Google itself, which means it is highly stable. It may be possible to implement an original communication library tailored to game networking, but ensuring that level of stability from the ground up is not an easy task. Do not reinvent the wheel, right?

However, I am not satisfied with the performance of gRPC’s C# binding. That is why I think it may be a good idea to keep using gRPC C Core while completely replacing the C# binding. At least on the Unity side (client communication), I believe this is both feasible and effective.

Another reason is the ecosystem. gRPC has established itself as the de facto standard for modern RPC, so it is supported by many servers and pieces of middleware. Because HTTP/2 and gRPC are industry-standard protocols, there are many advantages to using them, such as proxying through Nginx or request-based load balancing with Envoy. There is also plenty of information on gRPC in blog posts and slide decks, which makes it easier for developers to build a better system.

MagicOnion has an original application layer built into it, but its infrastructure is gRPC, so any piece of middleware or any shared knowledge can almost always be applied directly.

I believe that a modern server should have a cloud-ready architecture, and that a system that fully utilizes the infrastructure and middleware supplied by a cloud provider has a better chance of performing well than a system that attempts to do everything by itself. Therefore, the framework that sits on that infrastructure should be lightweight and composed of essential functions only.

Supporting API communication

For API communication as well, everything in the framework is thoroughly asynchronous and non-blocking. What makes this look almost natural is async/await, provided by the C# language itself. It also comes with filters that hook execution before and after a request, and they, too, fit into this natural asynchronous processing.
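As a rough sketch (the service and method names are made up, but the IService / ServiceBase / UnaryResult shapes follow MagicOnion’s API), an ordinary request/response API looks like this:

// Shared interface: a plain request/response (Unary) API, strongly typed in C#.
public interface IMyFirstService : IService<IMyFirstService>
{
    UnaryResult<int> SumAsync(int x, int y);
}

// Server-side implementation; async can be used directly on UnaryResult.
public class MyFirstService : ServiceBase<IMyFirstService>, IMyFirstService
{
    public async UnaryResult<int> SumAsync(int x, int y)
    {
        return x + y;
    }
}

// Client side: the proxy is created from the shared interface.
// var client = MagicOnionClient.Create<IMyFirstService>(channel);
// var sum = await client.SumAsync(10, 20);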

The filter can also be used with StreamingHub in the same manner.
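For reference, a filter is roughly like the following. This is a sketch built around MagicOnionFilterAttribute; the exact constructor and Invoke signature have changed between MagicOnion versions, so treat the shape as an assumption rather than exact API.

// A sketch of a server-side filter that measures execution time around every call.
public class ElapsedTimeFilterAttribute : MagicOnionFilterAttribute
{
    public override async ValueTask Invoke(ServiceContext context, Func<ServiceContext, ValueTask> next)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            // code here runs before the service/hub method
            await next(context);
            // code here runs after it (unless an exception was thrown)
        }
        finally
        {
            Console.WriteLine($"elapsed: {sw.ElapsedMilliseconds}ms");
        }
    }
}

// Applied as an attribute on a service class or method:
// [ElapsedTimeFilter]
// public class MyFirstService : ServiceBase<IMyFirstService>, IMyFirstService { ... }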

Swagger

MagicOnion can expose its API services through an HTTP gateway with a built-in Swagger UI, so you can check from a browser whether the APIs are working properly. And just by defining debug commands as APIs, they show up in Swagger, which makes it easy to prepare commands that operate on the database for debugging.

StreamingHub does not support it at the moment, but I am planning to make a WebSocketGateway that connects WebSocket and MagicOnion.

Deployment and hosting

Containerizing a C# server today is not really about constructing a local environment. It is about carrying things into development and production environments easily, and about letting people who are not particularly familiar with C# or Windows apply their rich infrastructure knowledge without learning anything special. That, I think, is the biggest advantage.

Conclusion

As a real-time communication framework, it only provides client-server RPC. However, that is the only thing you really need, and you can build all other functions on top of it yourself. (It depends, but generally speaking, it does not require much work.) Free of unnecessary features, I believe it offers the best coding experience for RPC. (I wish I could say the best performance as well, but there are a few things that can still be improved in how it uses the gRPC C# binding, so I hope to be able to say that when I release the next version.)

Also, since it is a self-contained closed system, you can, for example, exhibit VR/AR content just by keeping the server running within the same LAN, even when the network there has limitations……!

I hope you will give it a try.

I hope to be able to keep writing about MagicOnion as well as how things are going with UniRx, UniRx.Async, MessagePack for C#, etc., on this blog.
