Microsoft Windows Server AppFabric Cookbook
Overview of this book

Windows Server AppFabric provides a set of integrated capabilities that extend IIS and the Windows Server platform, making it easier to build, scale, and manage composite applications today. Windows Server AppFabric delivers the first wave of innovation within an exciting new middleware paradigm, bringing performance, scalability, and enhanced management capabilities to the platform for applications built on the .NET Framework using Windows Communication Foundation and Windows Workflow Foundation. 'Microsoft Windows Server AppFabric Cookbook' shows you how to get the most from WCF and WF services using Windows Server AppFabric, leveraging its capabilities for building composite solutions on the .NET platform. Packed with over 60 task-based and immediately reusable recipes, the book starts by showing you how to set up your development environment so that you can start using Windows Server AppFabric quickly. It then provides comprehensive coverage of the most important capabilities of Windows Server AppFabric, diving right into hands-on topics such as deploying WCF and WF applications to Windows Server AppFabric and leveraging the distributed caching, scalable hosting, persistence, monitoring, and management capabilities it has to offer, with recipes spanning a full spectrum of complexity from simple to intermediate and advanced.

Using pessimistic concurrency


Windows Server AppFabric caching supports pessimistic concurrency by placing locks on cache items that are expected to be updated by more than one caching client. This is in contrast to optimistic concurrency, where no locks are taken and each cache client is allowed to modify a cache item, provided there is no mismatch in the cache item's version number.
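
For comparison, the following is a minimal sketch of the optimistic, version-based approach; the cache name, key, and value used here are illustrative assumptions and not part of the recipe:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class OptimisticConcurrencySketch
    {
        static void Main()
        {
            // DataCacheFactory reads the cache client settings (hosts, ports)
            // from the application configuration file.
            DataCache cache = new DataCacheFactory().GetCache("ReferenceData");

            // Retrieve the item together with its current version number.
            DataCacheItem cachedItem = cache.GetCacheItem("product:42");
            DataCacheItemVersion version = cachedItem.Version;

            try
            {
                // Put succeeds only if the item still has the same version;
                // no lock is ever taken on the item.
                cache.Put("product:42", "updated value", version);
            }
            catch (DataCacheException ex)
            {
                if (ex.ErrorCode == DataCacheErrorCode.CacheItemVersionMismatch)
                {
                    // Another client updated the item first; re-read and retry.
                    Console.WriteLine("Version mismatch - the item was changed by another client.");
                }
            }
        }
    }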

The following three key API calls, available on DataCache, support pessimistic concurrency (via locks); a short usage sketch follows the list:

  • GetAndLock

  • PutAndUnlock

  • Unlock
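
The sketch below shows how these three calls typically fit together; the cache name, key, value, and timeout are illustrative assumptions rather than part of the recipe:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class PessimisticLockingSketch
    {
        static void Main()
        {
            // DataCacheFactory reads the cache client settings (hosts, ports)
            // from the application configuration file.
            DataCache cache = new DataCacheFactory().GetCache("ReferenceData");

            DataCacheLockHandle lockHandle;

            // Read the item and lock it; the lock is released automatically
            // once the supplied timeout elapses, if not released earlier.
            string product = (string)cache.GetAndLock("product:42",
                TimeSpan.FromSeconds(30), out lockHandle);

            // Either write the new value and release the lock in a single call...
            cache.PutAndUnlock("product:42", product + " (updated)", lockHandle);

            // ...or, when no update is needed, release the lock without writing:
            // cache.Unlock("product:42", lockHandle);
        }
    }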

When the lock is acquired, a (lock) handle is returned to the cache client. For example, once a particular cache client holds a lock handle obtained via GetAndLock, no other cache client can successfully invoke GetAndLock on that item for as long as the lock is valid and alive (there is a timeout associated with each lock, which we will discuss later in this recipe).
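
As an illustration of that behavior, the hedged sketch below assumes that a second GetAndLock issued while the first lock is still alive fails immediately with a DataCacheException whose ErrorCode is DataCacheErrorCode.ObjectLocked; the cache name and key are again illustrative:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class LockContentionSketch
    {
        static void Main()
        {
            DataCache cache = new DataCacheFactory().GetCache("ReferenceData");

            // The first caller acquires the lock.
            DataCacheLockHandle firstHandle;
            cache.GetAndLock("product:42", TimeSpan.FromSeconds(30), out firstHandle);

            try
            {
                // A second GetAndLock on the same key does not block while the
                // first lock is alive; it fails straight away.
                DataCacheLockHandle secondHandle;
                cache.GetAndLock("product:42", TimeSpan.FromSeconds(30), out secondHandle);
            }
            catch (DataCacheException ex)
            {
                if (ex.ErrorCode == DataCacheErrorCode.ObjectLocked)
                {
                    Console.WriteLine("Item is locked; retry after the lock is released or times out.");
                }
            }

            // Release the original lock so other clients can acquire it.
            cache.Unlock("product:42", firstHandle);
        }
    }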

It should be...