Crackers bypass Denuvo 4.8 copy protection, Sonic Forces illegally available online

heise online Newsticker - 24 January 2018 - 11:00
Crackers have circumvented the copy protection of the game Sonic Forces. The protection in question is the current Denuvo version 4.8, which until now had been considered secure.

Neffos X1 Lite: 100-euro smartphone convinces in the review

heise online Newsticker - 24 January 2018 - 11:00
The Neffos X1 Lite is available for just under 100 euros. The technology users get for so little money sounds almost too good to be true. Yet the smartphone does indeed hold up in practice. TechStage explains in its review whether the purchase is worthwhile.

Stricter guidelines for US research funding

heise online Newsticker - 24 January 2018 - 11:00
Researchers need money to conduct clinical trials. In the USA they also receive it from the health agency NIH, which is now introducing stricter rules.

Real Life Guys: To the bakery in a bathtub drone

heise online Newsticker - 24 January 2018 - 10:30
Flying to the bakery in a bathtub? Three YouTubers have simply converted a discarded tub into a drone. With six rotors it flies rather impressively.

Surveillance program: NSA "accidentally" deleted data relevant to court proceedings

heise online Newsticker - 24 January 2018 - 10:00
In a years-long lawsuit over mass surveillance that also affected US citizens, the NSA is once again struggling with storage problems. It has now had to admit that content data intended to serve as evidence was overwritten.

HackerOne study: Bug bounties as a lucrative source of income

heise online Newsticker - 24 January 2018 - 9:30
Those who find vulnerabilities in web applications, software and hardware often receive prize money – so-called bug bounties. A new study reveals interesting details about the earning potential and background of the bounty hunters.

Nokia Body Cardio: Measurement feature removed via firmware update

heise online Newsticker - 24 January 2018 - 9:30
The Nokia Body Cardio Wi-Fi scale can measure the user's pulse wave velocity – a unique selling point. A new software update removes the feature. The reasoning remains vague.

Microsoft: The digital assistant will be the "alter ego" of the future

heise online Newsticker - 24 January 2018 - 9:00
At Microsoft's Berlin office, Brad Smith, the company's chief legal officer, presented a book that deals with artificial intelligence and the future of work.

Next-gen game graphics: Unity shows Book of the Dead tech demo

heise online Newsticker - 24 January 2018 - 8:30
A group of Unity developers has created a tech demo called Book of the Dead that can be distinguished from reality only on close inspection.

Update refusal: iOS 11 is spreading more slowly than earlier versions

heise online Newsticker - 24 January 2018 - 8:30
In recent years, no major iOS update has been installed by iPhone and iPad users as hesitantly as iOS 11. This is particularly problematic because Apple no longer patches security holes in older versions.

ISPO: Digitalization in the sporting goods industry moves forward

heise online Newsticker - 24 January 2018 - 8:30
Sporting goods retailers find themselves increasingly exposed to digital transformation. At this year's ISPO trade fair, the topic of digitalization therefore takes up a particularly large amount of space.

BIU: One in ten gamers in Germany plays e-sports

heise online Newsticker - 24 January 2018 - 8:00
In Germany there are over 4 million people who play video games in leagues and at events, i.e. who engage in e-sports. This is the finding of a survey by BIU and YouGov.

Sports rights: Facebook hires Eurosport CEO

heise online Newsticker - 24 January 2018 - 8:00
The social network is stepping up its efforts to keep its audience entertained with live broadcasts of sporting events and is bringing in the current head of Eurosport, who is to make the move after the Winter Olympics.

INsReady: Single Sign-on using OAuth2 and JWT for Distributed Architecture

Planet Drupal - 24 January 2018 - 7:35

Single sign-on (SSO) is a property, where a user logs in with a single ID and password to gain access to a connected system or systems without using different usernames or passwords, or in some configurations seamlessly sign on at each system. A simple version of single sign-on can be achieved over IP networks using cookies but only if the sites share a common DNS parent domain. ---- https://en.wikipedia.org/wiki/Single_sign-on

As the definition suggests, SSO becomes a critical part of the system design and user experience design for a complex, distributed system, or for a new application that must integrate with an existing connected system. With SSO enabled, a system owner can manage access control in one centralized place, so granting users permissions across multiple subsystems stays organized. End users, on the other hand, only need to secure one set of credentials to access multiple resources, or to access functionality whose distributed architecture is hidden from them.

As we enter 2018, our software becomes more complex and its services more ubiquitous. Let's use Google's SSO as an example to illustrate the demands on a modern SSO:

  • A user can sign in with password once for both Gmail.com and YouTube.com
  • A user can go to Feedly.com or New York Times and use the "Sign-in with Google" to authorize third parties to access the user's data
  • A user can sign in with password on a mobile device to sync all photos or contacts from Google
  • A Google Home device can connect to multiple people's Google accounts, and read out their calendar events when needed
  • YouTube.com developers can use Polymer as frontend technology, and authenticate with YouTube.com backend to load the content via web services API

You might not realize how complex a system has to be to support the modern use cases above until your own system needs one and you have to build that support. Let's translate the use cases into SSO technical requirements:

  • Support SSO across multiple domains
  • Support the Password Grant (sign-in directly on the web), Authorization Code Grant (user authorizes a third party), Client Credentials Grant (machine sign-in), and Implicit Grant (third-party web app sign-in)
  • Support a distributed architecture, where the authentication server is not necessarily on the same domain or the same server as the resource servers
  • Web services APIs on resource servers can effectively authenticate requests
  • No technology lock-in for the authentication server, resource servers, or client-side apps
  • Support a seamless user authorization experience across different client-side technologies (web, mobile or IoT) and across different first-party and third-party applications

Fortunately, we can leverage existing open standards and open source software to implement SSO for a distributed system. First, we rely on the OAuth 2.0 Authorization Framework and JSON Web Token (JWT) open protocols. OAuth 2.0 supports the common authentication workflows; in fact, the four grant types in the requirements above are terminology borrowed from the OAuth 2.0 protocol. JWT standardizes how a successful authentication result is shared across client apps and resource servers. The protocol allows a resource server to trust a client request without double-checking with the authentication server, which lowers the amount of communication within a distributed system and therefore increases the performance of authentication and identification overall. For more technical details on how to use OAuth 2.0 and JWT for authentication, please see Stateless authentication with OAuth 2 and JWT - JavaZone 2015.
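
To make the stateless-trust idea concrete, here is a minimal sketch (not from the original post) of how a resource server could validate a JWT bearer token locally. It assumes the firebase/php-jwt library and an RS256 public key exported from the authentication server; both are illustrative choices, not requirements of this article.

<?php

use Firebase\JWT\JWT;

/**
 * Validates an "Authorization: Bearer <token>" header locally (sketch).
 */
function authenticate_request($authorization_header, $public_key) {
  if (strpos($authorization_header, 'Bearer ') !== 0) {
    return NULL;
  }
  $token = substr($authorization_header, 7);
  try {
    // The signature and the standard time claims (exp, nbf) are verified
    // against the public key, so no round trip to the authentication
    // server is needed to trust the request.
    return JWT::decode($token, $public_key, ['RS256']);
  }
  catch (\Exception $e) {
    // Invalid signature, expired token, malformed JWT, etc.
    return NULL;
  }
}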

As for building the authentication server, where all users and machines sign in, authenticate, authorize, or identify themselves, the critical requirement is that this server implements the OAuth 2.0 protocol and uses JWT as the bearer token. As long as the authentication server implements those protocols, the rest of the supporting features can be built on any technology. I like to use the simple_oauth module with Drupal 8, because out of the box this solution is a whole application, including user, consumer and token management. In particular, I have been helping to optimize the user experience of the authorization process for different use cases. If you are not familiar with Drupal, the Contenta CMS distribution has pre-packaged simple_oauth and its dependencies for you.
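
As an illustration of the first workflow, here is a hedged sketch of a client obtaining a token from such an authentication server via the Password Grant, using Guzzle. The /oauth/token path and the form fields follow the OAuth 2.0 password grant as implemented by simple_oauth; the host name and credentials are placeholders, not values from this post.

<?php

// Request an access token from the authentication server (Password Grant).
$http = new \GuzzleHttp\Client(['base_uri' => 'https://auth.example.com']);
$response = $http->post('/oauth/token', [
  'form_params' => [
    'grant_type' => 'password',
    'client_id' => 'CLIENT_ID',
    'client_secret' => 'CLIENT_SECRET',
    'username' => 'jane',
    'password' => 'secret',
  ],
]);
$payload = json_decode((string) $response->getBody(), TRUE);

// This JWT is what client apps send to every resource server as
// "Authorization: Bearer <token>".
$access_token = $payload['access_token'];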

Once the authentication server is in place, we implement the protocol and workflows on the resource servers and client-side apps. This part largely depends on the resource server and client-side technologies you have picked. We are building this part of the integration with Node.js, Laravel, Drupal 7 and Drupal 8 applications. At the time of writing, we have published the module oauth2_jwt_sso on Drupal 8.

I leave the extensibility, limitations, and further technical details of this SSO solution for the upcoming DrupalCon Nashville session. I will include the session video here in late April 2018.

Files: SSO diagram.png
Tags: SSO, OAuth2, JWT, Decoupled, Distributed Architecture, Security, Drupal Planet

PreviousNext: Better image optimisation in Drupal

Planet Drupal - 24 January 2018 - 5:08

When optimising a site for performance, one of the options with the best effort-to-reward ratio is image optimisation. Crunching those images in your Front End workflow is easy, but how about author-uploaded images through the CMS?

by Tony Comben / 24 January 2018

Recently, a client of ours was looking for ways to reduce the size of uploaded images on their site without burdening the authors. To solve this, we used the module Image Optimize which allows you to use a number of compression tools, both local and 3rd party.

The tools it currently supports include the following local binaries, which are the ones compared below:

  • AdvPng, OptiPng, PngCrush, PngOut and PngQuant for PNGs
  • JfifRemove, JpegOptim and JpegTran for JPEGs

We decided to avoid the use of 3rd party services, as processing the images on our servers could reduce processing time (no waiting for a third party to reply) and ensure reliability.

Picking your server-side compression tool

In order to pick the tools which best served our needs, we chose images that closely represented the type of image the authors often upload: a photo featuring a person's face against a complex background, as one PNG and one JPEG, and ran them through each of the tools at a moderately aggressive compression level.

PNG Results

Compression Library                  | Compressed size | Percentage saving
Original (Drupal 8 default resizing) | 234kb           | -
AdvPng                               | 234kb           | 0%
OptiPng                              | 200kb           | 14.52%
PngCrush                             | 200kb           | 14.52%
PngOut                               | 194kb           | 17.09%
PngQuant                             | 63kb            | 73.07%

Compression Library                  | Compressed size | Percentage saving
Original                             | 1403kb          | -
AdvPng                               | 1403kb          | 0%
OptiPng                              | 1288kb          | 8.19%
PngCrush                             | 1288kb          | 8.19%
PngOut                               | 1313kb          | 6.41%
PngQuant                             | 445kb           | 68.28%

JPEG Results

Compression Library                  | Compressed size | Percentage saving
Original (Drupal 8 default resizing) | 57kb            | -
JfifRemove                           | 57kb            | 0%
JpegOptim                            | 49kb            | 14.03%
JpegTran                             | 57kb            | 0%

Compression Library                  | Compressed size | Percentage saving
Original                             | 778kb           | -
JfifRemove                           | 778kb           | 0%
JpegOptim                            | 83kb            | 89.33%
JpegTran                             | 715kb           | 8.09%

Using a combination of PngQuant and JpegOptim, we could save anywhere between 14% and 89% in file size, with larger images bringing greater percentage savings.

Setting up automated image compression in Drupal 8

The Image Optimize module allows us to set up optimisation pipelines and attach them to our image styles. This allows us to set both site-wide and per-image style optimisation.

After installing the Image Optimize module, head to the Image Optimize pipelines configuration (Configuration > Media > Image Optimize pipeline) and add a new optimization pipeline.

Now add the PngQuant and JpegOptim processors. If they have been installed on the server, Image Optimize should pick up their location automatically; you can also set the location manually if you are using a standalone binary.

JpegOptim has some additional quality settings; I’m setting “Progressive” to always and “Quality” to a sweet spot of 60. A value of 70 could also be used as a more conservative target.

The final pipeline looks like the following:

Back on the Image Optimize pipelines configuration page, we can now set the new pipeline as the sitewide default:

And boom! Automated sitewide image compression!

Overriding image compression for individual image styles

If the default compression pipeline is too aggressive (or conservative) for a particular image style, we can override it in the Image Styles configuration (Configuration > Media > Image styles). Edit the image style you’d like to override, and select your alternative pipeline:

Applying compression to existing images

Flushing the image cache will recreate existing images, with compression applied, the next time each image is loaded. This can be done with the drush command:

drush image-flush --all
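
If flushing every derivative is broader than you need, a single image style can also be flushed programmatically. The sketch below uses core's ImageStyle entity API; the style name 'teaser' is only an example, not something defined in this post.

<?php

use Drupal\image\Entity\ImageStyle;

// Flush the derivatives of one image style so they are regenerated
// (and sent through the optimisation pipeline) on the next request.
// 'teaser' is a placeholder style name.
if ($style = ImageStyle::load('teaser')) {
  $style->flush();
}

The same snippet can also be run as a one-off with drush ev if you would rather not put it in a module.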

Conclusion

Setting up automated image optimisation is a relatively simple process, with potentially large impacts on site performance. If you have experience with image optimisation, I would love to hear about it in the comments.

Tagged Image Optimisation

MidCamp - Midwest Drupal Camp: We are pleased to announce Chris Rooney will be our keynote speaker at MidCamp 2018

Planet Drupal - 24 January 2018 - 2:51

We are so excited to have Chris as our keynote speaker this year.  He is the President and Founder of Digital Bridge Solutions, a Drupal and Magento Agency here in Chicago that has been a supporter of MidCamp since its inception. 

His presentation at our 2017 event, Whitewashed - Drupal's Diversity Problem And How To Solve It, was a deep and eye-opening look at diversity in Drupal and the greater tech world, and at how we can go about making it better.

Since then, he has partnered with Palantir.net on an ambitious inclusion initiative that introduces students to Drupal.  Last year, they brought a group of students from Baltimore to DrupalCon Baltimore.  They have held Drupal training sessions here in Chicago, and are currently working to bring students from Genesys Works and NPower to DrupalCon Nashville.

Chris' presentation will be a collective group journey into sensitive and vulnerable territories, but promises interactivity, a safe space for the exchange of ideas, and perhaps even a little humor.  We hope you join us for it.

Session Submissions close Friday!

MidCamp is looking for folks just like you to speak to our Drupal audience! Experienced speakers are always welcome, but our camp is also a great place to start for first-time speakers.

MidCamp is soliciting sessions geared toward beginner through advanced Drupal users. Know someone who might be a new voice, but has something to say? Please suggest they submit a session.

Buy a Ticket

Tickets and Individual Sponsorships are available on the site for MidCamp 2018.

Click here to get yours!

Schedule of Events
  • Thursday, March 8th, 2018 - Training and Sprints
  • Friday, March 9th, 2018 - Sessions and Social
  • Saturday, March 10th, 2018 - Sessions and Social
  • Sunday, March 11th, 2018 - Sprints
Sponsor MidCamp 2018!

Are you or your company interested in becoming a sponsor for the 2018 event? Sponsoring MidCamp is a great way to promote your company, organization, or product and to show your support for Drupal and the Midwest Drupal community. It also is a great opportunity to connect with potential customers and recruit talent.


Volunteer for MidCamp 2018

Want to be part of the MidCamp action? We're always looking for volunteers to help out during the event.  We need registration table help, room monitors, help with setting up the venue, and help clearing out.  Sign up at http://bit.ly/midcamp-volunteer-signup and we'll be in touch shortly!

We hope you'll join us at MidCamp 2018!

Dcycle: Caching a Drupal 8 REST resource

Planet Drupal - 24 January 2018 - 2:00

Here are a few things I learned about caching for REST resources.

There are probably better ways to accomplish this, but here is what works for me.

Let's say we have a REST resource that looks something like this in my_module/src/Plugin/rest/resource/MyRestResource.php, and that we have enabled it using the Rest UI module and given anonymous users permission to view it:

<?php

namespace Drupal\my_module\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * This is just an example.
 *
 * @RestResource(
 *   id = "this_is_just_an_example",
 *   label = @Translation("Display the title of node 1"),
 *   uri_paths = {
 *     "canonical" = "/api/v1/get"
 *   }
 * )
 */
class MyRestResource extends ResourceBase {

  /**
   * {@inheritdoc}
   */
  public function get() {
    $node = node_load(1);
    $response = new ResourceResponse(
      [
        'title' => $node->getTitle(),
        'time' => time(),
      ]
    );
    return $response;
  }

}

Now, we can visit http://example.localhost/api/v1/get?_format=json and we will see something like:

{"title":"Some Title","time":1516803204}

Reloading the page, ‘time’ stays the same. That means caching is working; we are not re-computing our Json output each time someone requests it.

How to invalidate the cache when the title changes

If we edit node 1 and change its title to, say, “Another title”, and reload http://example.localhost/api/v1/get?_format=json, we’ll see the old title. To make sure the cache is invalidated when this happens, we need to provide cacheability metadata to our response telling it when it needs to be recomputed.

Our node, when it’s loaded, contains within it all the caching metadata needed to describe when it should be recomputed: when the title changes, when new filters are added to the text format that’s being used, etc. We can add this information to our ResourceResponse like this:

...
$response->addCacheableDependency($node);
return $response;
...

When we clear our cache with drush cr and reload our page, we’ll see something like:

{"title":"Another title","time":1516804411}

We know this is still cached because the time stays the same no matter how often we load the page. Try it, it’s fun!

Even more fun is changing the title of node 1 and reloading our Json page, and seeing the title change without clearing the cache:

{"title":"Yet another title","time":1516804481} How to set custom cache invalidation events

Let’s say you want to trigger a cache rebuild for some reason other than those defined by the node itself (title change, etc.).

A real-world example might be events: an “upcoming events” page should only display events which start later than now. If we invalidate the cache every day, then we’ll never show yesterday’s events in our events feed. Here, we need to add our custom cache invalidation event, in this case “rebuild events feed”.

For the purpose of this demo, we won’t actually build an events feed, but we’ll see how cron might be able to trigger cache invalidation.

Let’s add the following code to our response:

...
use Drupal\Core\Cache\CacheableMetadata;
...
$response->addCacheableDependency($node);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray([
  '#cache' => [
    'tags' => [
      'rebuild-events-feed',
    ],
  ],
]));
return $response;
...

This uses Drupal’s cache tags concept and tells Drupal that when the cache tag ‘rebuild-events-feed’ is invalidated, all cacheable responses which carry that cache tag should be invalidated as well. I prefer this to a ‘max-age’ approach because it gives us more fine-grained control over when to invalidate our caches.

On cron, for example, we could invalidate ‘rebuild-events-feed’ only if events have passed since our last invalidation of that tag.
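
As a sketch of that cron idea (not part of the original post), here is a hypothetical hook_cron() implementation. The 'event' content type, the field_event_start timestamp field and the State key are all assumptions made for illustration.

<?php

use Drupal\Core\Cache\Cache;

/**
 * Implements hook_cron() (hypothetical my_module.module).
 */
function my_module_cron() {
  $state = \Drupal::state();
  $last = $state->get('my_module.events_feed_rebuilt', 0);
  $now = \Drupal::time()->getRequestTime();

  // Count events whose (assumed timestamp) start field falls between the
  // last rebuild and now; only then does the events feed need a rebuild.
  $started = \Drupal::entityQuery('node')
    ->condition('type', 'event')
    ->condition('field_event_start', [$last, $now], 'BETWEEN')
    ->count()
    ->execute();

  if ($started > 0) {
    // Invalidate every cached response tagged 'rebuild-events-feed'.
    Cache::invalidateTags(['rebuild-events-feed']);
    $state->set('my_module.events_feed_rebuilt', $now);
  }
}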

For this example, we’ll just invalidate it manually. Clear your cache to begin using the new code (drush cr), then load the page; you will see something like:

{"hello":"Yet another title","time":1516805677}

As always, the time remains the same no matter how many times you reload the page.

Let’s say you are in the midst of a cron run and you have determined that you need to invalidate the cache for responses which have the cache tag ‘rebuild-events-feed’; you can run:

\Drupal::service('cache_tags.invalidator')->invalidateTags(['rebuild-events-feed'])

Let’s do it in Drush to see it in action:

drush ev "\Drupal::service('cache_tags.invalidator')->invalidateTags(['rebuild-events-feed'])"

We’ve just invalidated our ‘rebuild-events-feed’ tag and, hence, Responses that use it.

The dreaded “leaked metadata” error

This one is beyond my competence level, but I wanted to mention it anyway.

Let’s say you want to output your node’s URL in the Json output. You might consider computing it using $node->toUrl()->toString(), which will give us "/node/1".

Let’s add it to our code:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(),
'time' => time(),
...

This results in a very ugly error which completely breaks your site (at least at the time of this writing): “The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early.”.

The problem, it seems, is that Drupal detects that the URL object, like the node we saw earlier, contains its own internal information which tells it when its cache should be invalidated. Converting it to a string prevents the Response from being informed about that information somehow (again, if someone can explain this better than me, please leave a comment), so an exception is thrown.

The ‘toString()’ function has an optional parameter, “$collect_bubbleable_metadata”, which can be used to get not just a string, but also information about when its cache should be invalidated. In Drush, this will look something like:

drush ev 'print_r(node_load(1)->toUrl()->toString(TRUE))'

Drupal\Core\GeneratedUrl Object
(
    [generatedUrl:protected] => /node/1
    [cacheContexts:protected] => Array
        (
        )
    [cacheTags:protected] => Array
        (
        )
    [cacheMaxAge:protected] => -1
    [attachments:protected] => Array
        (
        )
)

This changes the return type of toString(), though: toString() no longer returns a string but a GeneratedUrl, so this won’t work:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(TRUE),
'time' => time(),
...

It gives us the error “Could not normalize object of type Drupal\Core\GeneratedUrl, no supporting normalizer found”.

ohthehugemanatee commented on Drupal.org on how to fix this. Integrating his suggestion, our code now looks like:

...
$url = $node->toUrl()->toString(TRUE);
$response = new ResourceResponse(
  [
    'title' => $node->getTitle(),
    'url' => $url->getGeneratedUrl(),
    'time' => time(),
  ]
);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
...

This will now work as expected.

With all the fun we’re having, though, let’s take this a step further: let’s say we want to export the feed of frontpage items in our Response:

$url = $node->toUrl()->toString(TRUE);
$view = \Drupal\views\Views::getView("frontpage");
$view->setDisplay("feed_1");
$view_render_array = $view->render();
$rendered_view = render($view_render_array);
$response = new ResourceResponse(
  [
    'title' => $node->getTitle(),
    'url' => $url->getGeneratedUrl(),
    'view' => $rendered_view,
    'time' => time(),
  ]
);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

You will not be surprised to see the “leaked metadata was detected” error again… In fact, you have come to love and expect this error at this point.

Here is where I’m completely out of my league; according to Crell, “[i]f you [use render() yourself], you’re wrong and you should fix your code”, but I’m not sure how to get a rendered view without using render() myself… I’ve implemented a variation on a Drupal.org comment by mikejw that suggests using a different render context to prevent Drupal from complaining.

use Drupal\Core\Render\RenderContext;

$view_render_array = NULL;
$rendered_view = NULL;
\Drupal::service('renderer')->executeInRenderContext(new RenderContext(), function () use ($view, &$view_render_array, &$rendered_view) {
  $view_render_array = $view->render();
  $rendered_view = render($view_render_array);
});

If we check to make sure we have this line in our code:

$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

we’re telling our Response’s cache to invalidate whenever our view’s cache invalidates. So, for example, if we have several nodes promoted to the front page in our view, we can modify any one of them and our entire Response’s cache will be invalidated and rebuilt.


Drupal.org Featured Case Studies: Chicago Park District Website

Planet Drupal - 23 January 2018 - 23:39
Completed Drupal site or project URL: https://www.chicagoparkdistrict.com/

The Chicago Park District owns more than 8,800 acres of green space, making it the largest municipal park manager in the nation. The Chicago Park District’s more than 600 parks offer thousands of sports and physical activities as well as cultural and environmental programs for youth, adults, and seniors. The Chicago Park District is also responsible for 28 indoor pools, 50 outdoor pools, and 26 miles of lakefront including 23 swimming beaches plus one inland beach.

Clarity redesigned, built, and hosts the official website for the Chicago Park District (CPD). Clarity designed and developed this user-friendly, mobile-responsive site, with a unified look and feel and a marketing emphasis to promote CPD’s parks, programs, and events. The new website acts as a solution focused on its customers – “front end” visitors of the website and “back end” content administrators – both of whom have a wide range of needs.

Specifically, the new site provides the following improvements and features:

  • New Content Management System (CMS) Platform
    • Drupal 8, the latest version of the popular open-source framework;
    • Allows CPD to more easily integrate and connect to third-party tools, such as
      • ActiveNet, which provides externally-hosted ecommerce functions;
      • AppliTrack, which provides job postings;
      • Bonfire, which provides procurement and contracting opportunities;
      • MailChimp, which provides newsletter signup capabilities.
  • Updated design based on user focus group reactions to the old site, including
    • A cleaner, refreshed look built for devices of all sizes;
    • Home page updates that allow CPD staff to push more information in a more organized fashion;
    • Larger emphasis on maps (hugely important for such a large metropolitan area);
    • The ability to highlight features and attractions, such as artworks and natural areas, that CPD has to offer both residents and visitors;
    • Overall increased speed and performance.
  • Improved administrative functions that allow for
    • Distributed content responsibilities;
    • Workflow approvals to ensure editorial integrity;
    • More modular administrative tools allowing CPD to highlight location details such as accessibility features

With its new site, Chicago Park District is now poised to better serve the long-term needs of residents and visitors for years to come.

Microsoft's drawing bot generates more accurate images from natural language

heise online Newsticker - 23 January 2018 - 19:30
Researchers at Microsoft have created a bot that uses neural networks to generate images from natural language. The network generates completely new images pixel by pixel and sticks more closely to the given description than previous techniques.

Security hole allows cheating on the Moodle learning platform

heise online Newsticker - 23 January 2018 - 19:00
Some Moodle versions are susceptible to attacks due to various vulnerabilities. Security updates are available.