GNU/Linux and opensource news gathered

Today, 5 news items:

  • TTP iPrint Printer Lists and Extending Open Enterprise Server Part VI: Installing and Configuring Printer Lists, by jlodom, March 7, 2018

    Wednesday, March 7, 2018 :: Novell News :: RSS
    After a long hiatus, I am back to conclude this series with the remaining articles over the next few weeks. Since we last met, OES 2018 has been released and, as expected, it contains even more functionality in the “iPrint for OES” offshoot. We will not cover that here as what we need to note was covered in Part V.
    The post TTP iPrint Printer Lists and Extending Open Enterprise Server Part VI: Installing and Configuring Printer Lists appeared first on Cool Solutions.
  • How Google uses Census internally, by Open Source Programs Office, March 7, 2018

    Wednesday, March 7, 2018 :: Google Open Source Blog :: RSS
    This post is the first in a series about OpenCensus, a set of open source instrumentation libraries based on what we use inside Google. This series will cover the benefits of OpenCensus for developers and vendors, Google’s interest in open sourcing instrumentation tools, how to get started with OpenCensus, and our long-term vision.

    If you’re new to distributed tracing and metrics, we recommend Adrian Cole’s excellent talk on the subject: Observability Three Ways.

    Gaining Observability into Planet-Scale Computing

    Google adopted or invented new technologies, including distributed tracing (Dapper) and metrics processing, in order to operate some of the world’s largest web services. However, building analysis systems didn’t solve the difficult problem of instrumenting and extracting data from production services. This is what Census was created to do.

    The Census project provides uniform instrumentation across most Google services, capturing trace spans, app-level metrics, and other metadata like log correlations from production applications. One of the biggest benefits of uniform instrumentation to developers inside of Google is that it’s almost entirely automatic: any service that uses gRPC automatically collects and exports basic traces and metrics.

    OpenCensus offers these capabilities to developers everywhere. Today we’re sharing how we use distributed tracing and metrics inside of Google.
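    The core idea above, spans that record timing and nest into a call tree, can be sketched in a few lines of pure Python. This is a hypothetical toy tracer for illustration only; it is not the actual OpenCensus or Dapper API, and all names (`ToyTracer`, `span`) are invented:

```python
import time
import uuid
from contextlib import contextmanager

class ToyTracer:
    """Minimal illustration of span-based tracing: each span records a
    name, its timing, and its parent, so finished spans form a call tree."""
    def __init__(self):
        self.spans = []    # finished spans, ready to be exported
        self._stack = []   # spans currently open on this thread

    @contextmanager
    def span(self, name):
        record = {
            "id": uuid.uuid4().hex[:8],
            "name": name,
            "parent": self._stack[-1]["id"] if self._stack else None,
            "start": time.monotonic(),
        }
        self._stack.append(record)
        try:
            yield record
        finally:
            record["end"] = time.monotonic()
            self._stack.pop()
            self.spans.append(record)

tracer = ToyTracer()
with tracer.span("handle_request"):
    with tracer.span("query_backend"):
        pass  # real work would happen here

# The inner span points at its parent, giving the chain of calls.
child, parent = tracer.spans[0], tracer.spans[1]
print(child["name"], "->", parent["name"])
```

    In a real deployment this bookkeeping is exactly what framework integration automates: because gRPC wraps every call, the library can open and close spans like these without the service author writing any of this code.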

    Incident Management

    When latency problems or new errors crop up in a highly distributed environment, visibility into what’s happening is critical. For example, when the latency of a service crosses expected boundaries, we can view distributed traces in Dapper to find where things are slowing down. Or when a request is returning an error, we can look at the chain of calls that led to the error and examine the metadata captured during a trace (typically logs or trace annotations). This is effectively a bigger stack trace. In rare cases, we enable custom trigger-based sampling which allows us to focus on specific kinds of requests.

    Once we know there’s a production issue, we can use Census data to determine the regions, services, and scope (one customer vs many) of a given problem. You can use service-specific diagnostics pages, called “z-pages,” to monitor problems and the results of solutions you deploy. These pages are hosted locally on each service and provide a firehose view of recent requests, stats, and other performance-related information.
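    The z-page idea, a per-process page summarizing recent requests, comes down to a bounded in-memory buffer plus an on-demand summary. A toy sketch, with all names (`ToyZPage`, `record`, `render`) invented for illustration rather than taken from any real implementation:

```python
from collections import Counter, deque

class ToyZPage:
    """Keeps the N most recent requests and summarizes them on demand,
    similar in spirit to a per-service diagnostics page."""
    def __init__(self, capacity=100):
        self.recent = deque(maxlen=capacity)  # old entries fall off

    def record(self, path, status, latency_ms):
        self.recent.append({"path": path, "status": status,
                            "latency_ms": latency_ms})

    def render(self):
        statuses = Counter(r["status"] for r in self.recent)
        lats = sorted(r["latency_ms"] for r in self.recent) or [0]
        p50 = lats[len(lats) // 2]  # median of the retained window
        return "\n".join([f"requests: {len(self.recent)}",
                          f"statuses: {dict(statuses)}",
                          f"p50 latency: {p50} ms"])

page = ToyZPage(capacity=3)
for status, lat in [(200, 12), (200, 15), (500, 230), (200, 9)]:
    page.record("/api/v1/items", status, lat)
print(page.render())  # only the 3 most recent requests are summarized
```

    Because the buffer is bounded and local to the process, serving such a page costs almost nothing, which is what makes the "firehose of recent requests" view practical on every instance.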

    Performance Optimization

    At Google’s scale, we need to be able to instrument and attribute costs for services. We use Census to help us answer questions like:
    • How much CPU time does my query consume?
    • Does my feature consume more storage resources than before?
    • What is the cost of a particular user operation at a particular layer of the stack?
    • What is the total cost of a particular user operation across all layers of the stack?
    We’re obsessed with reducing the tail latency of all services, so we’ve built sophisticated analysis systems that process traces and metrics captured by Census to identify regressions and other anomalies.

    Quality of Service

    Google also improves performance dynamically depending on the source and type of traffic. Using Census tags, traffic can be directed to more appropriate shards, or we can do things like load shedding and rate limiting.
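    Tag-driven routing and load shedding can be illustrated with a toy dispatcher. All names here (`pick_shard`, `admit`, the tag keys) are invented for the sketch; they describe the pattern, not any real Census interface:

```python
import hashlib

def pick_shard(tags, num_shards):
    """Route by a stable hash of the traffic-class tag, so requests of
    the same class consistently land on the same shard."""
    key = tags.get("traffic_class", "default")
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

def admit(tags, load):
    """Shed low-priority (batch) traffic first when overloaded."""
    if load > 0.9 and tags.get("priority") == "batch":
        return False
    return True

shard = pick_shard({"traffic_class": "mobile"}, 8)
print("shard:", shard)
print(admit({"priority": "interactive"}, 0.95))  # kept under load
print(admit({"priority": "batch"}, 0.95))        # shed under load
```

    The point is that the decision inputs travel with the request as tags, so routing and shedding policies can be changed centrally without touching the services that emit them.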

    Next week we’ll discuss Google’s motivations for open sourcing Census, then we’ll shift the focus back onto the open source project itself.

    By Pritam Shah and Morgan McLean, Census team
  • The Building Blocks of Interpretability, by Open Source Programs Office, March 7, 2018

    Wednesday, March 7, 2018 :: Google Open Source Blog :: RSS
    Cross-posted on the Google Research Blog.

    In 2015, our early attempts to visualize how neural networks understand images led to psychedelic images. Soon after, we open sourced our code as DeepDream and it grew into a small art movement producing all sorts of amazing things. But we also continued the original line of research behind DeepDream, trying to address one of the most exciting questions in Deep Learning: how do neural networks do what they do?

    Last year in the online journal Distill, we demonstrated how those same techniques could show what individual neurons in a network do, rather than just what is “interesting to the network” as in DeepDream. This allowed us to see how neurons in the middle of the network are detectors for all sorts of things — buttons, patches of cloth, buildings — and see how those build up to be more and more sophisticated over the network's layers.
    Visualizations of neurons in GoogLeNet. Neurons in higher layers represent higher level ideas.
    While visualizing neurons is exciting, our work last year was missing something important: how do these neurons actually connect to what the network does in practice?

    Today, we’re excited to publish “The Building Blocks of Interpretability,” a new Distill article exploring how feature visualization can be combined with other interpretability techniques to understand aspects of how networks make decisions. We show that these combinations allow us to sort of “stand in the middle of a neural network” and see some of the decisions being made at that point, and how they influence the final output. For example, we can see things like how a network detects a floppy ear, and then how that increases the probability it gives to the image being a “Labrador retriever” or “beagle”.
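    At its core, feature visualization works by optimizing an input with gradient ascent so that it maximally activates a chosen neuron. A toy pure-Python sketch of that idea with a single linear "neuron"; this is a didactic stand-in, not Lucid's API, and every name in it is invented:

```python
# Toy feature visualization: gradient ascent on the input to maximize
# one "neuron". For activation = sum(w_i * x_i), the gradient with
# respect to x is just w, so the input drifts toward the weight pattern.
weights = [0.5, -1.0, 2.0]          # the neuron we want to "visualize"

def activation(x):
    return sum(w * xi for w, xi in zip(weights, x))

x = [0.0, 0.0, 0.0]                 # start from a blank input
lr = 0.1
for _ in range(50):
    grad = weights                  # d(activation)/dx for a linear neuron
    x = [xi + lr * g for xi, g in zip(x, grad)]

# The optimized input is proportional to the weights: it shows what the
# neuron responds to, which is the essence of feature visualization.
print([round(v, 2) for v in x])     # → [2.5, -5.0, 10.0]
```

    Real networks are nonlinear, so the gradient must be recomputed at every step by backpropagation, and extra regularization is needed to keep the optimized images natural-looking, but the loop is the same.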

    We explore techniques for understanding which neurons fire in the network. Normally, if we ask which neurons fire, we get something meaningless like “neuron 538 fired a little bit,” which isn’t very helpful even to experts. Our techniques make things more meaningful to humans by attaching visualizations to each neuron, so we can see things like “the floppy ear detector fired”. It’s almost a kind of MRI for neural networks.
    We can also zoom out and show how the entire image was “perceived” at different layers. This allows us to really see the transition from the network detecting very simple combinations of edges, to rich textures and 3d structure, to high-level structures like ears, snouts, heads and legs.
    These insights are exciting by themselves, but they become even more exciting when we can relate them to the final decision the network makes. So not only can we see that the network detected a floppy ear, but we can also see how that increases the probability of the image being a Labrador retriever.
    In addition to our paper, we’re also releasing Lucid, a neural network visualization library building off our work on DeepDream. It allows you to make the sort of lucid feature visualizations we see above, in addition to more artistic DeepDream images.

    We’re also releasing Colab notebooks. These notebooks make it extremely easy to use Lucid to reproduce the visualizations in our article: just open a notebook and click a button to run the code, with no setup required!
    In Colab notebooks you can click a button to run code and see the result below.
    This work only scratches the surface of the kind of interfaces that we think it’s possible to build for understanding neural networks. We’re excited to see what the community will do — and we’re excited to work together towards deeper human understanding of neural networks.

    By Chris Olah, Research Scientist and Arvind Satyanarayan, Visiting Researcher, Google Brain Team
  • File and Print Community Webinars – 2018 (March to May), by Punyashloka Mall, March 7, 2018

    Wednesday, March 7, 2018 :: Novell News :: RSS
    We had a pretty good run with our community webinars last year, and I am happy to announce our next set of webinars for OES, Filr, and iPrint. Scheduled for the last Wednesday of each month, the series will run for a period of 3 months, and we will publish a new list …
    The post File and Print Community Webinars – 2018 (March to May) appeared first on Cool Solutions.
  • Call for Speakers, KiwiParty 2018, by Raphael, March 7, 2018

    Wednesday, March 7, 2018 :: Actualités :: RSS
    Here we go for the 9th edition of the KiwiParty (yes, already!), our event gathering talks on web design and its best practices: accessibility, usability, new technologies, and standards compliance.

    What's new?

    After relocating to Lyon last year at the invitation of BlendWebMix, the KiwiParty makes its big return to the land of knacks and flammekueche for a full day in June 2018 (the exact date is not yet fixed).
    This year we will have room for 300 attendees! To accommodate as many people as possible, registration will open in several successive waves over the course of May.

    Want to speak?

    As every year, we count on people motivated by and passionate about web best practices to come share their experience during the day.
    If you would like to take part in the event as a speaker, send us your application and your ideas by filling in the dedicated form:
    You have until Wednesday, May 2, 2018 to propose your talks. The final selection will be made a few days later and announced on the site; we will of course keep you informed as soon as possible.
    Thank you in advance for your participation.
    • Call-for-speakers deadline: Wednesday, May 2, 2018
    • KiwiParty date and venue: a Friday in late June 2018 in Strasbourg (exact day to be confirmed)

    IMPORTANT note: talk selection is anonymized, which means our votes will be based mainly on the title and description of your presentation. Polish them!

    Want to help us?

    To organize the KiwiParty, cover speakers' expenses, and serve our legendary Goûtaÿ, we are actively looking for sponsors. Every contribution will of course be considered with care.

    Published by Raphael