Blog

  • Material-Message-Box

    WPF Material Message Box

    A WPF Message Box implementing material design


    ❇️ Main Features

    The message box has the following custom features:

    ✅ Material Theme design

✅ Custom styles for the window border, message foreground and background, title foreground and background, etc.

    ✅ Button to copy message box details to clipboard

    ✅ Scrollable message box content

    ✅ Right to left (RTL) support

✅ Message content is a .NET UIElement, which can host any content

    ❇️ Installing

    ▶️ Download from Nuget ☁⤵

▶️ Install from the Package Manager Console

    $ Install-Package MaterialMessageBox

Or, if using the dotnet CLI:

    $ dotnet add package MaterialMessageBox

    ❇️ Usage (Screenshots)

    Creating a simple message box

    MaterialMessageBox.Show("Your cool message here", "The awesome message title");

    Simple Message

    Show a message box with RTL support

    MaterialMessageBox.Show("Your cool message here", "The awesome message title", true);

    message box with RTL support

    Showing an error message

    MaterialMessageBox.ShowError(@"This is an error message");

    Error Message


    Capturing Message Box Results

    var result = MaterialMessageBox.ShowWithCancel($"This is a simple message with a cancel button. You can listen to the return value", "Message Box Title");

    Capturing Message Box Results

    Styling a message box

    CustomMaterialMessageBox msg = new CustomMaterialMessageBox
    {
        TxtMessage = { Text = "Do you like white wine?", Foreground = Brushes.White },
        TxtTitle = { Text = "This is too cool", Foreground = Brushes.White },
        BtnOk = { Content = "Yes" },
        BtnCancel = { Content = "Noooo" },
        MainContentControl = { Background = Brushes.MediumVioletRed },
        TitleBackgroundPanel = { Background = Brushes.BlueViolet },
    
        BorderBrush = Brushes.BlueViolet
    };
    
    msg.Show();
    MessageBoxResult results = msg.Result;

Styled message box

    ❇️ Toolkits used

    This library is built on top of the following toolkits:

    • Material Design In XAML Toolkit – Comprehensive and easy to use Material Design theme and control library for the Windows desktop.
    • Material Design Colors – ResourceDictionary instances containing standard Google Material Design swatches, for inclusion in a XAML application.

    ❇️ Contributing to this project

    If you’ve improved Material Message Box and think that other people would enjoy it, submit a pull request. Anyone and everyone is welcome to contribute.


    ❇️ Licence

    The MIT License (MIT)

    Copyright (c) 2021, Bespoke Fusion

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    ❤️

    Visit original content creator repository https://github.com/denpalrius/Material-Message-Box
  • pptempcheck

    PPTempCheck

A simple PlatformIO project, built for a 01Space ESP32-C3-0.42LCD development board used with a DHT11 sensor to measure temperature and humidity and publish them to an MQTT server, for consumption by Home Assistant's MQTT integration (supports discovery).

    Installation

    Sensor End

Clone the project, copy include/config.h-dist to include/config.h, edit the config to fit your setup (pin, Wi-Fi, MQTT server / sensor name / ID), fetch the dependencies with PlatformIO, then build and flash to your development board.

It should flash a "connecting to Wi-Fi" message a few times (if it keeps doing that, it can't connect), then a "connecting to MQTT server" message (likewise), and ultimately it will publish and show the measured temperature and humidity (if the readings are stuck at -1 when the actual values aren't -1, the sensor isn't being read).

    Temperature is always in celsius.

    Home Assistant End

All you need is an MQTT broker (I use the Mosquitto add-on for Home Assistant), the MQTT integration enabled, and discovery mode on; the sensor will then automatically appear as an entity, if everything is working.
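
For reference, Home Assistant's MQTT discovery works by having the device publish a retained JSON config message under the discovery prefix. Below is a minimal sketch of such a payload; the topic names, IDs, and state topic are illustrative, not taken from this project's config.h:

```python
import json

# Sketch of the discovery config message Home Assistant expects for a
# temperature sensor. The topic follows HA's documented
# "homeassistant/<component>/<object_id>/config" scheme; the IDs below
# are made up for illustration.
DISCOVERY_TOPIC = "homeassistant/sensor/pptempcheck_temp/config"

config_payload = {
    "name": "PPTempCheck Temperature",
    "state_topic": "pptempcheck/state",   # where the sensor publishes readings
    "unit_of_measurement": "°C",          # the firmware always reports Celsius
    "device_class": "temperature",
    "value_template": "{{ value_json.temperature }}",
    "unique_id": "pptempcheck_temp",      # lets HA track the entity over time
}

# This JSON string would be published retained, so HA can discover the
# sensor even after a broker or HA restart.
payload_json = json.dumps(config_payload)
print(DISCOVERY_TOPIC)
print(payload_json)
```

Once a message like this is on the broker, the MQTT integration creates the entity automatically; the sensor then only needs to keep publishing plain readings to the state topic.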

    License

    MIT style, see LICENSE.md file

    Links

    Visit original content creator repository
    https://github.com/ppetermann/pptempcheck

  • Sales-Data-Analysis

    Sales-Data-Analysis

Twelve months' worth of sales data was analyzed in this project.
Here is a comprehensive summary of the work:

    1. Import Necessary Libraries: The project began by importing essential Python libraries, pandas and os.
2. Data Collection and Merging: I collected and merged 12 months of sales data from separate CSV files into a single pandas DataFrame.
    3. Data Cleaning: The data was cleaned by handling missing values, erroneous entries, and data type conversions.
    4. Data Augmentation: I augmented the data with additional columns that could be useful for analysis. These included:
      a. ‘Month’: Extracted from the ‘Order Date’ column.
      b. ‘Sales’: Calculated by multiplying ‘Quantity Ordered’ and ‘Price Each’.
      c. ‘City’: Extracted from the ‘Purchase Address’ column, including the state to handle cities with the same name in different states.
      d. ‘Hour’ and ‘Minute’: Extracted from the ‘Order Date’ column to identify the time of purchase.
5. Data Analysis: I performed various analyses on the augmented data to answer specific business questions. These included:
  a. What was the best month for sales, and how much was earned?
  b. Which city had the highest number of sales?
  c. What time should advertisements be displayed to maximize the likelihood of customers buying a product?
  d. Which products were most often sold together?
  e. Which product was sold the most, and why?
  Each of these questions was addressed using a combination of pandas functions to manipulate the data and matplotlib to visualize the results. I also used methods from itertools and collections to identify frequently bought-together items.
    6. Conclusions: Based on the analyses, I drew several conclusions about the dataset:
    • The best month for sales was December.
    • The city with the highest sales was San Francisco, CA.
    • Advertisements should be displayed around 11 am and 7 pm for maximum impact.
    • iPhone and Lightning Charging Cable were often sold together.
    • AAA Batteries were the most sold product, likely due to their low price.
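
The augmentation and best-month steps above can be sketched with pandas on a toy dataset (the two rows are made up; the column names follow the description above):

```python
import pandas as pd

# Toy two-row stand-in for the merged 12-month data.
df = pd.DataFrame({
    "Order Date": ["12/30/19 00:01", "04/19/19 08:46"],
    "Quantity Ordered": [2, 1],
    "Price Each": [1700.00, 2.99],
    "Purchase Address": [
        "136 Church St, New York City, NY 10001",
        "917 1st St, San Francisco, CA 94016",
    ],
})

df["Order Date"] = pd.to_datetime(df["Order Date"], format="%m/%d/%y %H:%M")
df["Month"] = df["Order Date"].dt.month                  # step 4a
df["Sales"] = df["Quantity Ordered"] * df["Price Each"]  # step 4b
df["Hour"] = df["Order Date"].dt.hour                    # step 4d
# Step 4c: keep city *and* state so cities sharing a name stay distinct.
parts = df["Purchase Address"].str.split(", ")
df["City"] = parts.str[1] + ", " + parts.str[2].str.split(" ").str[0]

# Step 5a: best month for sales.
best_month = df.groupby("Month")["Sales"].sum().idxmax()
print(best_month)
```

On the real dataset this grouping is what surfaces December as the top month; the city and hour analyses follow the same groupby pattern on the derived columns.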

    This project demonstrates the proficient use of Python libraries such as pandas and matplotlib for data manipulation, analysis, and visualization. The findings could provide valuable insights for the company to drive its sales strategy.

    Visit original content creator repository
    https://github.com/wahidupal/Sales-Data-Analysis

  • nodebox-runtime

    Nodebox

    Nodebox is a runtime for executing Node.js modules in the browser.

    Why we built Nodebox

    With sandpack-bundler, we allowed any developer anywhere to instantly create a fast, local, shareable playground inside their browser, without having to wait forever to install dependencies and fight with devtools. This improves the learning, experimentation and sharing experience of client-side JavaScript code.

    However, server-side JavaScript remained a challenge. At CodeSandbox we have solved this by using Firecracker VMs, allowing us to bring your entire development environment to the cloud regardless of the programming language or tooling you might be using. Unfortunately, as VMs live in the cloud, they require infrastructure and a network connection, resulting in a higher cost compared to our client-side sandboxes.

    To solve this problem, we built Nodebox, a runtime that runs entirely in the browser, eliminating the need for a network connection and infrastructure.

    Nodebox gives you the same user experience you have come to expect from Sandpack, which means a near-instant server-side JavaScript environment at the click of a button—built for experimentation, examples and documentation.

    The differences between a VM and Nodebox

    As mentioned in the previous section, we solved server-side sandboxes in CodeSandbox by using Firecracker VMs. In this section, we’ll explain the advantages and disadvantages of each approach.

    Advantages of VMs over Nodebox

    • You get dedicated resources, with no resource limits enforced by the browser
    • You have an entire Unix OS available
    • You can run any language, database, command
    • You can use network sockets
    • You can run large and complex projects
    • A perfect 1:1 environment as compared to your local setup (at least, if you’re using a Unix-based system)
    • No emulation, so Node.js would run exactly the same way as locally

    Advantages of Nodebox

    • No need for any infrastructure
    • No need for a network connection
    • Instant feedback to any change
    • Easy to get started
    • Easy and instant reset – simply refresh the page/iframe
    • Every page visitor gets their own Nodebox instance automatically

    What makes it different

    Nodebox is currently the only cross-browser Node.js runtime supporting all the latest browsers:

• Chrome
• Firefox
• Safari (support for iOS Safari is in beta)

    Nodebox does not emulate Node.js but is, instead, a Node.js-compatible runtime. This means that it implements as much of the Node.js API as possible while keeping a minimal performance imprint, using browser API where applicable and, in some cases, leaving out certain parts of Node.js due to browser limitations or complexity.

    Nodebox uses an internal dependency manager that is fine-tuned to deliver optimal initial load time by utilizing dependency caching via Sandpack CDN. That CDN is an open-source Rust package manager that runs in the cloud and can be self-hosted pretty easily as well.

While there are alternatives to Nodebox, they are closer to mimicking a container-based environment, running commands step-by-step, or even running an entire Linux distribution in your browser. That makes them slower and harder to use compared to Nodebox, which is optimized to run sandboxes fast and with as little friction as possible.

    Limitations

    Unfortunately, any type of runtime that does not have access to operating system-level APIs will come with certain limitations. For Nodebox, those are the following:

    • N-API modules
    • net#Sockets pointing to external IPs
    • Synchronous exec/spawn
    • async_hooks (planned for implementation)
• Automatic process exiting – users currently need to call process.exit manually before the process exits (planned for implementation)

    As we implement every module manually one by one, it is possible that some will not behave correctly. If this happens, feel free to open an issue here on GitHub and we’ll make sure to fix it.

    Documentation


    Getting started

    Nodebox is meant for usage in your client-side applications, granting them the capability of running actual Node.js code directly in the browser. Here are a couple of examples of when Nodebox can be used:

    • Building interactive examples for server-side code in your documentation;
    • Showcasing a UI component library in the actual framework it’s built for;
    • Generally any evaluation of Node.js code and previewing it in the browser.

    In the context of this tutorial, we will be working on a documentation website that illustrates different examples of using a Next.js application. Bear in mind that our documentation itself can be written in any framework of our choosing.

    Install

    Nodebox can be installed from NPM just like any other dependency:

    npm install @codesandbox/nodebox

    Setup

    Nodebox consists of two main parts:

    • A runtime environment evaluating the code;
    • A preview environment serving the result of the evaluation.

    Corresponding to these two parts, let’s create two iframes in our application:

    import { Nodebox } from '@codesandbox/nodebox';
    
    const runtime = new Nodebox({
      // Provide a reference to the <iframe> element in the DOM
      // where Nodebox should render the preview.
      iframe: document.getElementById('nodebox-iframe'),
    });
    
    // Establish a connection with the runtime environment.
    await runtime.connect();

    Learn more about the Nodebox API.

    You want to establish a single Nodebox instance across your entire application. Bear that in mind during the setup phase and consult your framework’s documentation and best practices regarding the most efficient way of achieving this.

    Previews correspond to commands executed in Nodebox, meaning that at this stage there will be no previews mounted at the given iframe because we haven’t run any commands yet. Let’s change that.

    Initialize file system

    Much like your own project, the project you create in Nodebox needs files to work with. It can be a single JavaScript file or the entire project, like Astro or Next.js.

    Let’s describe a Next.js project that we need.

    // Populate the in-memory file system of Nodebox
    // with a Next.js project files.
    await runtime.fs.init({
  'package.json': JSON.stringify({
    name: 'nextjs-preview',
    scripts: {
      dev: 'next dev',
    },
    dependencies: {
      '@next/swc-wasm-nodejs': '12.1.6',
      next: '12.1.6',
      react: '18.2.0',
      'react-dom': '18.2.0',
    },
  }),
      // On the index page, let's illustrate how server-side props
      // propagate to your page component in Next.js.
      'pages/index.jsx': `
    export default function Homepage({ name }) {
      return (
        <div>
          <h1>Hello, {name}</h1>
          <p>The name "{name}" has been received from server-side props.</p>
        </div>
      )
    }
    
    export function getServerSideProps() {
      return {
        props: {
          name: 'John'
        }
      }
    }
        `,
    });

    You can reference standard Node.js modules, as well as external dependencies while writing your project files. Note that you don’t have to install those dependencies as Nodebox will manage dependency installation, caching, and resolution automatically.

    What we did above was outline a file system state of an actual Next.js project for Nodebox to run. The last step remaining is to run Next.js.

    Run project

To run the project, we will run the npm run dev command using the Shell API provided by Nodebox.

    // First, create a new shell instance.
    // You can use the same instance to spawn commands,
    // observe stdio, restart and kill the process.
    const shell = runtime.shell.create();
    
    // Then, let's run the "dev" script that we've defined
    // in "package.json" during the previous step.
const nextProcess = await shell.runCommand('npm', ['run', 'dev']);
    
    // Find the preview by the process and mount it
    // on the preview iframe on the page.
    const previewInfo = await runtime.preview.getByShellId(nextProcess.id);
    const previewIframe = document.getElementById('preview-iframe');
    previewIframe.setAttribute('src', previewInfo.url);

    Note that you can treat shell.runCommand similar to spawn in Node.js. Learn more about the Shell API in the documentation.

    Once this command runs, it will return a shell reference we can use to retrieve the preview URL. By mounting that preview URL on our preview iframe from the setup, we can see the Next.js project running:

    That’s it! 🎉 Not a single server was spawned while running this Next.js application. Everything was managed by Nodebox directly in your browser.

    👉 Check out the Sandbox for this tutorial.

    Visit original content creator repository https://github.com/Sandpack/nodebox-runtime
  • iPick

    iPick

    Multiprocessing Peak Picking Software for UCSF NMR Spectra


    Introduction

The iPick program is available as a module for POKY and NMRFAM-SPARKY. It is highly recommended that you use the module instead of the command line tool, both for ease of use and for the extended capabilities it provides. For example, using the module you can easily select the experiment you are interested in and click a button to perform the peak picking task. Many other capabilities are built into the module; one example is the newly proposed Reliability Score feature, which can help a researcher identify noise peaks easily.


    Running the iPick module

    The module is integrated into the latest version of POKY and NMRFAM-SPARKY programs which makes starting the program much easier. To get the latest version use these links:

    To get POKY: https://poky.clas.ucdenver.edu

    To get NMRFAM-SPARKY: https://nmrfam.wisc.edu/nmrfam-sparky-distribution/

    If you are using POKY, you can open the iPick module by using the two-letter-code iP.

    If you are using NMRFAM-SPARKY, you can open the iPick module by using the two-letter-code iP. Alternatively, you can use the top menu and open “Extensions”, from there, navigate to “Peak” menu and find “iPick Peak Picker”.


    Downloading and Running the Code

    If you want to use the code from this repository, you can run the module by following these steps:

    Open a Terminal and download the code:

    git clone https://github.com/pokynmr/iPick.git
    

    Please note the directory you downloaded the code in. To find that, you can navigate to the iPick folder:

    cd iPick
    

    and then use the pwd command to see the full address:

    pwd
    

    In this case, the full address is /home/samic/iPick

    Then, inside the POKY or NMRFAM-SPARKY window, open the Python module by typing the two-letter-code py

From here, click the Load Module… button, navigate to the aforementioned directory, and select the ipick_gui_sparky.py file.

    Alternatively, you can copy-paste these two commands:

    sys.path.append('/home/samic/iPick')
    import ipick_gui_sparky
    

    (make sure to replace the address inside the single-quotations with the address of the iPick directory on your computer)

    Finally, run the module by running this command in the Python module window:

    ipick_gui_sparky.show_ipick_dialog(s)
    

    This will open the iPick window and let you use the module.


    Capabilities

The iPick module has two modes of operation: the Basic mode and the Advanced mode.

A researcher may use the Basic Mode of iPick to quickly pick signals in spectra. With the Advanced Mode, one can customize the peak picking options: positive/negative peak selection, base level selection, automatic peak import, and auto-integration by a selected integration mode. Two automated integration modes are available: Individual Fit and Group Fit. The former fits each peak individually without considering neighboring peaks, while the latter integrates all peaks considered to be neighbors. Integration Settings (two-letter-code it) specifies different integration protocols, including Gaussian, Lorentzian, and Pseudo-Voigt, as well as viable linewidth ranges.

The Peak List window generated by iPick lists the position, volume, height, S/N, linewidth, and Reliability Score (calculated from a linear combination of the volume, S/N, and linewidth) for each detected peak. By clicking the Setup button, the user can add or remove columns from the Peak List Settings window. The weighting factors used in calculating the Reliability Score can be adjusted manually from the Manual Coefficients tab. Since the Reliability Score reflects the probability that a peak is a true signal, the researcher can easily remove false signals (noise and artifact peaks) by specifying a threshold and clicking the Remove button.
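
As a rough illustration of how such a score could be computed, here is a hypothetical weighted linear combination; the coefficients and the normalization are assumptions for illustration, not iPick's actual values (those are adjustable in the Manual Coefficients tab):

```python
# Hypothetical reliability score as a weighted linear combination of
# volume, S/N, and linewidth. Coefficients are illustrative only.
def reliability_score(volume, snr, linewidth,
                      w_volume=0.4, w_snr=0.4, w_linewidth=0.2):
    # Inputs are assumed pre-normalized to [0, 1] relative to the
    # strongest peak in the spectrum.
    return w_volume * volume + w_snr * snr + w_linewidth * linewidth

# A strong, well-shaped peak scores high; a weak one scores low, so a
# single threshold can separate likely signals from likely noise.
strong = reliability_score(volume=0.9, snr=0.95, linewidth=0.8)
weak = reliability_score(volume=0.1, snr=0.05, linewidth=0.2)
print(round(strong, 2), round(weak, 2))
```

With a score like this, the Remove button's threshold becomes a single cut-off on one number instead of three separate per-column filters.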

The Cross-Validation module provides an alternative way to qualify picked peaks. Noise peaks with a decent reliability score – due to a large S/N or volume caused by experimental defects or integration errors – are likely to show up in only one spectrum, and their resonances will not appear in the other spectra. The resonances of true signals, on the other hand, tend to appear repeatedly across interrelated spectra. This fact is used to cross-validate peaks between all spectra.

Each peak is examined and corresponding peaks in other spectra are noted. This information is presented in the Peak List so that the researcher can easily locate lone peaks and remove them by clicking the “Remove Lone Peaks” button.

    The frequency of the resonances for each peak will be visualized in the Peak Histogram (Figure below) by clicking the Peak Histogram button of the cross-validation window. Less frequently occurring resonances help the user to identify false-positives. It is also possible for the user to view associated histogram bars on the histogram by selecting one or more peaks from the spectral views and clicking the Show the selected peaks button.


    Standalone usage

    The iPick program can also be run from the command line.

    Here’s a simple running example:

python iPick.py --input spectra.ucsf --output peaks.list

    There are more options that you can use with the command line script. To see a full list of these options, run the script without any options:

    python iPick.py
    

    Please note that it is highly recommended that you use the POKY or NMRFAM-SPARKY module instead of the command line tool.

Here is an example of running the iPick script:

    python iPick.py -i ~/Ubiquitin_NMRFAM/Spectra/CHSQC.ucsf -o peaks.list -r 1 --threshold 50325.0 --overwrite -c 1
    

    In this example, the input file is a CHSQC experiment and the output file (the list of the found peaks) will be named “peaks.list” in the current directory. Also, a threshold of 50325.0 has been defined. The last part of the command, “-c 1”, indicates that we want to use only one CPU process. This number can be increased as needed.


    Windows Users

    The iPick program runs natively on Mac, Linux and Windows. However, multiprocessing of Python is limited in Windows due to its kernel architecture. To overcome this issue, a user can use WSL2 (Windows Subsystem for Linux):

    https://www.windowscentral.com/how-install-wsl2-windows-10

    Alternatively, NMRbox.org can be considered. NMRbox provides cloud-based virtual machines for NMR software.


    Acknowledgments

    Citation

    Rahimi, Mehdi, Yeongjoon Lee, John L. Markley, and Woonghee Lee. “iPick: Multiprocessing software for integrated NMR signal detection and validation.” Journal of Magnetic Resonance 328 (2021): 106995. https://doi.org/10.1016/j.jmr.2021.106995

    Contributions

    • The ipick.py script was written by Dr. Woonghee Lee (University of Colorado, Denver)
    • The iPick Module was written by Dr. Mehdi Rahimi (University of Colorado, Denver)

    Funding sources

    National Science Foundation:

    • DBI 1902076 (Lee, W)
    • DBI 2051595 (Lee, W)

    University of Colorado Denver

    Visit original content creator repository https://github.com/pokynmr/iPick
  • poto-framework

    GitHub License

    poto-framework

• process-oriented to object – from procedural programming to object-oriented programming, abbreviated poto

Current state

• Microservices are very popular right now, and discussions of architecture tend to be about macro-level business design, while architecture at the code level is ignored. As a result, many projects are well partitioned into microservices, but open the code of any individual service and some of it is close to unmaintainable: everything follows a RESTful MVC code structure, sometimes even the modules are split by MVC (which still works for simple projects). Once a project becomes complex, business code accumulates day after day, developers of every level write code however they like, and the result is painful to look at; as everyone knows, the most dreaded task is reading other people's code.
• Writing procedural code in an object-oriented language.

DDD (Domain-Driven Design)

• DDD is not an architecture but an architectural mindset; poto-framework only provides framework-level constraints to lower the barrier to practicing DDD. The real focus is the partitioning of the business domain.

    DDD四层架构

    • Evans在它的《领域驱动设计:软件核心复杂性应对之道》书中推荐采用分层架构去实现领域驱动设计:

    其实这种分层架构我们早已驾轻就熟,MVC模式就是我们所熟知的一种分层架构,我们尽可能去设计每一层,使其保持高度内聚性,让它们只对下层进行依赖,体现了高内聚低耦合的思想。 分层架构的落地就简单明了了,用户界面层我们可以理解成web层的Controller,应用层和业务无关,它负责协调领域层进行工作,领域层是领域驱动设计的业务核心,包含领域模型和领域服务,领域层的重点放 在如何表达领域模型上,无需考虑显示和存储问题,基础实施层是最底层,提供基础的接口和实现,领域层和应用服务层通过基础实施层提供的接口实现类如持久化、发送消息等功能。

    • Improving the DDD layered architecture with the Dependency Inversion Principle (DIP)

      The DDD layered architecture is practical as-is, but it can still be improved. In Implementing Domain-Driven Design, Vernon proposes an improvement based on the Dependency Inversion Principle: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.

      As the diagram shows, the infrastructure layer moves above all other layers: the interfaces are defined in the other layers, and the infrastructure layer implements them. Restated for DDD, the dependency rule becomes: the domain layer and the other layers should not depend on the infrastructure layer; both should depend on abstractions, and in practice those abstract interfaces are defined in the lower layers, such as the domain layer. This implies an important implementation guideline: every abstract interface implemented by the infrastructure layer should be defined in the domain or application layer.

      Besides the benefits of DIP itself, what else does this restructuring buy us? The resulting layering is even more cohesive and loosely coupled. Each layer depends only on abstractions; the concrete implementations live in the infrastructure layer and need not be a concern. As long as the abstractions stay stable, the other layers need no changes; if an implementation must change, only the infrastructure layer is modified.
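    The inverted layering described above can be sketched in Java as follows. This is a minimal illustration, not part of poto-framework itself, and all class names (Order, OrderRepository, and so on) are hypothetical: the domain layer owns both the model and the repository abstraction, the infrastructure layer implements that abstraction, and the application layer depends only on the interface.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Domain layer: the model.
class Order {
    final String id;
    final long amountCents;
    Order(String id, long amountCents) { this.id = id; this.amountCents = amountCents; }
}

// Domain layer: the abstract interface is defined here, per the DIP guideline.
interface OrderRepository {
    void save(Order order);
    Optional<Order> findById(String id);
}

// Infrastructure layer: implements the domain-owned interface.
// Swapping this for a JPA or MyBatis implementation would touch only this layer.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();
    public void save(Order order) { store.put(order.id, order); }
    public Optional<Order> findById(String id) { return Optional.ofNullable(store.get(id)); }
}

// Application layer: coordinates the domain, depending only on the abstraction.
class PlaceOrderService {
    private final OrderRepository repository;
    PlaceOrderService(OrderRepository repository) { this.repository = repository; }
    Order placeOrder(String id, long amountCents) {
        Order order = new Order(id, amountCents);
        repository.save(order);
        return order;
    }
}
```

    Note that PlaceOrderService never names InMemoryOrderRepository; the concrete implementation is injected, so the abstraction can stay stable while the infrastructure changes underneath it.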

    Java Design Principles

    • Single Responsibility Principle
    • Dependency Inversion Principle
    • Open-Closed Principle

    What is CQRS?

    CQRS stands for Command Query Responsibility Segregation — the separation of command and query responsibilities, typically event-driven. The term was most likely coined by Greg Young, but the underlying idea is much older. At its core, CQRS is a form of read/write separation, a design pattern with a simple and clear idea. The architecture diagram is as follows:

    CQRS splits the system into two sides:

    • Command Side (writes): receives all external Insert, Update, and Delete requests and converts them into Commands; each Command changes the state of one Aggregate. Commands usually do not need to return data. Note that a “write” may involve a “read” for validation; in that case the read can be performed directly on this side rather than going through the Query Side.
    • Query Side (reads): accepts all query requests and returns data directly.
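    The two sides above can be sketched in Java as follows. This is an illustrative sketch rather than poto-framework’s actual API, and all names (Account, AccountCommandService, AccountQueryService) are hypothetical; a shared in-memory map stands in for the storage that both sides would access in a real system.

```java
import java.util.HashMap;
import java.util.Map;

// Aggregate mutated by the command side.
class Account {
    final String id;
    private long balanceCents;
    Account(String id) { this.id = id; }
    void deposit(long cents) {
        // The "read" needed for validation happens here, on the command side.
        if (cents <= 0) throw new IllegalArgumentException("amount must be positive");
        balanceCents += cents;
    }
    long balanceCents() { return balanceCents; }
}

// Command side: accepts state-changing requests; returns no data.
class AccountCommandService {
    private final Map<String, Account> store;
    AccountCommandService(Map<String, Account> store) { this.store = store; }
    void handleDeposit(String accountId, long cents) {
        store.computeIfAbsent(accountId, Account::new).deposit(cents);
    }
}

// Query side: reads directly and returns plain data, bypassing the domain model.
class AccountQueryService {
    private final Map<String, Account> store;
    AccountQueryService(Map<String, Account> store) { this.store = store; }
    long balanceOf(String accountId) {
        Account a = store.get(accountId);
        return a == null ? 0L : a.balanceCents();
    }
}
```

    The command handler returns nothing and routes every change through the aggregate, while the query service returns a plain value without touching domain behavior — the split the bullets above describe.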

    Why use CQRS?

    • The domain sits at the core of DDD: business logic and workflows are implemented through interactions between domain objects, and layering isolates the business logic so it can be maintained on its own, keeping the complexity of the business under control. But a business system also needs query functionality, and many query features turn out to be hard to implement through the domain model — most queries can be served by reading data objects (DOs) directly, and going through domain objects only adds complexity.

    Implementing DDD and CQRS

    • In this architecture, Web, RPC, DB, MQ, and other external services are treated uniformly; the infrastructure depends on the abstractions inside the circle.
    • When a Command request arrives, the application layer’s CommandService coordinates the domain layer to handle it; when a Query request arrives, it goes straight through the infrastructure implementation to the database or external services. All abstractions are defined inside the circle; all implementations live in the infrastructure.

    Usage

    Documentation

    Visit original content creator repository https://github.com/bfxyzshb/poto-framework
  • react-native-style-utilities

    react-native-style-utilities

    Fully typed hooks and utility functions for the React Native StyleSheet API

    npm i react-native-style-utilities

    ESLint Setup

    If you’re using the eslint-plugin-react-hooks plugin, add the following to the rules section of your .eslintrc.js:

    "react-hooks/exhaustive-deps": [
      "error",
      {
        additionalHooks: "(useStyle|useFlatStyle)",
      },
    ],

    useStyle

    A hook to memoize dynamic styles.

    See “Memoize!!! 💾 – a react (native) performance guide”

    Objects

    With useStyle, the style object { height: someDynamicValue } is memoized and only re-created when someDynamicValue changes, resulting in better-optimized re-renders.

    Bad

    return <View style={{ height: someDynamicValue }} />

    Good

    const style = useStyle(() => ({ height: someDynamicValue }), [someDynamicValue])
    
    return <View style={style} />

    Arrays

    useStyle can also be used to join arrays together, also improving re-render times.

    Bad

    return <View style={[styles.container, props.style, { height: someDynamicValue }]} />

    Good

    const style = useStyle(
      () => [styles.container, props.style, { height: someDynamicValue }],
      [props.style, someDynamicValue]
    );
    
    return <View style={style} />

    useFlatStyle

    Same as useStyle, but flattens (“merges”) the returned values into a simple object with StyleSheet.flatten(...).

    See “Memoize!!! 💾 – a react (native) performance guide”

    const style1 = useStyle(
      () => [styles.container, props.style, { height: someDynamicValue }],
      [props.style, someDynamicValue]
    );
    style1.borderRadius // <-- does not work, `style1` is an array!
    
    const style2 = useFlatStyle(
      () => [styles.container, props.style, { height: someDynamicValue }],
      [props.style, someDynamicValue]
    );
    style2.borderRadius // <-- works, will return 'number | undefined'

    findStyle

    A helper function to find a given style property in any style object without using expensive flattening (no StyleSheet.flatten(...)).

    function Component({ style, ...props }) {
      const borderRadius = style.borderRadius // <-- does not work, style type is complex
      const borderRadius = findStyle(style, "borderRadius") // <-- works, is 'number | undefined'
    }

    Visit original content creator repository
    https://github.com/mrousavy/react-native-style-utilities

  • zendesk-salesforce-sdk

    Zendesk SDK for Salesforce

    The Zendesk SDK for Salesforce allows your Force.com apps to call the Zendesk Core API. The library provides a set of Apex classes, such as ZendeskUsersAPI and ZendeskTicketsAPI, that model Zendesk objects such as Users and Tickets.

    View the Zendesk API documentation here https://developer.zendesk.com/rest_api/docs/core/introduction

    Included in this repository are a number of sample Visualforce pages and controllers that demonstrate in more detail how the library can be used.

    Examples

    // Create a new API connection
    ZendeskConnection zconn = ZendeskConnection.createWithAPIToken('subdomain','username','token');
    // or
    ZendeskConnection zconn = ZendeskConnection.createWithNamedCredential('named_credential');
    
    // Get recent Tickets
    ZendeskTicketsAPI zapi = new ZendeskTicketsAPI(zconn);
    ZendeskTicketsAPI.TicketsWrapper result = zapi.getTickets();
    for (ZendeskTypes.ZTicket zt : result.tickets) {
        System.debug(zt);
    }
    
    // Update a Ticket
    ZendeskTicketsAPI zapi = new ZendeskTicketsAPI(zconn);
    ZendeskTypes.ZTicket zt = new ZendeskTypes.ZTicket();
    zt.priority = ZendeskTypes.TicketPriority.urgent;
    zapi.updateTicket(12345, zt);
    
    // Get Users of an Organization
    ZendeskUsersAPI zapi = new ZendeskUsersAPI(zconn);
    ZendeskUsersAPI.UsersWrapper result = zapi.getUsersByOrganization(1122334455);
    for (ZendeskTypes.ZUser zu : result.users) {
        System.debug(zu);
    }
    
    // Search Organizations with paging options
    ZendeskOrganizationsAPI orgs_api = new ZendeskOrganizationsAPI(zconn);
    Map<String, Object> params = new Map<String, Object>{'per_page'=>20, 'page'=>2};
    ZendeskOrganizationsAPI.OrganizationsWrapper orgsWrapper = orgs_api.autocompleteSearch('searchText', params);

    Implemented Resources

    • Attachments
    • Autocomplete
    • Group Memberships
    • Groups
    • Job Statuses
    • Organization Fields
    • Organization Memberships
    • Organizations
    • Satisfaction Ratings
    • Search
    • Sessions
    • Tags
    • Ticket Comments
    • Ticket Fields
    • Ticket Forms
    • Ticket Metrics
    • Tickets
    • User Fields
    • Users

    Installation

    There are two mechanisms for installing the toolkit: as a managed package or from GitHub. Choose the managed package if you only want the Apex API library without sample code. If you are considering modifying or extending the toolkit itself or want to install the sample Visualforce pages, then installing from GitHub is a little more work, but will enable you to easily contribute code back to the project.

    Installing the Managed Package

    1. Create a new Developer Edition (DE) account at https://developer.salesforce.com/signup. You will receive an activation email – click the enclosed link to complete setup of your DE environment. This will also log you in to your new DE environment.
    2. Install the managed package into your new DE org via this URL: (email me for the latest URL. My email is listed in my GitHub profile)
    3. Go to Setup | Administration Setup | Security Controls | Remote Site Settings and add https://yoursubdomain.zendesk.com as a new remote site.

    Installing from GitHub (and using MavensMate)

    1. Clone project to your local filesystem
      $ git clone https://github.com/JmeG/zendesk-salesforce-sdk.git
    2. Drag directory into Sublime Text
    3. Right click the project root in the Sublime Text sidebar
    4. Select MavensMate > Create MavensMate Project.
    5. You will then be prompted for Salesforce.com credentials.

    Installing from GitHub (and using Eclipse)

    1. Create a new Developer Edition (DE) account at https://developer.salesforce.com/signup. You will receive an activation email – click the enclosed link to complete setup of your DE environment. This will also log you in to your new DE environment.

    2. Create a new Force.com project in the Force.com IDE using your new org’s credentials. In the ‘Choose Initial Project Contents’ dialog, select ‘Selected metadata components’, hit ‘Choose…’ and select ALL of the components in the next page. This will give you a complete project directory tree.

    3. Clone this GitHub project into the Force.com IDE project directory. You will need to clone it first to a temporary location, since git will not let you clone to a directory with existing content:

       $ git clone --no-checkout git://github.com/JmeG/zendesk-salesforce-sdk.git /path/to/your/projectdir/tmp
       $ mv /path/to/your/projectdir/tmp/.git /path/to/your/projectdir
       $ rm -rf /path/to/your/projectdir/tmp
       $ cd /path/to/your/projectdir
       $ git reset --hard HEAD
      
    4. In Eclipse, right click your project in the project explorer and click ‘Refresh’. This causes Eclipse to scan the project directory tree for changes, and the plugin syncs changes to Force.com.

    5. In your DE environment, go to Setup | App Setup | Create | Apps, click ‘Edit’ next to the Zendesk Toolkit app, scroll down, click the ‘Visible’ box next to System Administrator and hit ‘Save’. Now go to Setup | Administration Setup | Manage Users | Profiles, click on System Administrator, Object Settings, set ‘Zendesk Samples’ to ‘Default On’ and hit ‘Save’. ‘Zendesk Toolkit’ should now be available in the dropdown list of apps (top right).

    6. Go to Setup | Administration Setup | Security Controls | Remote Site Settings and add https://yoursubdomain.zendesk.com as a new remote site.

    Installing from GitHub (direct deploy)

    https://githubsfdeploy.herokuapp.com/app/githubdeploy/JmeG/zendesk-salesforce-sdk

    Visit original content creator repository
    https://github.com/JmeG/zendesk-salesforce-sdk