Planet Primates

March 09, 2015

High Scalability

The Architecture of Algolia’s Distributed Search Network

Guest post by Julien Lemoine, co-founder & CTO of Algolia, a developer friendly search as a service API.

Algolia started in 2012 as an offline search engine SDK for mobile. At that time we had no idea that within two years we would have built a worldwide distributed search network.

Today Algolia serves more than 2 billion user-generated queries per month from 12 regions worldwide. Our average server response time is 6.7ms, and 90% of queries are answered in less than 15ms. Our unavailability rate on search is below 10⁻⁶, which represents less than 3 seconds of downtime per month.

The challenges we faced with the offline mobile SDK were technical limitations imposed by the nature of mobile. These challenges forced us to think differently when developing our algorithms because classic server-side approaches would not work.

Our product has evolved greatly since then. We would like to share our experiences with building and scaling our REST API built on top of those algorithms.

We will explain how we use distributed consensus for high availability and synchronization of data across regions around the world, and how we route queries to the closest locations via anycast DNS.

The data size misconception

by Todd Hoff at March 09, 2015 03:56 PM

March 06, 2015

StackOverflow

How to convert from Scala string to Java enum

I have an API that receives a string representing a language. My Scala code (using Scalatra for the API) calls into existing Java code that I must support. This Java code expects the language to be in the form of an enum that it defines.

I could exhaustively pattern match on the string to return the proper enum element, but I have to believe there's a better way?

For example, I could do this:

      f.language.value.get.toUpperCase.split(",").map {
        case "ALL" => JavaLanguageEnum.ALL
        case "AAA" => JavaLanguageEnum.AAA
        case "BBB" => JavaLanguageEnum.BBB
        case "CCC" => JavaLanguageEnum.CCC
        case "DDD" => JavaLanguageEnum.DDD
        case "EEE" => JavaLanguageEnum.EEE
        case "FFF" => JavaLanguageEnum.FFF
        case _ => JavaLanguageEnum.ALL
      }.toList

... but that would be a pretty big piece of code to do this work. Is there a better way to simply say, "if the string matches one of the enums, return that enum so I can pass it in?"
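
For reference, here is a minimal sketch (my own, not from the original post) of that idea using the enum's built-in valueOf, wrapped in Try so unknown strings fall back to ALL as in the snippet above; JavaLanguageEnum is the enum from the question:

import scala.util.Try

f.language.value.get.toUpperCase.split(",").map { s =>
  // Enum.valueOf throws IllegalArgumentException for unknown names, so fall back to ALL.
  Try(JavaLanguageEnum.valueOf(s)).getOrElse(JavaLanguageEnum.ALL)
}.toList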

by Christopher Ambler at March 06, 2015 07:13 PM

Using trait method in the class constructor

I have a trait and a class that extends the trait. I can use the methods from the trait as follows:

trait A {
  def a = ""
}

class B(s: String) extends A {
  def b = a 
}

However, when I use the trait's method in the constructor like this:

trait A {
  def a = ""
}

class B(s: String) extends A {
  def this() = this(a) 
}

then the following error appears:

error: not found: value a

Is there some way to define default parameters for the construction of classes in the trait?
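
For what it's worth, a minimal sketch of one possible workaround (an assumption on my part, not from the original post): trait members are not in scope before the instance exists, so the default value can live on a companion object instead:

trait A {
  def a = ""
}

object A {
  // Default used by auxiliary constructors of subclasses of A.
  val defaultA = ""
}

class B(s: String) extends A {
  def this() = this(A.defaultA)
}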

by mirelon at March 06, 2015 07:10 PM

scala script execution in IDE

I have just started Scala programming. This is a basic example which, I believe, is quite popular.

I am using the Eclipse Scala IDE. I don't understand what type of file I should create for a Scala script from the menu. Is a Scala script a Scala File, a Scala Class, a Scala Worksheet, or a Scala Application? Could you please confirm?

// code-examples/IntroducingScala/shapes-actor-script.scala
import shapes._
ShapeDrawingActor.start()
ShapeDrawingActor ! new Circle(new Point(0.0,0.0), 1.0)
ShapeDrawingActor ! new Rectangle(new Point(0.0,0.0), 2, 5)
ShapeDrawingActor ! new Triangle(new Point(0.0,0.0),
 new Point(1.0,0.0),
 new Point(0.0,1.0))
ShapeDrawingActor ! 3.14159
ShapeDrawingActor ! "exit"

Thanks, Chandra

by chandra at March 06, 2015 07:09 PM

Clojure: refresh running web app when html files change

I've set up my project with lein-ring to allow hot code reload. It does work when I change any .clj file while the app is running...

How can I make it do the same for changes to any HTML, CSS, and JS files (located in resources/public...)?

Here is my project.clj set-up:

(defproject ...
  :plugins [[lein-cljsbuild "1.0.4"]
            [lein-ring "0.9.2"]]      
  :ring {:handler votepourca.server/limoilou
          :auto-reload? true
          :auto-refresh? true}
  :resource-paths ["resources" "markup"]
  :source-paths ["src/clj"]
  ...)

by leontalbot at March 06, 2015 07:07 PM

Play: populating a helper select from Java

I have to populate an HTML select with ids and labels. I also need the labels to be ordered alphabetically.

If I pass a List<String> of labels, I lose ids. If I pass a Map<String,String>, I have ids, but the ordering is not kept.

My page:

@(countries: Map[String,String], myForm: Form[JUG], title: String) @header(title)
...
@helper.select(myForm("countryId"), helper.options(countries) )

How can I populate the HTML select with ids and labels, with the labels ordered alphabetically?

I have to use the helper to keep the selected element between requests.
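
A rough sketch of one approach (assuming the template keeps the Map[String,String] signature and that helper.options follows the map's own iteration order; countriesById is a hypothetical unsorted source map): build an insertion-ordered ListMap sorted by label before passing it to the view:

import scala.collection.immutable.ListMap

// Sort the (id, label) pairs by label, then preserve that order in a ListMap.
val countries: Map[String, String] =
  ListMap(countriesById.toSeq.sortBy { case (_, label) => label }: _*)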

by Vitalij Zadneprovskij at March 06, 2015 07:05 PM

Scala and Play 2 framework hosting: what are the requirements?

I am new to Scala and the Play 2 framework and would like to know if they can be hosted on a web host that offers Tomcat, JSP, and Java Servlet hosting. I would obviously include the Scala files.

by user1591668 at March 06, 2015 07:04 PM

scala macros generating implicits

I am trying to generate some implicits via a macro; the condensed version of the macro looks like this:

object Implicits {
  def generate(c:Context):c.Expr[Unit]={
    import c.universe._
    c.Expr[Unit] {
      q"""
           object Dud{
            implicit val p:java.io.File = new java.io.File("/tmp")
             def toString():String ={ "Dud here" }
          }
          import Dud._
      """
    }
  }
}

I am using the macro:

object ImplicitTest extends App {
  def genImplicits = macro Implicits.generate
  genImplicits
  val f: File = implicitly[File]
  println(f)
}

The test bails out complaining that

ImplicitTest.scala could not find implicit value for parameter e: java.io.File
[error]   val f: File = implicitly[File]
[error]                           ^

What am I doing wrong with this macro?

Thanks

Amit

by Amit at March 06, 2015 07:01 PM

Fefe

Yesterday de Maizière apparently said on Maybrit Illner ...

Yesterday de Maizière apparently said on Maybrit Illner that we have 1000 Gefährder (individuals classified as potential threats) in Germany. In response, Tilo Jung went to the Bundespressekonferenz and asked about it.
Q: How many Gefährder are there in Germany, and which ones?

IM: "270 Personen Stand Januar 2015, die dem islamistisch-terroristischen Spektrum zuzuordnen sind, sind als Gefährder eingestuft. Vergleichsweise sei genannt: Aus dem rechtsextrem Spektrum werden 12 Personen als Gefährder eingestuft, aus dem linksextrem Spektrum momentan 6 Personen."

Now, I'm no certified mathematician, but for me that doesn't add up to 1000.

And on the occasion of this report, he wanted to know why the American nuclear bombs have to stay in Germany. The Foreign Office (Auswärtiges Amt) replied:

I don't think I should say anything about that. That is first and foremost for the Americans to decide [...] Unilateral decisions by whomever, and in any case from the German side as well, are entirely inappropriate.
I agree. Completely inappropriate. Why should we give any thought to why American nuclear bombs are stationed on our supposedly sovereign territory?

March 06, 2015 07:01 PM

CompsciOverflow

Is the following statement about Turing machines true?

Here's the statement:

Take a set of finite inputs from some alphabet. If for any two Turing machines:

  1. All inputs in the set produce the same output for both machines
  2. In both machines, the following is true: every state transition (i.e. every row in the state table) occurs at least once for at least one of the inputs in the set before halting

Then both machines will produce the same output as each other for any input.

This is an informal statement; it probably needs a bit of refining. But for now the question is: is this true?

by Ben Aaronson at March 06, 2015 06:56 PM

StackOverflow

ProGuard fails with "Warning: class [*] unexpectedly contains class [*]", despite reading the FAQ

I need to shrink the Scala library down so it can be included in an applet, but I keep getting errors like: [Being$$anon$1.class] unexpectedly contains class [stayAway.Being$$anon$1] (I get a similar error for each class in the directory).

I looked at the ProGuard FAQ, and it says that this error means my directory isn't configured properly. As far as I can tell though, it is.

I have 8 class files, all part of the stayAway package. It's set up like:

- stayAway
    - Being$$anon$1.class
    - Being$.class
    - Being.class
    - MainFrame.class
    - NPC$.class
    - NPC.class
    - Player$.class
    - Player.class

I'm using the following configuration (generated by the ProGuard GUI):

-injars 'C:\Users\Brendon\Desktop\stayAway'
-outjars 'C:\Users\Brendon\Desktop\scalaLibraryShrunk.jar'

-libraryjars 'C:\Program Files (x86)\Java\jre1.8.0_40\lib\rt.jar'
-libraryjars 'C:\Users\Brendon\Desktop\scala-library.jar'

-dontobfuscate


# Keep - Applications. Keep all application classes, along with their 'main'
# methods.
-keepclasseswithmembers public class * {
    public static void main(java.lang.String[]);
}

# Keep - Applets. Keep all extensions of java.applet.Applet.
-keep public class * extends java.applet.Applet

# Keep - Library. Keep all public and protected classes, fields, and methods.
-keep public class * {
    public protected <fields>;
    public protected <methods>;
}

# Also keep - Enumerations. Keep the special static methods that are required in
# enumeration classes.
-keepclassmembers enum  * {
    public static **[] values();
    public static ** valueOf(java.lang.String);
}

# Also keep - Database drivers. Keep all implementations of java.sql.Driver.
-keep class * extends java.sql.Driver

# Also keep - Swing UI L&F. Keep all extensions of javax.swing.plaf.ComponentUI,
# along with the special 'createUI' method.
-keep class * extends javax.swing.plaf.ComponentUI {
    public static javax.swing.plaf.ComponentUI createUI(javax.swing.JComponent);
}

# Keep names - Native method names. Keep all native class/method names.
-keepclasseswithmembers,includedescriptorclasses,allowshrinking class * {
    native <methods>;
}

# Remove - System method calls. Remove all invocations of System
# methods without side effects whose return values are not used.
-assumenosideeffects public class java.lang.System {
    public static long currentTimeMillis();
    static java.lang.Class getCallerClass();
    public static int identityHashCode(java.lang.Object);
    public static java.lang.SecurityManager getSecurityManager();
    public static java.util.Properties getProperties();
    public static java.lang.String getProperty(java.lang.String);
    public static java.lang.String getenv(java.lang.String);
    public static java.lang.String mapLibraryName(java.lang.String);
    public static java.lang.String getProperty(java.lang.String,java.lang.String);
}

# Remove - Math method calls. Remove all invocations of Math
# methods without side effects whose return values are not used.
-assumenosideeffects public class java.lang.Math {
    public static double sin(double);
    public static double cos(double);
    public static double tan(double);
    public static double asin(double);
    public static double acos(double);
    public static double atan(double);
    public static double toRadians(double);
    public static double toDegrees(double);
    public static double exp(double);
    public static double log(double);
    public static double log10(double);
    public static double sqrt(double);
    public static double cbrt(double);
    public static double IEEEremainder(double,double);
    public static double ceil(double);
    public static double floor(double);
    public static double rint(double);
    public static double atan2(double,double);
    public static double pow(double,double);
    public static int round(float);
    public static long round(double);
    public static double random();
    public static int abs(int);
    public static long abs(long);
    public static float abs(float);
    public static double abs(double);
    public static int max(int,int);
    public static long max(long,long);
    public static float max(float,float);
    public static double max(double,double);
    public static int min(int,int);
    public static long min(long,long);
    public static float min(float,float);
    public static double min(double,double);
    public static double ulp(double);
    public static float ulp(float);
    public static double signum(double);
    public static float signum(float);
    public static double sinh(double);
    public static double cosh(double);
    public static double tanh(double);
    public static double hypot(double,double);
    public static double expm1(double);
    public static double log1p(double);
}

# Remove - Number method calls. Remove all invocations of Number
# methods without side effects whose return values are not used.
-assumenosideeffects public class java.lang.* extends java.lang.Number {
    public static java.lang.String toString(byte);
    public static java.lang.Byte valueOf(byte);
    public static byte parseByte(java.lang.String);
    public static byte parseByte(java.lang.String,int);
    public static java.lang.Byte valueOf(java.lang.String,int);
    public static java.lang.Byte valueOf(java.lang.String);
    public static java.lang.Byte decode(java.lang.String);
    public int compareTo(java.lang.Byte);
    public static java.lang.String toString(short);
    public static short parseShort(java.lang.String);
    public static short parseShort(java.lang.String,int);
    public static java.lang.Short valueOf(java.lang.String,int);
    public static java.lang.Short valueOf(java.lang.String);
    public static java.lang.Short valueOf(short);
    public static java.lang.Short decode(java.lang.String);
    public static short reverseBytes(short);
    public int compareTo(java.lang.Short);
    public static java.lang.String toString(int,int);
    public static java.lang.String toHexString(int);
    public static java.lang.String toOctalString(int);
    public static java.lang.String toBinaryString(int);
    public static java.lang.String toString(int);
    public static int parseInt(java.lang.String,int);
    public static int parseInt(java.lang.String);
    public static java.lang.Integer valueOf(java.lang.String,int);
    public static java.lang.Integer valueOf(java.lang.String);
    public static java.lang.Integer valueOf(int);
    public static java.lang.Integer getInteger(java.lang.String);
    public static java.lang.Integer getInteger(java.lang.String,int);
    public static java.lang.Integer getInteger(java.lang.String,java.lang.Integer);
    public static java.lang.Integer decode(java.lang.String);
    public static int highestOneBit(int);
    public static int lowestOneBit(int);
    public static int numberOfLeadingZeros(int);
    public static int numberOfTrailingZeros(int);
    public static int bitCount(int);
    public static int rotateLeft(int,int);
    public static int rotateRight(int,int);
    public static int reverse(int);
    public static int signum(int);
    public static int reverseBytes(int);
    public int compareTo(java.lang.Integer);
    public static java.lang.String toString(long,int);
    public static java.lang.String toHexString(long);
    public static java.lang.String toOctalString(long);
    public static java.lang.String toBinaryString(long);
    public static java.lang.String toString(long);
    public static long parseLong(java.lang.String,int);
    public static long parseLong(java.lang.String);
    public static java.lang.Long valueOf(java.lang.String,int);
    public static java.lang.Long valueOf(java.lang.String);
    public static java.lang.Long valueOf(long);
    public static java.lang.Long decode(java.lang.String);
    public static java.lang.Long getLong(java.lang.String);
    public static java.lang.Long getLong(java.lang.String,long);
    public static java.lang.Long getLong(java.lang.String,java.lang.Long);
    public static long highestOneBit(long);
    public static long lowestOneBit(long);
    public static int numberOfLeadingZeros(long);
    public static int numberOfTrailingZeros(long);
    public static int bitCount(long);
    public static long rotateLeft(long,int);
    public static long rotateRight(long,int);
    public static long reverse(long);
    public static int signum(long);
    public static long reverseBytes(long);
    public int compareTo(java.lang.Long);
    public static java.lang.String toString(float);
    public static java.lang.String toHexString(float);
    public static java.lang.Float valueOf(java.lang.String);
    public static java.lang.Float valueOf(float);
    public static float parseFloat(java.lang.String);
    public static boolean isNaN(float);
    public static boolean isInfinite(float);
    public static int floatToIntBits(float);
    public static int floatToRawIntBits(float);
    public static float intBitsToFloat(int);
    public static int compare(float,float);
    public boolean isNaN();
    public boolean isInfinite();
    public int compareTo(java.lang.Float);
    public static java.lang.String toString(double);
    public static java.lang.String toHexString(double);
    public static java.lang.Double valueOf(java.lang.String);
    public static java.lang.Double valueOf(double);
    public static double parseDouble(java.lang.String);
    public static boolean isNaN(double);
    public static boolean isInfinite(double);
    public static long doubleToLongBits(double);
    public static long doubleToRawLongBits(double);
    public static double longBitsToDouble(long);
    public static int compare(double,double);
    public boolean isNaN();
    public boolean isInfinite();
    public int compareTo(java.lang.Double);
    public byte byteValue();
    public short shortValue();
    public int intValue();
    public long longValue();
    public float floatValue();
    public double doubleValue();
    public int compareTo(java.lang.Object);
    public boolean equals(java.lang.Object);
    public int hashCode();
    public java.lang.String toString();
}

# Remove - String method calls. Remove all invocations of String
# methods without side effects whose return values are not used.
-assumenosideeffects public class java.lang.String {
    public static java.lang.String copyValueOf(char[]);
    public static java.lang.String copyValueOf(char[],int,int);
    public static java.lang.String valueOf(boolean);
    public static java.lang.String valueOf(char);
    public static java.lang.String valueOf(char[]);
    public static java.lang.String valueOf(char[],int,int);
    public static java.lang.String valueOf(double);
    public static java.lang.String valueOf(float);
    public static java.lang.String valueOf(int);
    public static java.lang.String valueOf(java.lang.Object);
    public static java.lang.String valueOf(long);
    public boolean contentEquals(java.lang.StringBuffer);
    public boolean endsWith(java.lang.String);
    public boolean equalsIgnoreCase(java.lang.String);
    public boolean equals(java.lang.Object);
    public boolean matches(java.lang.String);
    public boolean regionMatches(boolean,int,java.lang.String,int,int);
    public boolean regionMatches(int,java.lang.String,int,int);
    public boolean startsWith(java.lang.String);
    public boolean startsWith(java.lang.String,int);
    public byte[] getBytes();
    public byte[] getBytes(java.lang.String);
    public char charAt(int);
    public char[] toCharArray();
    public int compareToIgnoreCase(java.lang.String);
    public int compareTo(java.lang.Object);
    public int compareTo(java.lang.String);
    public int hashCode();
    public int indexOf(int);
    public int indexOf(int,int);
    public int indexOf(java.lang.String);
    public int indexOf(java.lang.String,int);
    public int lastIndexOf(int);
    public int lastIndexOf(int,int);
    public int lastIndexOf(java.lang.String);
    public int lastIndexOf(java.lang.String,int);
    public int length();
    public java.lang.CharSequence subSequence(int,int);
    public java.lang.String concat(java.lang.String);
    public java.lang.String replaceAll(java.lang.String,java.lang.String);
    public java.lang.String replace(char,char);
    public java.lang.String replaceFirst(java.lang.String,java.lang.String);
    public java.lang.String[] split(java.lang.String);
    public java.lang.String[] split(java.lang.String,int);
    public java.lang.String substring(int);
    public java.lang.String substring(int,int);
    public java.lang.String toLowerCase();
    public java.lang.String toLowerCase(java.util.Locale);
    public java.lang.String toString();
    public java.lang.String toUpperCase();
    public java.lang.String toUpperCase(java.util.Locale);
    public java.lang.String trim();
}

# Remove - StringBuffer method calls. Remove all invocations of StringBuffer
# methods without side effects whose return values are not used.
-assumenosideeffects public class java.lang.StringBuffer {
    public java.lang.String toString();
    public char charAt(int);
    public int capacity();
    public int codePointAt(int);
    public int codePointBefore(int);
    public int indexOf(java.lang.String,int);
    public int lastIndexOf(java.lang.String);
    public int lastIndexOf(java.lang.String,int);
    public int length();
    public java.lang.String substring(int);
    public java.lang.String substring(int,int);
}

# Remove - StringBuilder method calls. Remove all invocations of StringBuilder
# methods without side effects whose return values are not used.
-assumenosideeffects public class java.lang.StringBuilder {
    public java.lang.String toString();
    public char charAt(int);
    public int capacity();
    public int codePointAt(int);
    public int codePointBefore(int);
    public int indexOf(java.lang.String,int);
    public int lastIndexOf(java.lang.String);
    public int lastIndexOf(java.lang.String,int);
    public int length();
    public java.lang.String substring(int);
    public java.lang.String substring(int,int);
}

And here's the exact error message:

ProGuard, version 5.2
Reading program directory [C:\Users\Brendon\Desktop\stayAway]
Warning: class [Being$$anon$1.class] unexpectedly contains class [stayAway.Being$$anon$1]
Warning: class [Being$.class] unexpectedly contains class [stayAway.Being$]
Warning: class [Being.class] unexpectedly contains class [stayAway.Being]
Warning: class [MainFrame.class] unexpectedly contains class [stayAway.MainFrame]
Warning: class [NPC$.class] unexpectedly contains class [stayAway.NPC$]
Warning: class [NPC.class] unexpectedly contains class [stayAway.NPC]
Warning: class [Player$.class] unexpectedly contains class [stayAway.Player$]
Warning: class [Player.class] unexpectedly contains class [stayAway.Player]
Reading library jar [C:\Program Files (x86)\Java\jre1.8.0_40\lib\rt.jar]
Reading library jar [C:\Users\Brendon\Desktop\scala-library.jar]
Warning: there were 8 classes in incorrectly named files.
         You should make sure all file names correspond to their class names.
         The directory hierarchies must correspond to the package hierarchies.
         (http://proguard.sourceforge.net/manual/troubleshooting.html#unexpectedclass)
         If you don't mind the mentioned classes not being written out,
         you could try your luck using the '-ignorewarnings' option.
Please correct the above warnings first.

This is my first time using ProGuard, so I may be overlooking something. Any help here would be appreciated.
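
One reading of the "directory hierarchies must correspond to the package hierarchies" warning above is that -injars should point at the directory containing the stayAway folder (the package root's parent), so that the path Desktop\stayAway\Being.class matches the package name stayAway. A minimal, untested change along those lines:

-injars 'C:\Users\Brendon\Desktop'

(In practice a filter would probably also be needed so ProGuard only reads the stayAway class files from that directory.)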

by Carcigenicate at March 06, 2015 06:46 PM

How to install ZeroMQ and its CPAN Perl module in Windows (ActivePerl)

I want to install and use ZeroMQ through my Perl script. I tried to install ZeroMQ from CPAN but the installation fails at the last step, saying:

ZeroMQ-0.21/eg/threaded_server.pl
---- Unsatisfied dependencies detected during ----
----         DMAKI/ZeroMQ-0.21.tar.gz         ----
    Devel::CheckLib [build_requires]
    ExtUtils::MakeMaker [build_requires]
Running make test
  Make had some problems, won't test
  Delayed until after prerequisites
Running make install
  Make had some problems, won't install
  Delayed until after prerequisites
..........
.............

blib\script\use-devel-checklib
        pl2bat.bat blib\script\use-devel-checklib
  MATTN/Devel-CheckLib-0.98.tar.gz
  C:\PROGRA~2\MICROS~1.0\VC\bin\nmake.exe -- OK
Running make test

Microsoft (R) Program Maintenance Utility Version 9.00.30729.01
Copyright (C) Microsoft Corporation.  All rights reserved.

        C:\Perl5.14\bin\perl.exe "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib\lib', 'blib\arch')" t/*.t
t/00-load.t ................... ok
t/bad-single-word-compiler.t .. ok
t/cmdline-LIBS-INC.t .......... skipped: Couldn't build a library to test against
t/custom-function.t ........... skipped: Couldn't build a library to test against
t/dash-l-libs.t ............... skipped: Couldn't build a library to test against
t/exit_with_message.t ......... ok
t/found.t .....................
t/found.t ..................... 1/6 #   Failed test 'lib => 'msvcrt''
   at t/found.t line 53.
          got: 'Can't link/include C library 'msvcrt', aborting.
 '
     expected: ''
       STDOUT:
       STDERR:

   Failed test '... and check_lib is true'
   at t/found.t line 54.

   Failed test 'lib => 'kernel32''
   at t/found.t line 53.
          got: 'Can't link/include C library 'kernel32', aborting.
 '
     expected: ''
       STDOUT:
       STDERR:

   Failed test '... and check_lib is true'
   at t/found.t line 54.

   Failed test 'lib => ['msvcrt', 'kernel32']'
   at t/found.t line 53.
          got: 'Can't link/include C library 'msvcrt', 'kernel32', aborting.
 '
     expected: ''
       STDOUT:
       STDERR:

   Failed test '... and check_lib is true'
   at t/found.t line 54.
 Looks like you failed 6 tests of 6.
t/found.t ..................... Dubious, test returned 6 (wstat 1536, 0x600)
Failed 6/6 subtests
t/headers.t ................... 1/5
   Failed test 'incpath => '.',         header => 't/inc/headerfile.h''
t/headers.t ................... 3/5 #   at t/headers.t line 47.
          got: 'Can't link/include C library 't/inc/headerfile.h', aborting.
 '
     expected: ''
       STDOUT:
       STDERR:

   Failed test 'incpath => [qw(t/inc)], header => 'headerfile.h''
   at t/headers.t line 47.
          got: 'Can't link/include C library 'headerfile.h', aborting.
 '
     expected: ''
       STDOUT:
       STDERR:

   Failed test 'INC => '-I. -It/inc',   header => 'headerfile.h''
   at t/headers.t line 47.
          got: 'Can't link/include C library 'headerfile.h', aborting.
 '
     expected: ''
       STDOUT:
       STDERR:
 Looks like you failed 3 tests of 5.
t/headers.t ................... Dubious, test returned 3 (wstat 768, 0x300)
Failed 3/5 subtests
t/multi-word-compiler.t ....... ok
t/not_found.t ................. 3/12
   Failed test 'missing 'foo' detected'
   at t/not_found.t line 39.
                   'Can't link/include C library 'msvcrt', 'foo', aborting.
 '
     doesn't match '/^Can't link/include C library 'foo'/ms'
 Looks like you failed 1 test of 12.
t/not_found.t ................. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/12 subtests

Test Summary Report
-------------------
t/found.t                   (Wstat: 1536 Tests: 6 Failed: 6)
  Failed tests:  1-6
  Non-zero exit status: 6
t/headers.t                 (Wstat: 768 Tests: 5 Failed: 3)
  Failed tests:  3-5
  Non-zero exit status: 3
t/not_found.t               (Wstat: 256 Tests: 12 Failed: 1)
  Failed test:  5
  Non-zero exit status: 1
Files=10, Tests=27,  9 wallclock secs ( 0.09 usr +  0.06 sys =  0.16 CPU)
Result: FAIL
Failed 3/10 test programs. 10/27 subtests failed.
NMAKE : fatal error U1077: 'C:\Perl5.14\bin\perl.exe' : return code '0x1'
Stop.
  MATTN/Devel-CheckLib-0.98.tar.gz
  C:\PROGRA~2\MICROS~1.0\VC\bin\nmake.exe test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
  reports MATTN/Devel-CheckLib-0.98.tar.gz
Running make install
  make test had returned bad status, won't install without force
Running install for module 'ExtUtils::MakeMaker'
Running make for M/MS/MSCHWERN/ExtUtils-MakeMaker-6.62.tar.gz
Checksum for C:\Perl5.14\cpan\sources\authors\id\M\MS\MSCHWERN\ExtUtils-MakeMaker-6.62.tar.gz ok
........
........
t/Liblist_Kid.t ........... 1/? Note (probably harmless): No library found for unreal_test
Note (probably harmless): No library found for unreal_test
Note (probably harmless): No library found for -llibtest
Note (probably harmless): No library found for -lunreal_test
Note (probably harmless): No library found for unreal_test
Note (probably harmless): No library found for dir_test
Warning: '-Ldir' changed to '-LC:/Perl5.14/cpan/build/ExtUtils-MakeMaker-6.62-IJCHpC/t/liblist/win32/dir'
Warning: '-Ldi r' changed to '-LC:/Perl5.14/cpan/build/ExtUtils-MakeMaker-6.62-IJCHpC/t/liblist/win32/di r'
Note (probably harmless): No library found for unreal_test
Note (probably harmless): No library found for unreal_test
t/Liblist_Kid.t ........... ok
...........
...........
t/MM_Win32.t .............. 1/61
   Failed test 'pasthru()'
   at t/MM_Win32.t line 273.
          got: 'PASTHRU = -nologo'
     expected: 'PASTHRU = '
 Looks like you failed 1 test of 61.
t/MM_Win32.t .............. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/61 subtests
        (less 6 skipped subtests: 54 okay)
................
................

t/xs.t .................... skipped: ExtUtils::CBuilder not installed or couldn't find a compiler

Test Summary Report
t/MM_Win32.t            (Wstat: 256 Tests: 61 Failed: 1)
  Failed test:  49
  Non-zero exit status: 1
Files=59, Tests=976, 139 wallclock secs ( 0.47 usr +  0.17 sys =  0.64 CPU)
Result: FAIL
Failed 1/59 test programs. 1/976 subtests failed.
NMAKE : fatal error U1077: 'C:\Perl5.14\bin\perl.exe' : return code '0xff'
Stop.
  MSCHWERN/ExtUtils-MakeMaker-6.62.tar.gz
  C:\PROGRA~2\MICROS~1.0\VC\bin\nmake.exe test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
  reports MSCHWERN/ExtUtils-MakeMaker-6.62.tar.gz
Running make install
  make test had returned bad status, won't install without force
Running make for D/DM/DMAKI/ZeroMQ-0.21.tar.gz
Warning: Prerequisite 'Devel::CheckLib => 0.4' for 'DMAKI/ZeroMQ-0.21.tar.gz' failed when processing 'MATTN/Devel-CheckLib-0.98.tar.gz' with 'make_test => NO'. Continuing, but chances to succeed are limited.
Warning: Prerequisite 'ExtUtils::MakeMaker => 6.62' for 'DMAKI/ZeroMQ-0.21.tar.gz' failed when processing 'MSCHWERN/ExtUtils-MakeMaker-6.62.tar.gz' with 'make_test => NO'. Continuing, but chances to succeed are limited.

  CPAN.pm: Going to build D/DM/DMAKI/ZeroMQ-0.21.tar.gz

Probing environment variables:
 + Detected ZMQ_INCLUDES from ZMQ_HOME...
 + Detected ZMQ_LIBS from ZMQ_HOME...
Probing libzmq via pkg-config ...
Package libzmq was not found in the pkg-config search path.
Perhaps you should add the directory containing `libzmq.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libzmq' found
 - No libzmq found...
Probing zeromq2 via pkg-config ...
Package zeromq2 was not found in the pkg-config search path.
Perhaps you should add the directory containing `zeromq2.pc'
to the PKG_CONFIG_PATH environment variable
No package 'zeromq2' found
 - No zeromq2 found...
Detected the following ZMQ settings:
 + ZMQ_HOME = C:\Users\kallol\Desktop\perl_test\zeromq-2.2.0
 + ZMQ_H = C:\Users\kallol\Desktop\perl_test\zeromq-2.2.0\include
 + ZMQ_INCLUDES = C:\Users\kallol\Desktop\perl_test\zeromq-2.2.0\include
 + ZMQ_LIBS = -LC:\Users\kallol\Desktop\perl_test\zeromq-2.2.0\lib
 + ZMQ_TRACE = (null)
Can't link/include C library 'zmq.h', 'zmq', aborting.
Warning: No success on command[C:\Perl5.14\bin\perl.exe Makefile.PL INSTALLDIRS=site]
  DMAKI/ZeroMQ-0.21.tar.gz
  C:\Perl5.14\bin\perl.exe Makefile.PL INSTALLDIRS=site -- NOT OK
Running make test
  Make had some problems, won't test
Running make install
  Make had some problems, won't install
Failed during this command:
 MATTN/Devel-CheckLib-0.98.tar.gz             : make_test NO
 MSCHWERN/ExtUtils-MakeMaker-6.62.tar.gz      : make_test NO
 DMAKI/ZeroMQ-0.21.tar.gz                     : writemakefile NO 'C:\Perl5.14\bin\perl.exe Makefile.PL INSTALLDIRS=site' returned status 512

The installer seems to be using nmake from Visual C++ 9.0 (Visual Studio 2008), judging by the nmake version banner above.

by Kallol at March 06, 2015 06:42 PM

UnixOverflow

Can you make a process pool with shell scripts?

Say I have a great number of jobs (dozens or hundreds) that need doing, but they're CPU intensive and only a few can be run at once. Is there an easy way to run X jobs at once and start a new one when one has finished? The only thing I can come up with is something like below (pseudo-code):

jobs=(...);
MAX_JOBS=4;
cur_jobs=0;
pids=(); # hash/associative array
while (jobs); do
    while (cur_jobs < MAX_JOBS); do
        pop and spawn job and store PID and anything else needed;
        cur_jobs++;
    done
    sleep 5;
    for each PID:
        if no longer active; then
            remove PID;
            cur_jobs--;
done

I feel like I'm over-complicating the solution, as I often do. The target system is FreeBSD, in case there is some port that does all the hard work, but a generic solution or common idiom would be preferable.

by Jason Lefler at March 06, 2015 06:42 PM

StackOverflow

How to set remoteHost in spark RetryingBlockFetcher IOException

I apologize for such an extremely long post, but I wanted to be better understood.

I have set up my cluster, where the master is on a different machine than the workers. The workers run on a fairly powerful machine. There is no firewall between these two machines.

URL: spark://MASTER_IP:7077
Workers: 10
Cores: 10 Total, 0 Used
Memory: 40.0 GB Total, 0.0 B Used
Applications: 0 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE

Before launching the app, the worker logfile contains the following (an example for one worker):

15/03/06 18:52:19 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/03/06 18:52:19 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:52:19 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:52:19 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:52:20 INFO Slf4jLogger: Slf4jLogger started
15/03/06 18:52:20 INFO Remoting: Starting remoting
15/03/06 18:52:20 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240]
15/03/06 18:52:20 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240]
15/03/06 18:52:20 INFO Utils: Successfully started service 'sparkWorker' on port 42240.
15/03/06 18:52:20 INFO Worker: Starting Spark worker WORKER_MACHINE_IP:42240 with 1 cores, 4.0 GB RAM
15/03/06 18:52:20 INFO Worker: Spark home: /home/szymon/spark
15/03/06 18:52:20 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/03/06 18:52:20 INFO WorkerWebUI: Started WorkerWebUI at http://WORKER_MACHINE_IP:8081
15/03/06 18:52:20 INFO Worker: Connecting to master spark://MASTER_IP:7077...
15/03/06 18:52:20 INFO Worker: Successfully registered with master spark://MASTER_IP:7077

I launch my application on the cluster (from the master machine):

./bin/spark-submit --class SimpleApp --master spark://MASTER_IP:7077 --executor-memory 3g --total-executor-cores 10 code/trial_2.11-0.9.jar

My app is then fetched by the workers; this is an example of the log output for a worker (@WORKER_MACHINE):

15/03/06 18:07:45 INFO ExecutorRunner: Launch command: "/usr/java/jdk1.8.0_31/bin/java" "-cp" "::/home/machine/spark/conf:/home/machine/spark/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar" "-Dspark.driver.port=56753" "-Dlog4j.configuration=file:////home/machine/spark/conf/log4j.properties" "-Dspark.driver.host=MASTER_IP" "-Xms3072M" "-Xmx3072M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://sparkDriver@MASTER_IP:56753/user/CoarseGrainedScheduler" "4" "WORKER_MACHINE_IP" "1" "app-20150306181450-0000" "akka.tcp://sparkWorker@WORKER_MACHINE_IP:45288/user/Worker"

The app tries to connect to localhost at address 127.0.0.1 instead of MASTER_IP (I believe). How can this be fixed?

15/03/06 18:58:52 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to localhost/127.0.0.1:56545
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191) 

The problem is caused by the createClient method in TransportClientFactory (which is in spark-network-common_2.10-1.2.1-sources.jar), where String remoteHost is set to localhost:

/**
 * Create a {@link TransportClient} connecting to the given remote host / port.
 *
 * We maintains an array of clients (size determined by spark.shuffle.io.numConnectionsPerPeer)
 * and randomly picks one to use. If no client was previously created in the randomly selected
 * spot, this function creates a new client and places it there.
 *
 * Prior to the creation of a new TransportClient, we will execute all
 * {@link TransportClientBootstrap}s that are registered with this factory.
 *
 * This blocks until a connection is successfully established and fully bootstrapped.
 *
 * Concurrency: This method is safe to call from multiple threads.
 */
public TransportClient createClient(String remoteHost, int remotePort) throws IOException {
  // Get connection from the connection pool first.
  // If it is not found or not active, create a new one.
  final InetSocketAddress address = new InetSocketAddress(remoteHost, remotePort);
  .
  .
  .
  clientPool.clients[clientIndex] = createClient(address);
}

Here is the spark-env.sh file on the worker side:

export SPARK_HOME=/home/szymon/spark
export SPARK_MASTER_IP=MASTER_IP
export SPARK_MASTER_WEBUI_PORT=8081
export SPARK_LOCAL_IP=WORKER_MACHINE_IP
export SPARK_DRIVER_HOST=WORKER_MACHINE_IP
export SPARK_LOCAL_DIRS=/home/szymon/spark/slaveData
export SPARK_WORKER_INSTANCES=10
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=4g
export SPARK_WORKER_DIR=/home/szymon/spark/work

And on the master

export SPARK_MASTER_IP=MASTER_IP
export SPARK_LOCAL_IP=MASTER_IP
export SPARK_MASTER_WEBUI_PORT=8081
export SPARK_JAVA_OPTS="-Dlog4j.configuration=file:////home/szymon/spark/conf/log4j.properties -Dspark.driver.host=MASTER_IP"
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=10"

This is the full log output with more details

15/03/06 18:58:50 INFO Worker: Asked to launch executor app-20150306190555-0000/0 for Simple Application
15/03/06 18:58:50 INFO ExecutorRunner: Launch command: "/usr/java/jdk1.8.0_31/bin/java" "-cp" "::/home/szymon/spark/conf:/home/szymon/spark/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar" "-Dspark.driver.port=49407" "-Dlog4j.configuration=file:////home/szymon/spark/conf/log4j.properties" "-Dspark.driver.host=MASTER_IP" "-Xms3072M" "-Xmx3072M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://sparkDriver@MASTER_IP:49407/user/CoarseGrainedScheduler" "0" "WORKER_MACHINE_IP" "1" "app-20150306190555-0000" "akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240/user/Worker"
15/03/06 18:58:50 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/03/06 18:58:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/03/06 18:58:51 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:58:51 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:58:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:58:51 INFO Slf4jLogger: Slf4jLogger started
15/03/06 18:58:51 INFO Remoting: Starting remoting
15/03/06 18:58:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@WORKER_MACHINE_IP:52038]
15/03/06 18:58:51 INFO Utils: Successfully started service 'driverPropsFetcher' on port 52038.
15/03/06 18:58:52 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/03/06 18:58:52 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:58:52 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/03/06 18:58:52 INFO Slf4jLogger: Slf4jLogger started
15/03/06 18:58:52 INFO Remoting: Starting remoting
15/03/06 18:58:52 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/03/06 18:58:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@WORKER_MACHINE_IP:37114]
15/03/06 18:58:52 INFO Utils: Successfully started service 'sparkExecutor' on port 37114.
15/03/06 18:58:52 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@MASTER_IP:49407/user/CoarseGrainedScheduler
15/03/06 18:58:52 INFO WorkerWatcher: Connecting to worker akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240/user/Worker
15/03/06 18:58:52 INFO WorkerWatcher: Successfully connected to akka.tcp://sparkWorker@WORKER_MACHINE_IP:42240/user/Worker
15/03/06 18:58:52 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
15/03/06 18:58:52 INFO Executor: Starting executor ID 0 on host WORKER_MACHINE_IP
15/03/06 18:58:52 INFO SecurityManager: Changing view acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: Changing modify acls to: szymon
15/03/06 18:58:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(szymon); users with modify permissions: Set(szymon)
15/03/06 18:58:52 INFO AkkaUtils: Connecting to MapOutputTracker: akka.tcp://sparkDriver@MASTER_IP:49407/user/MapOutputTracker
15/03/06 18:58:52 INFO AkkaUtils: Connecting to BlockManagerMaster: akka.tcp://sparkDriver@MASTER_IP:49407/user/BlockManagerMaster
15/03/06 18:58:52 INFO DiskBlockManager: Created local directory at /home/szymon/spark/slaveData/spark-b09c3727-8559-4ab8-ab32-1f5ecf7aeaf2/spark-0c892a4d-c8b9-4144-a259-8077f5316b52/spark-89577a43-fb43-4a12-a305-34b267b01f8a/spark-7ad207c4-9d37-42eb-95e4-7b909b71c687
15/03/06 18:58:52 INFO MemoryStore: MemoryStore started with capacity 1589.8 MB
15/03/06 18:58:52 INFO NettyBlockTransferService: Server created on 51205
15/03/06 18:58:52 INFO BlockManagerMaster: Trying to register BlockManager
15/03/06 18:58:52 INFO BlockManagerMaster: Registered BlockManager
15/03/06 18:58:52 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@MASTER_IP:49407/user/HeartbeatReceiver
15/03/06 18:58:52 INFO CoarseGrainedExecutorBackend: Got assigned task 0
15/03/06 18:58:52 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
15/03/06 18:58:52 INFO Executor: Fetching http://MASTER_IP:57850/jars/trial_2.11-0.9.jar with timestamp 1425665154479
15/03/06 18:58:52 INFO Utils: Fetching http://MASTER_IP:57850/jars/trial_2.11-0.9.jar to /home/szymon/spark/slaveData/spark-b09c3727-8559-4ab8-ab32-1f5ecf7aeaf2/spark-0c892a4d-c8b9-4144-a259-8077f5316b52/spark-411cd372-224e-44c1-84ab-b0c3984a6361/fetchFileTemp7857926599487994869.tmp
15/03/06 18:58:52 INFO Utils: Copying /home/szymon/spark/slaveData/spark-b09c3727-8559-4ab8-ab32-1f5ecf7aeaf2/spark-0c892a4d-c8b9-4144-a259-8077f5316b52/spark-411cd372-224e-44c1-84ab-b0c3984a6361/-19284804851425665154479_cache to /home/szymon/spark/work/app-20150306190555-0000/0/./trial_2.11-0.9.jar
15/03/06 18:58:52 INFO Executor: Adding file:/home/szymon/spark/work/app-20150306190555-0000/0/./trial_2.11-0.9.jar to class loader
15/03/06 18:58:52 INFO TorrentBroadcast: Started reading broadcast variable 0
15/03/06 18:58:52 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to localhost/127.0.0.1:56545
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:78)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.start(RetryingBlockFetcher.java:120)
    at org.apache.spark.network.netty.NettyBlockTransferService.fetchBlocks(NettyBlockTransferService.scala:87)
    at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:89)
    at org.apache.spark.storage.BlockManager$$anonfun$doGetRemote$2.apply(BlockManager.scala:595)
    at org.apache.spark.storage.BlockManager$$anonfun$doGetRemote$2.apply(BlockManager.scala:593)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.storage.BlockManager.doGetRemote(BlockManager.scala:593)
    at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:587)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.org$apache$spark$broadcast$TorrentBroadcast$$anonfun$$getRemote$1(TorrentBroadcast.scala:126)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1$$anonfun$1.apply(TorrentBroadcast.scala:136)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1$$anonfun$1.apply(TorrentBroadcast.scala:136)
    at scala.Option.orElse(Option.scala:257)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply$mcVI$sp(TorrentBroadcast.scala:136)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:119)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$org$apache$spark$broadcast$TorrentBroadcast$$readBlocks$1.apply(TorrentBroadcast.scala:119)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$readBlocks(TorrentBroadcast.scala:119)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:174)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1090)
    at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:164)
    at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
    at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
    at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:87)
    at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:58)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:56545
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:208)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:287)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    ... 1 more
15/03/06 18:58:52 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
15/03/06 18:58:57 ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks (after 1 retries)
java.io.IOException: Failed to connect to localhost/127.0.0.1:56545
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:78)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:56545
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:208)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:287)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    ... 1 more
.
.
.
15/03/06 19:00:22 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
15/03/06 19:00:24 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@WORKER_MACHINE_IP:37114] -> [akka.tcp://sparkDriver@MASTER_IP:49407] disassociated! Shutting down.
15/03/06 19:00:24 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@MASTER_IP:49407] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/03/06 19:00:24 INFO Worker: Asked to kill executor app-20150306190555-0000/0
15/03/06 19:00:24 INFO ExecutorRunner: Runner thread for executor app-20150306190555-0000/0 interrupted
15/03/06 19:00:24 INFO ExecutorRunner: Killing process!
15/03/06 19:00:25 INFO Worker: Executor app-20150306190555-0000/0 finished with state KILLED exitStatus 1
15/03/06 19:00:25 INFO Worker: Cleaning up local directories for application app-20150306190555-0000
15/03/06 19:00:25 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkExecutor@WORKER_MACHINE_IP:37114] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/03/06 19:00:25 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkWorker/deadLetters] to Actor[akka://sparkWorker/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkWorker%40WORKER_MACHINE_IP%3A45806-2#1549100100] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

There is also a warning, which I believe is unrelated to this issue:

15/03/06 18:07:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

by Szymon Roziewski at March 06, 2015 06:39 PM

CompsciOverflow

Regular expression question

I have the following question in my homework but I'm not sure what the answer is. I'd appreciate it if someone could help me.

The question is as follows:

The language of regular expression (0+10)∗ is the set of all strings of 0's and 1's such that every 1 is immediately followed by a 0. Identify in the list below the regular expression whose language is the complement of L((0+10)∗).

Select one:

A.) 0∗11(0+1)∗+(0+1)∗1

B.) (0+10)∗1(ε+11(0+1)∗)

C.) (0+10)∗11(0+10)∗+(0+1)∗1

D.) (0+10)∗(1+11(0+1)∗)*

Thank you for your time

by kyriacoss at March 06, 2015 06:38 PM

Turing Machine That Accepts Machines With Undecidable Languages

So I'm reviewing my Computability notes for my final, and I understand how reduction arguments work, but I'm having trouble framing one for the following Turing machine: Undecidable_TM = { ⟨M⟩ | L(M) is undecidable } (in words, a Turing machine that accepts encodings of machines that accept undecidable languages).

I'm trying to reduce the acceptance problem (A_TM) to Undecidable_TM in the following manner:

  1. Take Turing machine M and string x as inputs.
  2. Create M' that works as follows: For input y, M' simulates M on x. If M accepts x, then M' acts on y as some Turing machine that accepts an undecidable language. Else, M rejects y.
  3. Pass M' to Undecidable_TM. Undecidable_TM accepts M' if and only if M accepts x; otherwise M' does not accept an undecidable language. Thus, we could decide whether a machine M accepts input x.

The problem is that I cannot create a machine that accepts an undecidable language to put inside M' for this reduction. I have read my lecture notes and looked for advice on Google, but haven't made any headway. I'd appreciate it if someone has some insight on how to finish this proof or approach this another way. Thanks a lot.

by Impossibility at March 06, 2015 06:37 PM

How to find a subset of potentially maximal vectors (of numbers) in a set of vectors

I have a set S (so no duplicates) of d-dimensional vectors of non-negative real numbers (or if you would prefer, floats).

I say a vector u "covers" a vector v if, in every dimension 1..d, u[i] >= v[i]. So for d=3, (3,3,2) covers (2, 3, 1), but (3, 3, 1) doesn't cover (2, 2, 2).

I am interested in finding a subset T of S, such that for every v in T, there is no u in S with u != v, such that u covers v. Alternatively, I'm interested in removing from S those vectors that are covered by other vectors in S.

What is an efficient algorithm for this? Barring that, what is at least the "real name" of this problem to help me search for it?

An O(n^2) algorithm is obvious: for every vector in S, check it against every other vector in S, and add it to T if nothing covers it. I'm having trouble doing much better than that.
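
For concreteness, a small sketch of that naive quadratic approach (my own illustration, representing vectors as Array[Double]):

def covers(u: Array[Double], v: Array[Double]): Boolean =
  u.indices.forall(i => u(i) >= v(i))

// Keep only the vectors of s that are not covered by any other vector in s.
def maximal(s: Seq[Array[Double]]): Seq[Array[Double]] =
  s.filter(v => !s.exists(u => !u.sameElements(v) && covers(u, v)))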

I have considered trying to use kd-trees and range trees, but this is for a real world problem, and in practice d is too high and the number of vectors too low, so that e.g. the O(n * log^d(n)) running time is actually worse than the naive O(n^2). (d is approximately 20-40, and the number of vectors is in the millions).

by James Dowdell at March 06, 2015 06:37 PM

Spanning tree with chosen leaves

I'm working on the following problem:

Suppose that we're given a connected, undirected graph $G = (V, E)$ with edge weights $w_e$ and a subset of vertices $U \subset V$. We want to find the lightest spanning tree in which the nodes of $U$ are leaves (they may be other leaves as well). We want to do so in $O(|E|\log(|V|))$ time.

Here's my thinking: since every node $v \in U$ must be a leaf, there must exist a vertex $u \in V \setminus U$ that is the source (i.e. each leaf in $U$ is connected to $u$). However, I'm having trouble finding a way to do this that doesn't involve running a polynomial time algorithm. Can anyone help?

by Dave at March 06, 2015 06:36 PM

StackOverflow

ZeroMQ word count app gives error when you compile in spark 1.2.1

I'm trying to set up a ZeroMQ data stream to Spark. Basically I took the ZeroMQWordCount.scala app and tried to recompile it and run it.

I have zeromq 2.1 installed and Spark 1.2.1. Here is my Scala code:

package org.apache.spark.examples.streaming

import akka.actor.ActorSystem
import akka.actor.actorRef2Scala
import akka.zeromq._
import akka.zeromq.Subscribe
import akka.util.ByteString

import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.zeromq._

import scala.language.implicitConversions
import org.apache.spark.SparkConf

object ZmqBenchmark {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: ZmqBenchmark <zeroMQurl> <topic>")
      System.exit(1)
    }
    //StreamingExamples.setStreamingLogLevels()
    val Seq(url, topic) = args.toSeq
    val sparkConf = new SparkConf().setAppName("ZmqBenchmark")
    // Create the context and set the batch size
    val ssc = new StreamingContext(sparkConf, Seconds(2))

    def bytesToStringIterator(x: Seq[ByteString]) = (x.map(_.utf8String)).iterator

    // For this stream, a zeroMQ publisher should be running.
    val lines = ZeroMQUtils.createStream(ssc, url, Subscribe(topic), bytesToStringIterator _)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

and this is my .sbt file for dependencies:

name := "ZmqBenchmark"

version := "1.0"

scalaVersion := "2.10.4"

resolvers += "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/"

resolvers += "Sonatype (releases)" at "https://oss.sonatype.org/content/repositories/releases/"

libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.2.1"

libraryDependencies += "org.apache.spark"  %% "spark-streaming" % "1.2.1"

libraryDependencies += "org.apache.spark" % "spark-streaming-zeromq_2.10" % "1.2.1"

libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.2.0"

libraryDependencies += "org.zeromq" %% "zeromq-scala-binding" % "0.0.6"

libraryDependencies += "com.typesafe.akka" % "akka-zeromq_2.10.0-RC5" % "2.1.0-RC6"

libraryDependencies += "org.apache.spark" % "spark-examples_2.10" % "1.1.1"

libraryDependencies += "org.spark-project.zeromq" % "zeromq-scala-binding_2.11" % "0.0.7-spark"

The application compiles without any errors using sbt package; however, when I run the application with spark-submit, I get an error:

zaid@zaid-VirtualBox:~/spark-1.2.1$ ./bin/spark-submit --master local[*] ./zeromqsub/example/target/scala-2.10/zmqbenchmark_2.10-1.0.jar tcp://127.0.0.1:5553 hello
15/03/06 10:21:11 WARN Utils: Your hostname, zaid-VirtualBox resolves to a loopback address: 127.0.1.1; using 192.168.220.175 instead (on interface eth0)
15/03/06 10:21:11 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/zeromq/ZeroMQUtils$
    at ZmqBenchmark$.main(ZmqBenchmark.scala:78)
    at ZmqBenchmark.main(ZmqBenchmark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.streaming.zeromq.ZeroMQUtils$
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 9 more

Any ideas why this happens? I know the app should work, because when I run the same example using the run-example script and point to the ZeroMQWordCount app from Spark, it runs without the exception. My guess is that the sbt file is incorrect; what else do I need to have in the sbt file?
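For what it's worth, sbt package only bundles the project's own classes, so a NoClassDefFoundError at runtime usually means the spark-streaming-zeromq artifact (and its transitive dependencies) never reaches spark-submit's classpath. One common route is a fat jar via sbt-assembly; the sketch below is only illustrative (the plugin version and the provided/bundled split are assumptions, not taken from the question):

// project/plugins.sbt -- version is illustrative, match it to your sbt release
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")

// build.sbt -- keep Spark itself out of the fat jar (spark-submit provides it),
// but leave spark-streaming-zeromq as a normal dependency so it gets bundled
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1" % "provided"

libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.2.1" % "provided"

libraryDependencies += "org.apache.spark" %% "spark-streaming-zeromq" % "1.2.1"

Running sbt assembly then produces a single jar under target/ that can be handed to spark-submit in place of the thin jar built by sbt package.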

Thanks

by zaidrickk at March 06, 2015 06:28 PM

Example of Applicative composition in Scala

This is a followup to my old questions:

I know that monads are not composable, i.e. if M1[_] and M2[_] are monads M2[M1[_]] is not necessarily a monad. For instance, List[Int] and Option[Int] are monads but Option[List[Int]] is not automatically a monad and therefore I need a monad transformer to use it as a monad (as in here)

I know that applicative functors are composable. I guess it means that if A1[_] and A2[_] are applicatives then A2[A1[_]] is always an applicative. Is it correct ?

Could you provide an example of such a composition when A1 is List and A2 is Option? Could you give an example of other applicatives being composed?
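A small sketch of what the composition looks like with scalaz 7.x (assuming its Applicative type class and compose method):

import scalaz._, Scalaz._

// An Applicative for Option[List[_]], assembled purely from the two component instances.
val optionList = Applicative[Option].compose[List]

optionList.point(1)                                              // Some(List(1))
optionList.apply2(Option(List(1, 2)), Option(List(10)))(_ + _)   // Some(List(11, 12))

The same compose call works for any two Applicative instances, which is the sense in which applicatives, unlike monads, compose for free.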

by Michael at March 06, 2015 06:24 PM

StackOverflow

Dynamic handler update in Clojure Ring/Compojure REPL

I've created a new Compojure Leiningen project using lein new compojure test. Web server is run by lein repl and then

user=> (use 'ring.adapter.jetty)
user=> (run-jetty test.handler/app {:port 3000})

Routes and app handler specification is trivial:

(defroutes app-routes
  (GET "/*.do" [] "Dynamic page")
  (route/not-found "Not Found"))

(def app
  (wrap-defaults app-routes site-defaults))

Now, after changing anything in the app-routes definition (e.g. changing the "Dynamic page" text to anything else, or modifying the URI matching string), I do not get the updated text/routes in the browser. But, when changing the app-routes definition slightly to

(defn dynfn [] "Dynamic page fn")
(defroutes app-routes
  (GET "/*.do" [] (dynfn))
  (route/not-found "Not Found"))

I do get dynamic updates when changing the return value of dynfn. Also, following the advice from this article and modifying the app definition to

(def app
  (wrap-defaults #'app-routes site-defaults))

(note the #' that transparently creates a var for app-routes) also helps!

Why is that so? Is there any other way one could get a truly dynamic behaviour in defroutes?

Thanks!

by siphiuel at March 06, 2015 06:17 PM

How to show the schema (including types) of a Parquet file from the command line or spark shell?

I have determined how to use the spark-shell to show the field names, but it's ugly and does not include the types:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

println(sqlContext.parquetFile(path))

prints:

ParquetTableScan [cust_id#114,blar_field#115,blar_field2#116], (ParquetRelation /blar/blar), None
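For comparison, a minimal sketch in the spark-shell that prints the field names together with their types (printSchema renders the schema as an indented tree):

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// prints one line per field, e.g. " |-- cust_id: long (nullable = true)"
sqlContext.parquetFile(path).printSchema()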

by samthebest at March 06, 2015 06:12 PM

Fefe

Quick announcement from the Iranian foreign minister: “It ...

Quick announcement from the Iranian foreign minister:
“It is truly, truly regrettable that bigotry gets to the point of making allegations against an entire nation which has saved Jews three times in its history: once during that time of a prime minister who was trying to kill the Jews, and the king saved the Jews; again during the time of Cyrus the Great, where he saved the Jews from Babylon, and during the Second World War, where Iran saved the Jews.”

“Every 150,000 Iranian Muslims has a representative in the parliament, whereas less than 20,000 Jews in Iran have a representative in the parliament. So we’re not about annihilation of Jews.

March 06, 2015 06:01 PM

Current cloud bullshit overview: In the public cloud ...

Current cloud bullshit overview:
In the public cloud, groupware is likewise in front with 46 percent; more popular here, with 36 percent, are applications for customer management, and with 23 percent Security as a Service
Bingo!

March 06, 2015 06:01 PM

News from airport security: Further, the system ...

News from airport security:
Further, the system is capable of detecting an enormous amount of the scannee's highly sensitive personal medical information, ranging from detection of arrhythmias and cardiovascular disease, to asthma and respiratory failures, physiological abnormalities, psychiatric conditions, or even a woman's stage in her ovulation cycle.
Wait, what?!

March 06, 2015 06:01 PM

It's worth taking a look through your mail logs now and then. ...

It's worth taking a look through your mail logs now and then. You sometimes find really funny things in there. Like this one, for example:
I regularly get these log entries on our mail servers here:

TLS error on connection to mx02.t-online.de [194.25.134.9] (gnutls_handshake): The Diffie-Hellman prime sent by the server is not acceptable (not long enough).
TLS session failure: delivering unencrypted to mx02.t-online.de [194.25.134.9] (not in hosts_require_tls)

We run exim on Debian wheezy and squeeze practically out of the box and get the same log output on all servers.

According to http://gnutls.org/faq.html, gnutls requires a DH group size of at least 768 bits, which on our servers fails only for t-online.de, rzone.de ("CrononAG-Professional IT-Services") and relay2.it.nrw.de.

If only we could work with professionals for once!

Hey, wait a minute, aren't those the people behind "E-Mail made in Germany"?

March 06, 2015 06:01 PM

What was the NSA actually searching for in Project ...

What was the NSA actually searching for in Project Eikonal? You'll NEVER guess!1!!
On Thursday, the retired brigadier general confirmed in the Bundestag's NSA committee of inquiry a report according to which the US intelligence service had, in the Eikonal cooperation project, also targeted "EADS", "Eurocopter" and "French authorities", among others.
Well, we haven't had such a crystal-clear case of counter-terrorism in a long time!

March 06, 2015 06:01 PM

CompsciOverflow

Why can this recurrence be solved by the master method? [duplicate]

This question already has an answer here:

I have studied the following recurrence. The ratio between $f(n)$ and $n^{\log_b a}$ is $\log n$, so the difference is not polynomial, but I have read in my book that it can nevertheless be solved by the master method.

$T(n) = 2T(n/2) + n \log n$

On the other hand, the ratio in $T(n) = 2T(n/2) + n/\log n$ is $1/\log n$, so this also has a non-polynomial difference, yet it cannot be solved by the master method. Because of this, I am confused about how to recognize which recurrences can be solved by the master method. Please help.
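For reference, the extended "case 2" of the master theorem covers the first recurrence: if $f(n) = \Theta(n^{\log_b a} \log^k n)$ for some constant $k \ge 0$, then $T(n) = \Theta(n^{\log_b a} \log^{k+1} n)$. Here $a = b = 2$, so $n^{\log_b a} = n$ and $k = 1$, giving $T(n) = \Theta(n \log^2 n)$. For $f(n) = n/\log n$ the corresponding exponent would be $k = -1 < 0$, which this statement does not cover.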

by user4129542 at March 06, 2015 05:55 PM

Why is there a light source?

In ray tracing, if we are tracing a ray from the camera to the object, why is there a need for a light source, as in this image? And why are the rays going towards the light source instead of coming out of it?

[Image: Ray Tracing]

by Mr.Grey at March 06, 2015 05:49 PM

StackOverflow

Using Om, Generate Nested divs From a Nested Map

Say I have the following map:

(def m {"a" {"d" {}
             "e" {}}
        "b" {"f" {}
             "g" {"h" {}
                  "i" {}}}
        "c" {}})

I need to render it like this:

(om.dom/div #js {} "a" 
  (om.dom/div #js {} "d")
  (om.dom/div #js {} "e"))

(om.dom/div #js {} "b" 
  (om.dom/div #js {} "f")
  (om.dom/div #js {} "g"
    (om.dom/div #js {} "h")
    (om.dom/div #js {} "i")))

(om.dom/div #js {} "c")

How would I go about doing this? I have messed around with clojure.walk, but couldn't get it to call om.dom/div on the leaves first, then the direct parents, etc.

I am thinking that the solution might involve mapping a recursive function over the vals of a given sub-map. It would break the map apart until it sees a leaf and then bubble the om.dom/div calls back up the map.

So far I have this function:

(defn build-tree [data]
  (apply dom/div #js {}
         (->> (clojure.walk/postwalk
                #(cond (map? %) (vec %)
                       (vector? %) %
                       (string? %) %) data)
              (clojure.walk/postwalk
                #(if (and (vector? %) (string? (first %)))
                   (apply dom/div #js {} %) %)))))

Which results in: [screenshot not reproduced here]

With this in the inspector: [screenshot not reproduced here]
Bonus points for generating nested dom/ul and dom/li elements..

by broma0 at March 06, 2015 05:44 PM

CompsciOverflow

Collecting data(images) using a crawler [on hold]

Please let me know, if this goes here, if not, please point out where I should post this. Thanks in advance.

So I require huge number of training data, mostly images. This is a pet project, only for learning. So how and where do I collect them, free of cost?

Will it be suitable if I use a crawler and pull out images, tagged with, say for example: "Cat". Or is there another way to do this.

Thanks in advance.

by Vigneshwaren at March 06, 2015 05:40 PM

TheoryOverflow

Machine learning classifiers

I have been trying to find a good summary for the usage of popular classifiers, kind of like rules of thumb for when to use which classifier. For example, if there are lots of features, if there are millions of samples, if there are streaming samples coming in, etc., which classifier would be better suited in which scenarios?

Any help is appreciated.

by AD.Net at March 06, 2015 05:33 PM

/r/clojure

Small Question about integers in hashmaps

Sorry for a lame question but I'm learning clojure and I don't have anyone to ask. Please tell me if there's a better place to post.

I'm trying to use an integer, or a stringified integer, as a key in a hash-map, but it doesn't work. Here's the code.

(def hashed-links (atom {}))

(defn hashLink [link]
  (str (hash link)))

(defn createDBLink [link]
  (let [hashed-link (hashLink link)]
    (swap! hashedLinks assoc hashed-link link)))

(println @hashed-links)                              ; prints {}
(createDBLink "red")                                 ; prints nil
(println @hashed-links)                              ; prints {112785 "red"}
(println (hashLink "red"))                           ; prints 112785
(println (@hashed-links (keyword (hashLink "red")))) ; prints nil
(println (@hashed-links (hashLink "red")))           ; prints nil
(swap! hashed-links assoc "hello" "world")
(println (@hashed-links "hello"))                    ; prints world

I don't understand why that hashed value doesn't return my key. I tried coercing it into a keyword and into a string in my hashlink function but it doesn't work.

What am I doing wrong? What don't I understand about what is going on?

Here's a codeshare: http://www.codeshare.io/rg0vn

submitted by bills_anabranch

March 06, 2015 05:29 PM

StackOverflow

Why are "pure" functions called "pure"? [on hold]

A pure function is one that has no side effects -- it cannot do any kind of I/O and it cannot modify the state of anything -- and it is referentially transparent -- when called multiple times with the same inputs, it always gives the same outputs.

Why is the word "pure" used to describe functions with those properties? Who first used the word "pure" in that way, and when? Are there other words that mean roughly the same thing?

by MatrixFrog at March 06, 2015 05:23 PM

Planet Clojure

Survey, etc

Just a few quick notes on recent Immutant happenings...

Survey

Yesterday we published a short survey to help us gauge how folks have been using Immutant. Please take a few moments to complete it if you haven't already.

Luminus

The Luminus web toolkit now includes an Immutant profile in its Leiningen project template, so you can now do this:

$ lein new luminus yourapp +immutant
$ cd yourapp
$ lein run -dev
      

That -dev option is triggering the use of immutant.web/run-dmc instead of immutant.web/run so it should plop you in your browser with code-reloading enabled. You can pass most of the other run options on the command line as well, e.g.

$ lein run port 3000
      

Beta2 bugs

In our last release, 2.0.0-beta2, we updated our dependency on the excellent potemkin library to version 0.3.11. Unfortunately, that exposed a bug whenever clojure.core/find was used on our Ring request map. Fortunately, it was already fixed in potemkin's HEAD, and Zach was kind enough to release 0.3.12. We've bumped up to that in our incrementals and hence our next release.

We've also fixed a thing or two to improve async support when running inside WildFly.

Plans

We're still hoping to release 2.0.0-Final within a month or so. Now would be a great time to kick the tires on beta2 or the latest incremental to ensure it's solid when we do!

by Jim Crossley at March 06, 2015 05:16 PM

Immutant 2 (The Deuce) Beta2 Released

We're just bananas to announce The Deuce's second beta: Immutant 2.0.0-beta2. At this point, we feel pretty good about the stability of the API, the performance, and the compatibility with both WildFly 8 and the forthcoming WildFly 9.

We expect a final release before spring (in the Northern Hemisphere). We would appreciate all interested parties to try out this release and submit whatever issues you find. And again, big thanks to all our early adopters who provided invaluable feedback on the alpha, beta, and incremental releases.

What is Immutant?

Immutant is an integrated suite of Clojure libraries backed by Undertow for web, HornetQ for messaging, Infinispan for caching, Quartz for scheduling, and Narayana for transactions. Applications built with Immutant can optionally be deployed to a WildFly cluster for enhanced features. Its fundamental goal is to reduce the inherent incidental complexity in real world applications.

What's changed in this release?

The biggest change in this release is a new API for communicating with web clients asynchronously, either via an HTTP stream, over a WebSocket, or using Server-Sent Events. As part of this change, the immutant.web.websocket namespace has been removed, but wrap-websocket still exists, and has been moved to immutant.web.middleware. For more details, see the web guide.

In conjunction with this new API, we've submitted changes to Sente that will allow you to use its next release with Immutant.

For a full list of changes, see the issue list below.

How to try it

If you're already familiar with Immutant 1.x, you should take a look at our migration guide. It's our attempt at keeping track of what we changed in the Clojure namespaces.

The guides are another good source of information, along with the rest of the apidoc.

For a working example, check out our Feature Demo application!

Get It

There is no longer any "installation" step as there was in 1.x. Simply add the relevant dependency to your project as shown on Clojars. See the installation guide for more details.

Get In Touch

If you have any questions, issues, or other feedback about Immutant, you can always find us on #immutant on freenode or our mailing lists.

Issues resolved in 2.0.0-beta2

  • [IMMUTANT-439] - Provide SSE support in web
  • [IMMUTANT-515] - Add :servlet-name to the options for run to give the servlet a meaningful name
  • [IMMUTANT-517] - Allow undertow-specific options to be passed directly to web/run
  • [IMMUTANT-518] - Error logged for every websocket/send!
  • [IMMUTANT-520] - WunderBoss Options don't load properly under clojure 1.7.0
  • [IMMUTANT-521] - Add API for async channels
  • [IMMUTANT-524] - immutant.web/run no longer accepts a Var as the handler
  • [IMMUTANT-526] - Improve the docs for messaging/subscribe to clarify subscription-name

by The Immutant Team at March 06, 2015 05:16 PM

QuantOverflow

How to filter and normalize market data obtained from distinct sources (FIX 4.4, bloomberg, etc) in an algorithmic trading system?

I'm wondering if some of you known how to resolve this requirement:

I have to define the architecture of an algorithmic trading system (but I'm not an architect, so I'm trying to do my best). I have defined an initial architecture, but just now I'm stuck at the data feed handler component.

I mean, the system will receive market data from different sources (Bloomberg, FIX 4.4, etc.) and must normalize that data to produce a usable data feed, which is supposed to be consumed by algorithms that make some calculations and create some orders, something like this:

mk data providers => data component => normalize data => usable data => algorithms (consume normalized data)

So, I'm wondering if you know a good way to build this, or maybe you know a good open-source market data feed handler that can receive data from different providers and produce one clean, normalized stream of market data.
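For illustration only, a minimal Scala sketch of the normalization boundary described above (every name here is made up rather than taken from an existing library):

// One internal, source-agnostic representation of a quote/tick.
final case class NormalizedTick(symbol: String, bid: BigDecimal, ask: BigDecimal, tsMillis: Long)

// Each provider (FIX session, Bloomberg feed, ...) gets an adapter that only
// knows how to translate its own wire format into the internal representation.
trait FeedAdapter[Raw] {
  def normalize(raw: Raw): Option[NormalizedTick]   // None = drop/unsupported message
}

// The algorithms depend only on the normalized stream, never on a vendor format.
trait TickConsumer {
  def onTick(tick: NormalizedTick): Unit
}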

I will appreciate very much your answer. Thanks in advance.

PS: I have been doing some research and I found this:

And for now I'm just reviewing those sites...

by macrux at March 06, 2015 05:11 PM

StackOverflow

compojure destructuring making integers... not integers?

This Compojure GET route with hard-coded id...

   ;posts 
(GET "/post:id" [id :as request]
  ;(str "the post id is... " id)
   (def email (get-in request [:session :ze-auth-email]))
   (vb/post-page-draw email 17592186045616))

Works^

However, with symbolic id (on the last line)...

   ;posts 
(GET "/post:id" [id :as request]
  ;(str "the post id is... " id)
   (def email (get-in request [:session :ze-auth-email]))
   (vb/post-page-draw email id)

Where the url is:

localhost:4000/post17592186045616  ;;i.e. the number from above

(edit: no colon between the word post and the id)

Returns a huuuge stack trace, mainly breaking on

java.lang.Exception
processing rule: (q__7967 ?title ?content ?tags ?eid), 
message: processing clause: [?eid post/title ?title], 
message: Cannot resolve key: 17592186045616

So, I've been able to isolate it to compojure destructuring just not liking the integer I'm passing... how can I get my (vb/post-page-draw email id) to work with parameters passed via the URL?

by sova at March 06, 2015 05:07 PM

TheoryOverflow

any connection between Schaefer's dichotomy theorem and SAT transition point?

Schaefer 1978 found a dichotomy theorem for SAT formulas where, roughly stated, clause structure determines whether an instance is either in P or NP complete. on the surface this seems to have parallels to the SAT transition point finding/ research for random formulas.

has the Schaefer dichotomy theorem ever been somehow linked/ connected to the SAT transition point dynamics/ framework?

by vzn at March 06, 2015 05:02 PM

StackOverflow

Is there a good clojure library for manipulating and creating docx? Or even a wrapper for docx4java?

As the question suggests really. I am looking (so far in vain) for a good clojure library to create/update Microsoft docx files.

Thanks

by user3231690 at March 06, 2015 05:01 PM

Fefe

Austria has sold government bonds with negative interest rates. ...

Austria has sold government bonds with negative interest rates. So people are lending Austria money and getting less back. And on top of that there is also inflation. That must be this invisible hand of the market that allocates everything optimally under capitalism!

Update: Germany has this too.

March 06, 2015 05:01 PM

High Scalability

Stuff The Internet Says On Scalability For March 6th, 2015

Hey, it's HighScalability time:


The future of technology in one simple graph (via swardley)
  • $50 billion: the worth of AWS (is it low?); 21 petabytes: size of the Internet Archive; 41 million: # of views of posts about a certain dress

  • Quotable Quotes:
    • @bpoetz: programming is awesome if you like feeling dumb and then eventually feeling less dumb but then feeling dumb about something else pretty soon
    • @Steve_Yegge: Saying you don't need debuggers because you have unit tests is like saying you don't need detectives because you have jails.
    • Nasser Manesh: “Some of the things we ran into are issues [with Docker] that developers won’t see on laptops. They only show up on real servers with real BIOSes, 12 disk drives, three NICs, etc., and then they start showing up in a real way. It took quite some time to find and work around these issues.”
    • Guerrilla Mantras Online: Best practices are an admission of failure.
    • Keine Kommentare: Using master-master for MySQL? To be frankly we need to get rid of that architecture. We are skipping the active-active setup and show why master-master even for failover reasons is the wrong decision.
    • Ed Felten: the NSA’s actions in the ‘90s to weaken exportable cryptography boomeranged on the agency, undermining the security of its own site twenty years later.
    • @trisha_gee: "Java is both viable and profitable for low latency" @giltene at #qconlondon
    • @michaelklishin: Saw Eric Brewer trending worldwide. Thought the CAP theorem finally went mainstream. Apparently some Canadian hockey player got traded.
    • @ThomasFrey: Mobile game revenues will grow 16.5% in 2015, to more than $3B
    • John Allspaw: #NoEstimates is an example of something that engineers seem to do a lot, communicating a concept by saying what it’s not.

  • Improved thread handling contention, NDB API receive processing, scans and PK lookups in the data nodes have led to a monster 200M reads per second in MySQL Cluster 7.4. That's on an impressive hardware configuration to be sure, but it doesn't matter how mighty the hardware if your software can't drive it.

  • Have you heard of this before? LinkedIn shows how to use SDCH, an HTTP/1.1-compatible extension, which reduces the required bandwidth through the use of a dictionary shared between the client and the server, to achieve impressive results: When sdch and gzip are combined, we have seen additional compression as high as 81% on certain files when compared to gzip only. Average additional compression across our static content was about 24%. 

  • Double awesome description of How we [StackExchange] upgrade a live data center. It was an intricate multi-day highly choreographed dance. Toes were stepped on, but there was also great artistry. The result of the new beefier hardware: the decrease in question render times (from approx. 30-35ms to 10-15ms) is only part of the fun. Great comment thread on reddit.

  • A jaunty exploration of Microservices: What are They and Why Should You Care?  Indix is following Twitter and Netflix by tossing their monolith for microservices. The main idea: Microservices decouples your systems and gives more options and choices to evolve them independently.

  • Wired's new stack: WordPress, PHP, Stylus for CSS, jQuery, starting with React.js, JSON, Vagrant, Gulp for task automation, Git hooks, linting, GitHub, Jenkins.

  • Have you ever wanted to search Hacker News? Algolia has created a great search engine for HN.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

by Todd Hoff at March 06, 2015 04:56 PM

StackOverflow

Filtering values on right side of scalaz disjunction

I have a result consisting of a list of Vectors in a scalaz disjunction and I want to be able to examine and filter out elements from within the right side.

simplified example:

import scalaz._
import Scalaz._

type TL = Throwable \/ List[Vector[Int]]

val goodTL: TL = \/-(List(Vector(1,2,3),Vector(), Vector(2,3,4)))

If I want to remove the empty element and also any values != 2 from the populated elements, I can do the following:

for {
  v <- goodTL
  f = v.flatten
} yield for {
  i <- f
  if i != 2
} yield i

giving a scalaz.\/[Nothing,List[Int]] = \/-(List(1, 3, 3, 4)) which is what I want but I would like to know if there is a less convoluted way of achieving this.
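For comparison, a shorter formulation (a sketch assuming scalaz 7.x, where \/ is right-biased and so supports map directly on the right side):

val filtered: Throwable \/ List[Int] = goodTL.map(_.flatten.filter(_ != 2))
// \/-(List(1, 3, 3, 4))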

by Gavin at March 06, 2015 04:55 PM

Outputting 'null' for Option[T] in play-json serialization when value is None

I'm using play-json's macros to define implicit Writes for serializing JSON. However, it seems that by default play-json omits fields whose Option values are set to None. Is there a way to change the default so that it outputs null instead? I know this is possible if I define my own Writes definition, but I'm interested in doing it via macros to reduce boilerplate code.

Example

case class Person(name: String, address: Option[String])

implicit val personWrites = Json.writes[Person]    
Json.toJson(Person("John Smith", None))

// Outputs: {"name":"John Smith"}
// Instead want to output: {"name":"John Smith", "address": null}
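For contrast, the hand-written Writes the question is trying to avoid would look roughly like this (a sketch; the null-emitting behaviour is handled per field):

import play.api.libs.json._

implicit val personNullWrites: Writes[Person] = Writes { p =>
  Json.obj(
    "name"    -> p.name,
    // None becomes an explicit JsNull instead of being dropped from the object
    "address" -> p.address.map(JsString(_)).getOrElse[JsValue](JsNull)
  )
}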

by Dia Kharrat at March 06, 2015 04:53 PM

StackOverflow

IntelliJ Scala: Package name doesn't correspond to directories structure

I have an existing project in IntelliJ. I have ensured the project conforms to the Maven Standard Directory layout. I have used g8 to install sbt. I have been able to run sbt successfully and am now trying to write some tests using scalatest.

My classSpec.scala cannot see the classes within the rest of the project.

Given the File Structure is as follows:

~/projects/scala-project/build.sbt

~/projects/scala-project/project/build.properties

~/projects/scala-project/src/main/scala/class.scala

~/projects/scala-project/src/test/scala/classSpec.scala

Within the build.sbt file I have:

name := "Scala Project"

organization := "com.examples"

version := "0.1.0-SNAPSHOT"

scalaVersion := "2.11.2"

crossScalaVersions := Seq("2.10.4", "2.11.2")

libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest" % "2.2.1",
  "org.scalacheck" %% "scalacheck" % "1.11.5"
)

initialCommands := "import com.example.scalaproject._"

Does anyone know if this is the issue?

by dan.mi.sun at March 06, 2015 04:27 PM

QuantOverflow

Real-time market data from the exchanges: what should we be aware of?

We receive daily end-of-day data from a data vendor (i.e. not direct from an exchange) and are comfortable with this.

We are now wanting to receive live data, and after a few enquiries we are feeling tempted to go direct to the exchanges rather than to a vendor.

Obviously we expect the work of dealing with live feeds to be different from dealing with CSV files (which is typically what you do for end-of-day data), and we are carrying out our due diligence to see what is involved in receiving/parsing/storing real-time data.

Questions:

  • Do you have any suggestions or know of any guides, tutorials, or advice pages on what is involved if you want to receive real-time data feeds?
    For example, the CME Group pages (link) seem very thorough, but I am hoping to find something like a 'Real Time Data Feeds for Dummies'.

  • Are there any significant reasons why you would recommend getting live data from a vendor rather than direct from the exchanges?

FYI:

  • we are focused on fixed-income products (futures & options), and need data from only 2 exchanges (CME Group and ICE),
  • we work mostly with Python and R, and have experienced C/C++/C# coders in the team,
  • we do not need any GUI front-end applications for browsing the data or for doing analysis, we just want to get the data into our database so that our in-house applications can use it.

Update The reply from @chollida raises an important point about the connectivity: wherever you get your real-time streaming data from you are going to have to demonstrate, to some degree, that you have a properly secure connection and that you have a proper audit on the way you use that data. So let me add another couple of questions:

  • For real-time data are the network security/connectivity and data-usage compliance/audit obligations something that we should worry about to the point that we should consider bringing a network expert into the team?
  • Are the network security/connectivity and data-usage compliance/audit obligations easier to satisfy if you take your data from a vendor or from the exchange?

by Robert at March 06, 2015 04:26 PM

StackOverflow

JSON4S unknown error

I am trying to get json4s to extract something, but I get an "unknown error".

My code looks like:

import org.json4s._
import org.json4s.jackson.JsonMethods._ 
implicit val formats = org.json4s.DefaultFormats

case class Person(name: String, age: Int)
val json = """{"name":"joe","age":15}"""

print(parse(json).extract[Person])

Error stack trace:

org.json4s.MappingException: unknown error
    at org.json4s.Extraction$.extract(Extraction.scala:50)
    at org.json4s.ExtractableJsonAstNode.extract(ExtractableJsonAstNode.scala:21)
    at Main$$anon$1.<init>(test.scala:8)
    at Main$.main(test.scala:1)
    at Main.main(test.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at scala.tools.nsc.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:78)
    at scala.tools.nsc.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:24)
    at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:88)
    at scala.tools.nsc.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:78)
    at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:101)
    at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:33)
    at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:40)
    at scala.tools.nsc.ScriptRunner.scala$tools$nsc$ScriptRunner$$runCompiled(ScriptRunner.scala:171)
    at scala.tools.nsc.ScriptRunner$$anonfun$runScript$1.apply(ScriptRunner.scala:188)
    at scala.tools.nsc.ScriptRunner$$anonfun$runScript$1.apply(ScriptRunner.scala:188)
    at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1.apply$mcZ$sp(ScriptRunner.scala:157)
    at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1.apply(ScriptRunner.scala:131)
    at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1.apply(ScriptRunner.scala:131)
    at scala.tools.nsc.util.package$.waitingForThreads(package.scala:26)
    at scala.tools.nsc.ScriptRunner.withCompiledScript(ScriptRunner.scala:130)
    at scala.tools.nsc.ScriptRunner.runScript(ScriptRunner.scala:188)
    at scala.tools.nsc.ScriptRunner.runScriptAndCatch(ScriptRunner.scala:201)
    at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:58)
    at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:80)
    at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:89)
    at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: java.lang.NoSuchFieldException: MODULE$
    at java.lang.Class.getField(Class.java:1520)
    at org.json4s.Meta$$anonfun$mappingOf$1$$anonfun$8.apply(Meta.scala:208)
    at org.json4s.Meta$$anonfun$mappingOf$1.apply(Meta.scala:207)
    at org.json4s.Meta$$anonfun$mappingOf$1.apply(Meta.scala:195)
    at org.json4s.Meta$Memo.memoize(Meta.scala:240)
    at org.json4s.Meta$.mappingOf(Meta.scala:195)
    at org.json4s.Extraction$.mkMapping$1(Extraction.scala:207)
    at org.json4s.Extraction$.org$json4s$Extraction$$extract0(Extraction.scala:214)
    at org.json4s.Extraction$.extract(Extraction.scala:47)
    ... 34 more

I run Scala 2.9.2 with json4s 3.1.0 (the 2.9.2 build) and the Jackson parser 2.1.1.

before i went to "json4s" i tryed jackson directly and got not even the the provided examples to work with extraction so i figure something is off here but not rly a clue where to start looking.

Thanks in advance for help :)

by Oliver Zachau at March 06, 2015 04:26 PM

StackOverflow

Infinite loop seems to confuse Scala's type system

Here is an artificial toy example that demonstrates my problem:

def sscce(): Int = {
  val rand = new Random()
  var count = 0
  while (true) {   // type mismatch; found: Unit, required: Int
    count += 1
    if (rand.nextInt() == 42) return count
  }
}

How can I help the compiler understand that this method will always return an Int?

I know the above toy example could easily be refactored to get rid of the infinite loop altogether, but I really want to have the infinite loop in my actual code. Trust me on this ;)
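A minimal sketch of one way to satisfy the type checker: while (true) has type Unit, so ending the body with an expression of type Nothing (which conforms to Int) makes the method well-typed without touching the loop.

import scala.util.Random

def sscce(): Int = {
  val rand = new Random()
  var count = 0
  while (true) {
    count += 1
    if (rand.nextInt() == 42) return count
  }
  sys.error("unreachable")   // Nothing <: Int, so the compiler is satisfied
}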

by FredOverflow at March 06, 2015 04:21 PM

How can i do this better with Ramda.js

So I have a list of divs: list

I want a subset of the list, removing the divs with the .fade class, and also just grabbing the list starting from the div with the .selected class.

So, using R.takeWhile and R.dropWhile.

Then I want to map over that new list and add an .active class to a subset of that list with R.take and R.forEach or R.map.

something like :

var takeFromSelected = R.dropWhile(function(item){ return !$(item).hasClass('selected'); });

var removeFadeItems = R.takeWhile(function(item){ return !$(item).hasClass('fade'); });

var addActiveClass = function(x){ $(x).addClass('active'); };

var processList = R.pipe(R.map(addActiveClass), removeFadeItems, takeFromSelected);

processList(list);

I'm really new to this FP stuff and am trying to get the hang of it.

Any insight would be greatly appreciated!! Thanks! :)

Update

For future reference, this is what I did:

@addActiveClass = (x)->  
  $(x).addClass('active') 
  return

@takeFromSelected = R.dropWhile((item)-> !$(item).hasClass('selected'))

@removeFadeItems = R.takeWhile((item)-> !$(item).hasClass('fade'))

@addWeekView = R.compose(addActiveClass, removeFadeItems, takeFromSelected)

by kevohagan at March 06, 2015 04:17 PM

QuantOverflow

Any one know examples for co-integrated stocks?

Can anyone give examples of pairs of shares/stocks that can be used in pairs trading?

One such example is Royal Dutch Shell A vs Royal Dutch Shell B shares.

I want a few more examples to check, in order to implement a trading strategy.

by Dhanushka Rajapaksha at March 06, 2015 04:16 PM

QuantOverflow

What is an efficient data structure to model order book?

What is an efficient data structure to model an order book of prices and quantities that ensures:

  1. constant look up
  2. iteration in order of prices
  3. retrieving best bid and ask in constant time
  4. fast quantity updates
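One common sketch that aims at the four requirements above is a sorted map of price levels per side, optionally paired with a hash map for per-order lookup; the Scala below is illustrative only (all names are made up):

import scala.collection.immutable.TreeMap

final class BookSide(descending: Boolean) {
  private val ord: Ordering[BigDecimal] =
    if (descending) Ordering.BigDecimal.reverse else Ordering.BigDecimal

  // price -> aggregate quantity; ordered iteration and cheap access to the best price
  private var levels = TreeMap.empty[BigDecimal, Long](ord)

  def update(price: BigDecimal, qty: Long): Unit =
    levels = if (qty == 0L) levels - price else levels + (price -> qty)

  def best: Option[(BigDecimal, Long)] = levels.headOption      // best bid (or ask)
  def byPrice: Iterator[(BigDecimal, Long)] = levels.iterator   // in price order
}

val bids = new BookSide(descending = true)
bids.update(BigDecimal("100.25"), 500)
bids.update(BigDecimal("100.50"), 200)
// bids.best == Some((100.50, 200))

Updates and best-price reads are O(log n) in the number of price levels here; true O(1) best-of-book is usually achieved by caching the top of book separately, and O(1) lookup by keying a plain hash map on order id alongside the sorted levels.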

by Sam Hayen at March 06, 2015 04:13 PM

StackOverflow

How to include transitive dependencies when using ProjectRef in sbt?

I would like to reference a project from another sbt project without creating an aggregate ("root") project.

I have two projects that depend on some shared code; however, they are independent enough that putting them under an aggregate ("root") project does not really make sense (i.e. I am never going to build both projects at the same time, and I would like them to be in separate git repositories).

My current solution is as follows:

lazy val core = ProjectRef(file("../Core"), "core")

lazy val console = project.in(file(".")).dependsOn(core)

This worked until I had to add a library dependency to the "core" project, and now I can't build the "console" project. It fails with the following message:

[error] missing or invalid dependency detected while loading class file 'Settings.class'.
[error] Could not access term typesafe in package com,
[error] because it (or its dependencies) are missing. Check your build definition for
[error] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[error] A full rebuild may help if 'Settings.class' was compiled against an incompatible version of com.

It is obviously referring to the missing "com.typesafe.config" library that core depends on.

Is there any way to fix this so that the console project is compiled with dependencies of core?

by Ivan Poliakov at March 06, 2015 03:58 PM

scala match from list of Boolean to list of summed Int

I have a general question about scala/spark's list matching. Say I have a List of Boolean in the form of:

List(true, false, false, true, true)

I wish to convert this List of Boolean to something like:

List(1, 1, 1, 2, 3)

such that each time there is a true, the list adds 1, and each time there is a false, it outputs the previous result. I think there are some really efficient ways to do this without looping, but I cannot think of any right now.
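One way that avoids an explicit loop is a scanLeft over the flags, which carries the running count of trues (a minimal sketch):

val flags = List(true, false, false, true, true)

// scanLeft threads an accumulator through the list; dropping the seed with .tail
// aligns the running counts with the input elements
val counts = flags.scanLeft(0)((acc, b) => if (b) acc + 1 else acc).tail
// counts == List(1, 1, 1, 2, 3)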

by qmeeeeeee at March 06, 2015 03:51 PM

Planet Theory

TR15-033 | An Ultimate Trade-Off in Propositional Proof Complexity | Alexander Razborov

We exhibit an unusually strong trade-off between resolution proof width and tree-like proof size. Namely, we show that for any parameter $k=k(n)$ there are unsatisfiable $k$-CNFs that possess refutations of width $O(k)$, but such that any tree-like refutation of width $n^{1-\epsilon}/k$ must necessarily have {\em double} exponential size $\exp(n^{\Omega(k)})$. Conceptually, this means that there exist contradictions that allow narrow refutations, but in order to keep the size of such a refutation even within a single exponent, it must necessarily use a high degree of parallelism. Viewed differently, every tree-like narrow refutation is exponentially worse not only than wide refutations of the same contradiction, but of any other contradiction with the same number of variables. This seems to significantly deviate from the established pattern of most, if not all, trade-off results in complexity theory. Our construction and proof methods combine, in a non-trivial way, two previously known techniques: the hardness escalation method based on substitution formulas and expansion. This combination results in a {\em hardness compression} approach that strives to preserve hardness of a contradiction while significantly decreasing the number of its variables.

March 06, 2015 03:32 PM

Fred Wilson

Feature Friday: Twitter Video

I tried posting video to Twitter today. It works simply and easily.

I haven’t seen a ton of video in my feed so far, so it’s not clear that posting video has become popular with Twitter users.

But it’s just as easy as posting a photo so I expect it will become more and more common over time.

by Fred Wilson at March 06, 2015 03:28 PM

Jeff Darcy

Content From hekafs.org

Executive summary: all of that stuff's over here now. If you have links to it, just change "http://hekafs.org" to "http://pl.atyp.us/hekafs.org" and almost everything should work.

A while ago, I got a notice that the hekafs.org domain was about to expire. Even though domains don't cost much, I didn't feel particularly thrilled about continuing to bear that cost in perpetuity for a site that has been inactive for years and seems likely to remain so. Knowing that there's some content people might still find useful, I tried to find out if anyone at Red Hat could take it over. After all, we sponsor hundreds of similar projects, the same issue must have come up for some of them, surely there must be a well established procedure for this. Right? Well, if there is, I couldn't find it. Everyone seemed to think this was Somebody Else's Problem. Sometimes I forget that, despite the many ways it's unique, Red Hat is still a large company with many of the usual large-company dysfunction. [sigh] I let the domain expire.

I guess I didn't realize just how many people, from developers at Red Hat to complete strangers, rely on that content. I get email about the broken links, especially the "Translator 101" series, multiple times per week. For every person who sends email, there are probably two more who didn't bother. Unfortunately, I couldn't renew the domain if I wanted to. First it was in some sort of "timeout" period when it couldn't be re-registered (even by me as its prior owner) and then some domain-parking doofus snapped it up to serve ads.

For a while now, all of the content has actually been available under pl.atyp.us (see the executive summary). Today I went through and fixed up all of the hyperlinks, image links, and everything else I could think of so that the site almost works normally. The only thing I know of that doesn't work is the search box, because that relies on PHP and all of the content is actually static now. However, having links both in this post and on my archives page should allow Google to see all that stuff again, so in a while people will be able to search that way (as most of them probably do already).

March 06, 2015 03:26 PM

QuantOverflow

How to test the 5 Factor CAPM of Fama & French (2014)?

I would like to conduct a study testing the 5 factor CAPM, using UK stocks.

Does anyone have any suggestions of how I can do this?

Could this task be as simple as regressing average returns for a stock on the different factors?

I'd be grateful for any specific pointers/articles or books I can read!
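For reference, the time-series regression usually run to test the five-factor model takes roughly this form (notation as in Fama and French, 2014): $R_{it} - R_{Ft} = a_i + b_i(R_{Mt} - R_{Ft}) + s_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + e_{it}$, and the usual test is whether the intercepts $a_i$ are jointly indistinguishable from zero across a set of UK test portfolios (e.g. via a GRS test).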

by Harry at March 06, 2015 03:25 PM

StackOverflow

Naming convention for MongoDB in combination with Play Framework (Scala)

I'm new to MongoDB and the Play Framework (Scala); I came from Java and RDBMS. With regard to MongoDB everything is clear, and much has been written on the subject: use JavaScript-style names (first_name, or even better "fn" to save space). But what about the Scala code, where I use camelCase? For example, I would have written a hierarchy like this:

case class User (... personalInfo: PersonalInfo, ...)
case class PersonalInfo (... fullName: FullName, ...)
case class FullName (firstName: String, middleName: Option [String], lastName: String)

If I want to work with MongoDB simply (namely use JSON Macro Inception and not write custom readers and writers), I must change these classes to something like this:

case class User (... p_inf: PersonalInfo, ...)
case class PersonalInfo (... f_nm: FullName, ...)
case class FullName (fn: String, mn: Option [String], ln: String)

But this code looks ugly to me (both the abbreviated words and the overall style):

val firstName = user.p_inf.f_nm.fn

I'm going to make another constant for the fields, which are then used in javascript/html code (and here this style is suitable), for example

trait JsFields {
val jsFirstName = "p_inf.f_nm.fn"
}

user.scala.html:

import ... JsFields._
<input name="@jsFirstName" />

So the MongoDB and JS/HTML styles fit, but the business logic does not. Please describe best naming practices for MongoDB/Play Framework in conjunction with Scala/HTML/JS, preferably with examples (e.g. the fields above), and maybe more suggestions.

Thanks in advance

by Alex at March 06, 2015 03:24 PM

Why does Scala ignore exclamation points in command line arguments?

If I write a Scala script (or App) that just prints out the command line arguments, I notice that it ignores all the '!' characters. For example, if my script looks like:

println(args(0))

And I run it with:

scala MyScript "Hello!"

I get the output:

Hello

And if I run it with:

scala MyScript "Hello! World!"

I also get:

Hello

What's going on? What is the purpose of the ! character in arguments that causes this behaviour?

by emote_control at March 06, 2015 03:23 PM

Scala: Dispatch

I'm wondering how to implement an extensible dispatch mechanism in Scala.

For example:

I have a trait called Sender (with a method 'send') and a bunch of classes that implement that trait (MailSender, IPhoneSender, AndroidSender). On top of them there is a class which implements the same trait but dispatches the message to the above senders depending on the type of the message.

I know I can use pattern matching, but my problem with that approach is about extensibility: If someone wants to add another sender (i.e. WindowsPhoneSender), he must add a new case to the pattern matching method (thus breaking the open-closed principle). I don't want developers to modify the library's code, so I need this to be as extensible as possible.

I thought about a chain of responsibility approach (in Java I would do that), but is there a better way in Scala? (My knowledge in Scala is limited, but I know the Scala compiler does a lot of magical things)
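One possible shape for this, sketched below with made-up names: each sender contributes a PartialFunction, and new senders are added by registration rather than by editing a central match.

trait Message
final case class Mail(to: String, body: String) extends Message
final case class Push(deviceId: String, body: String) extends Message

trait Sender {
  def send(msg: Message): Unit
}

// Dispatches by chaining the registered partial functions together.
class DispatchingSender(handlers: Seq[PartialFunction[Message, Unit]]) extends Sender {
  private val combined = handlers.reduce(_ orElse _)
  def send(msg: Message): Unit =
    if (combined.isDefinedAt(msg)) combined(msg)
    else sys.error(s"no sender registered for $msg")
}

val mailSender: PartialFunction[Message, Unit] = { case Mail(to, _) => println(s"mail to $to") }
val pushSender: PartialFunction[Message, Unit] = { case Push(id, _) => println(s"push to $id") }

val dispatcher = new DispatchingSender(Seq(mailSender, pushSender))
dispatcher.send(Mail("a@example.com", "hi"))   // prints "mail to a@example.com"

Adding a WindowsPhoneSender then means registering one more partial function when the dispatcher is constructed, with the existing senders untouched.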

Thanks!

by Santiago Ignacio Poli at March 06, 2015 03:19 PM

TheoryOverflow

Distinguishing semantics vs syntactic techniques and the syntax of your semantic domains

Consider a denotational semantics from simply-typed $\lambda$-calculus into dependent type theory. Is that actually a (trivial) term transformation into that dependent type theory? After all, type theory has a syntax.

In fact, even set theory has a syntax*! So how do we distinguish a denotational semantics from a compositional term transformation?

Now, let's generalize to less trivial program transformations — say, transformation to continuation-passing style (or store-passing style, environment passing style, ...). You can show the same idea through a non-standard semantics (here, a continuation-passing semantics) or a term transformation into a continuation-passing term, and they're distinguished by a binding-time shift. Again, isn't the non-standard semantics also a term transformation?

This is a concrete confusion which I've observed at least twice:

  • In my work (on incremental computation) I've used a non-standard denotational semantics into type theory (a "change-passing" semantics). After a presentation of that, Gabriel Scherer remarked (kindly) that for him, that was a term transformation into a dependently typed language.
  • "F-ing modules" preempts this confusion — they defend their presentation of the syntax of semantic objects.

    Semantic signatures. The syntax of semantic signatures is given in Figure 9. (And no, this is not an oxymoron, for in our setting the “semantic objects” we are using to model modules are merely pieces of Fω syntax.) [Emphasis added.]

*Apparently, some (non-formalists) claim that set theory is not "just syntax", but something ontologically different. I'll ignore this subtle philosophical issue; the only reference I know on it is Raymond Turner's Understanding Programming Languages.

by Blaisorblade at March 06, 2015 03:14 PM

For what languages is there already a theory of observational equivalence?

For a correctness proof, I'm looking for a usable notion of program equivalence $\cong$ for Barendregt's pure type systems (PTSs); missing that, for enough specific type systems. My goal is simply to use the notion, not to investigate it for its own sake.

This notion should be "extensional" — in particular, to prove that $t_1 \cong t_2$, it should be enough to prove that $t_1\; v \cong t_2\; v$ for all values $v$ of the appropriate type.

Denotational equivalence easily satisfies all the right lemmas, but a denotational semantics for arbitrary PTS seems rather challenging — it'd appear hard already for System F.

The obvious alternative are then various forms of contextual equivalence (two terms are equivalent if no ground context can distinguish them), but its definition is not immediately usable; the various lemmas aren't trivial to prove. Have they been proved for PTS? Alternatively, would the theory be an "obvious extension", or is there reason to believe the theory would be significantly different?

I'm actually tempted to make my proofs conditionally on a conjectured theory of equivalence for PTS, but the actual theories require nontrivial arguments, so I'm not sure how likely such a conjecture would be to hold.

I'm aware (though not in detail) of the following works:

  • Andrew Pitts (for instance in ATTAPL for an extended System F, and in a few papers).
  • Practical Foundations of Programming Languages (chapters 47-48), which is inspired by Pitts (but claims to have simpler proofs).
  • A logical study of program equivalence. I can't find an English abstract, but it seems to spend a great deal of effort for side effects (references), which seems an orthogonal complication.

by Blaisorblade at March 06, 2015 03:10 PM

CompsciOverflow

Binary digit problem?

Question:

If a system has $32k$ bytes and each such byte has a unique address (so $32k$ addresses), what is the smallest possible number of bits that can be used by every byte for the address? All the bytes use the same number of bits for the address.

I thought the answer could be found by finding the value to which $2$ can be raised such that it equals $32,000$. I got that $2$ raised to $\sim 14.96$ gives $32,000$. But a byte only has $8$ bits, so this can't make any sense.

Can anyone give any pointers on how to solve this problem?
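For reference (and assuming the usual convention that $32k$ means $32 \times 1024 = 2^{15}$ bytes), it is the number of distinct addresses, not the width of a byte, that fixes the address size: $\log_2(32 \times 1024) = 15$, so 15 address bits suffice. The 8 bits inside a byte are unrelated to how many bits its address needs.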

by Jenna Maiz at March 06, 2015 02:56 PM

StackOverflow

Elasticsearch Scripts to add element into an array

I am working with Elasticsearch in a Scala project, using elastic4s as the client. I am trying to add elements to a document from an iterator, one by one.

while (iterator.hasNext) {
  counter +=1
  client.execute {
    update id reportID in "reports/report" script "ctx._source.elasticData += output" params Map("output" -> iterator.next().toStringifiedJson)
  }.await
}

The above code does not work, yielding the following error:

    [ERROR] [03/06/2015 14:44:23.515] [SparkActorSystem-akka.actor.default-dispatcher-5] [akka://SparkActorSystem/user/spark-actor] failed to execute script
    org.elasticsearch.ElasticsearchIllegalArgumentException: failed to execute script
        at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:189)
        at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:176)
        at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
        at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: script_lang not supported [groovy]
        at org.elasticsearch.script.ScriptService.dynamicScriptEnabled(ScriptService.java:521)
        at org.elasticsearch.script.ScriptService.verifyDynamicScripting(ScriptService.java:398)
        [ERROR] [03/06/2015 14:44:23.515] [SparkActorSystem-akka.actor.default-dispatcher-5] [akka://SparkActorSystem/user/spark-actor] failed to execute script
    org.elasticsearch.ElasticsearchIllegalArgumentException: failed to execute script
        at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:189)
        at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:176)
        at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:170)
        at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:187)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: script_lang not supported [groovy]
        at org.elasticsearch.script.ScriptService.dynamicScriptEnabled(ScriptService.java:521)
        at org.elasticsearch.script.ScriptService.verifyDynamicScripting(ScriptService.java:398)
        at org.elasticsearch.script.ScriptService.compile(ScriptService.java:363)
        at org.elasticsearch.script.ScriptService.executable(ScriptService.java:503)
        at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:183)
        ... 6 more
        at org.elasticsearch.script.ScriptService.compile(ScriptService.java:363)
        at org.elasticsearch.script.ScriptService.executable(ScriptService.java:503)
        at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:183)
    ... 6 more

The problem is with the script, I assume, but I could not find any solution. Please help...

by igalbenardete at March 06, 2015 02:41 PM

QuantOverflow

What is an appropriate algorithm to use for tax loss harvesting?

I've been reading into how Betterment and Wealthfront have architected their tax loss harvesting algorithms, but they stop short of providing any real examples.

Essentially, they both reduce to:

Benefit – Cost ≥ Threshold

They differ in how they define each term, however. Threshold is proprietary and the result of Monte Carlo research, so let's leave it aside for now.

Benefit looks like it is best measured as a percentage of the asset class in the portfolio. For example, if the cost basis of the emerging markets fund makes up $10,000 of a portfolio and the loss is $60, does it make sense to say the benefit is

($60 ÷ $10,000) = .6%

Then cost, for example, would equal the management fee difference. Let's say .09% and .18%, plus any commissions or bid/ask spread losses expressed as a percent.

(.18% - .09%) - 0 - 0 = .09%

Making the difference

.6% - .09% ≥ Threshold

Have I thought about this correctly or am I missing IRR and time horizons?
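For concreteness, here is a minimal sketch of that decision rule in Scala. The numbers are the ones above, and the 15% tax rate (which would turn the $400 loss into the $60 benefit used in the calculation) is purely an illustrative assumption, as is the threshold value.

object HarvestCheck {
  // Illustrative inputs (assumptions): a $400 loss on a $10,000 cost basis,
  // a 15% tax rate, fund fees of 0.09% (current) vs 0.18% (replacement).
  val costBasis      = 10000.0
  val harvestedLoss  = 400.0
  val taxRate        = 0.15
  val currentFee     = 0.0009
  val replacementFee = 0.0018
  val threshold      = 0.002   // placeholder for the proprietary Monte Carlo threshold

  // Benefit as a fraction of the position: tax saving divided by cost basis.
  val benefit = harvestedLoss * taxRate / costBasis   // 60 / 10000 = 0.6%
  // Cost: extra ongoing fees of the replacement fund (commissions and spread taken as 0).
  val cost = replacementFee - currentFee               // 0.09%

  def shouldHarvest: Boolean = benefit - cost >= threshold

  def main(args: Array[String]): Unit =
    println(f"benefit=${benefit * 100}%.2f%%, cost=${cost * 100}%.2f%%, harvest=$shouldHarvest")
}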

by Maletor at March 06, 2015 02:35 PM

StackOverflow

Is the FoldLeft function available in R?

I would like to know if there is an implementation of the foldLeft function (and foldRight?) in R.

The language is supposed to be "rather" functionally oriented and hence I think there should be something like this, but I could not find it in the documentation.

To me, the foldLeft function applies to a list and has the following signature:

foldLeft[B](z : B)(f : (B, A) => B) : B

It is supposed to return the following result:

f(... (f(f(z, a0), a1) ...), an) if the list is [a0, a1, ..., an].

(I use the definition of the Scala List API)

Does anybody know if such a function exists in R?
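For reference (this doesn't answer the R part), here is a minimal Scala sketch of the left fold described above, written as explicit recursion so the evaluation order f(... (f(f(z, a0), a1) ...), an) is visible:

// Left fold written out as explicit recursion: the accumulator starts at z and
// is combined with each element from left to right.
def foldLeft[A, B](list: List[A])(z: B)(f: (B, A) => B): B = list match {
  case Nil     => z
  case a :: as => foldLeft(as)(f(z, a))(f)   // fold the tail, seeded with f(z, a)
}

// Evaluates as f(f(f(0, 1), 2), 3) = 6
val sum = foldLeft(List(1, 2, 3))(0)(_ + _)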

by SRKX at March 06, 2015 02:29 PM

Android Studio install ubuntu 12.04

I am trying to install Android Studio on an Ubuntu machine. I downloaded the Android Studio zip file from here. When I run the /android-studio/bin$ sh studio.sh command in a terminal, it says that OpenJDK 6 is not supported. Please use Oracle Java or newer OpenJDK. To fix this problem, I ran sudo apt-get install openjdk-7-jre, but this did not solve my problem. What should I do?

by hola zollil at March 06, 2015 02:29 PM

StackOverflow

Scala swing panel disappears when trying to change contents (only when running a Thread)

So I'm writing a boid simulation program as a project for school. My program supports multiple different groups of these boids that don't flock with other groups, they all have different settings which I do by adding a BoxPanel to the main GUI of the program when a new tribe is made, and those BoxPanels have a settings button that opens a new frame with the groups settings.

This works perfectly when I start the program up and add all the predefined tribes that are in the code. Now I made a new part of the GUI that lets you make new groups of these boids and adds them while the simulation is running, and this is when the problems start for me.

For some weird reason it adds the group just fine, with the right settings, into the simulation, but it won't add the BoxPanels to the main GUI. It makes the whole settings bar that I have at the side of my simulation disappear completely. I tested this out, and if I add the tribes at the beginning of my calculation thread it does the same thing, so this seems to be a problem with multiple threads and Swing. Any ideas what is causing this or how to fix it? I am completely perplexed.

tl;dr: The code below for adding tribes works fine when I haven't started the thread but if I try to use it after starting the thread the optionPanel appears empty.

Here's the code that adds the BoxPanels to the main gui:

      def addTribe(tribe: Tribe) = {
        tribeFrames += new TribeSettingFrame(tribe)
        tribeBoxPanels += new TribeBoxPanel(tribe)
        this.refcontents
      }

      private def refcontents = {
        top.optionPanel.contents.clear()
        top.optionPanel.contents += new BoxPanel(Orientation.Vertical) {
          tribeBoxPanels.foreach(contents += _.tribeBoxPanel)
        }
        top.optionPanel.contents += new BoxPanel(Orientation.Horizontal) {
          contents += top.addTribeButton
        }
        top.optionPanel.contents += new BoxPanel(Orientation.Horizontal) {
          contents += top.vectorDebugButton
        }
      }


  new Thread(BoidSimulation).start()

Oh and I tested if it really adds the contents that it should by printing out the sizes of the contents, and everything matches fine, it just won't draw them.

EDIT: After some digging around, it really seems to be an issue with updating Swing from a thread. A lot of places suggest using SwingWorker, but from the info I gathered I don't think it would fit my program, since it is a continuous simulation and I would have to keep making new SwingWorkers every frame.

EDIT2: Tried calling the method from the thread like this:

SwingUtilities.invokeLater(new Runnable() {
  override def run() {
    GUI2D.addTribe(tribe)
  }
});

Didn't make any difference. I am starting to think that this is a problem with how I use TribeBoxPanel and TribeSettingFrame. These are objects that both contain only one method that returns the wanted BoxPanel or Frame. Is this implementation bad? If so what is the better way of creating dynamic BoxPanels and Frames?
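One thing that may be worth ruling out (an assumption, not a confirmed diagnosis of the problem above): after a container's contents are mutated at runtime, Swing usually needs to be told to re-layout and repaint it. A sketch of what that could look like at the end of refcontents:

// Hypothetical addition at the end of refcontents: ask Swing to re-layout and
// repaint the mutated panel (scala.swing delegates these calls to the Java peer).
top.optionPanel.revalidate()
top.optionPanel.repaint()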

by RusinaRange at March 06, 2015 02:20 PM

How to run `npm install && bower install` before `sbt compile` for Heroku Deployment?

I am working on a Play Framework project which has front-end code in the sub-directory ./ui, managed by Grunt using https://github.com/tuplejump/play-yeoman

Currently I used https://github.com/heroku/heroku-buildpack-multi and set

https://github.com/heroku/heroku-buildpack-nodejs.git
https://github.com/heroku/heroku-buildpack-scala.git

in the .buildpacks file.

And set

{
  "name": "scala-grunt",
  "dependencies": {
    "grunt-cli": "0.1.13"
  },
  "devDependencies": {
    "grunt": "~0.4.5"
  },
  "version": "0.1.0",
  "engines": {
    "node": "~0.10.21"
  }

}

in the package.json file of root directory.

However, when I push the code base to Heroku it throws an exception Fatal error: Unable to find local grunt. I think that is because sbt doesn't run npm install && bower install in the ./ui directory.

Does anyone have ideas about how to run a command npm install && bower install before sbt compile in heroku?

by hanfeisun at March 06, 2015 02:18 PM

QuantOverflow

How to adjust historical data highs/lows for splits and dividends?

When I download historical data from yahoo finance, only the closing price is adjusted for dividends/splits.

Is there a way of figuring out the factor by which, let's say, the highs/lows have to be multiplied in order to adjust them for splits/dividends?
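A common back-of-the-envelope approach (an assumption about the intent; Yahoo only documents the adjusted close itself) is to derive a per-day factor from the two close columns and apply it multiplicatively to the other price fields:

$$f_t = \frac{\mathrm{AdjClose}_t}{\mathrm{Close}_t}, \qquad \mathrm{AdjHigh}_t = f_t \cdot \mathrm{High}_t, \quad \mathrm{AdjLow}_t = f_t \cdot \mathrm{Low}_t, \quad \mathrm{AdjOpen}_t = f_t \cdot \mathrm{Open}_t.$$

This assumes the split/dividend adjustment is a pure multiplicative scaling, which is how the adjusted close is computed in the first place.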

by user2665710 at March 06, 2015 02:17 PM

StackOverflow

SBT Won't update .classpath in a Play Scala project

I am creating a new Play Scala project within Eclipse + Scala IDE.

The original build.sbt file is:

name := """portal"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.11.1"

libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws
)

I've edited it to include some more dependencies:

libraryDependencies ++= Seq(
  jdbc,
  javaEbean,
  cache,
  ws,
  "org.postgresql" % "postgresql" % "9.3-1100-jdbc4",
  "org.scalatestplus" %% "play" % "1.1.0" % "test"
)

I can't figure out why SBT won't include Ebean, postgresql, or scalatest in my classpath. Any help?

by RafaelTSCS at March 06, 2015 02:09 PM

Planet Theory

TR15-032 | Graph Isomorphism, Color Refinement, and Compactness | Vikraman Arvind, Johannes Köbler, Gaurav Rattan, Oleg Verbitsky

Color refinement is a classical technique used to show that two given graphs $G$ and $H$ are non-isomorphic; it is very efficient, although it does not succeed on all graphs. We call a graph $G$ amenable to color refinement if the color-refinement procedure succeeds in distinguishing $G$ from any non-isomorphic graph $H$. Babai, Erdos, and Selkow (1982) have shown that random graphs are amenable with high probability. Our main results are the following:

  • We determine the exact range of applicability of color refinement by showing that the class of amenable graphs is recognizable in time $O((n + m) \log n)$, where $n$ and $m$ denote the number of vertices and the number of edges in the input graph.
  • Furthermore, we prove that amenable graphs are compact in the sense of Tinhofer (1991). That is, their polytopes of fractional automorphisms are integral. The concept of compactness was introduced in order to identify the class of graphs $G$ for which isomorphism $G \cong H$ can be decided by computing an extreme point of the polytope of fractional isomorphisms from $G$ to $H$ and checking if this point is integral. Our result implies that the applicability range for this linear programming approach to isomorphism testing is at least as large as for the combinatorial approach based on color refinement.

March 06, 2015 02:05 PM

CompsciOverflow

Amdahl's law or gustafson's law

I am a little confused about which of the two laws above I should use:

Suppose I have a computer program that is 70% parallelizable; 30% cannot be parallelized. Every single piece of data (100% of the data) passes through the parallelizable part and also through the non-parallelizable part.

I want to calculate how much more data I can process in a fixed amount of time if I use 2 processors instead of 1 processor.

My thinking is that since the 30% share doesn't change when I increase the data, the total time spent in the non-parallelizable part will increase. Therefore, I would guess I have to use Amdahl's Law.

I think Gustafson's Law is used when the total time spent in the non-parallelizable part is constant, and therefore the parallelism goes up.

I cannot find a solution to this, since usually Gustafson's Law is related to problems where the time is fixed and you have to find the amount of data that can be processed for a given number of processors. But this case might be different?
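A quick worked calculation, under the assumption stated above that both the 30% serial part and the 70% parallel part of the work scale with the amount of data: if one unit of data takes time $1$ on one processor, then on $p=2$ processors it takes

$$t(2) = 0.3 + \frac{0.7}{2} = 0.65,$$

so in a fixed time budget you can process $1/0.65 \approx 1.54$ times as much data, which is exactly the Amdahl speedup $1/\bigl(s + (1-s)/p\bigr)$ with $s = 0.3$. Gustafson's law would instead be the right model if the serial part took a fixed absolute time independent of the amount of data, so that only the parallel part grew.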

by NoMorePen at March 06, 2015 02:03 PM

UnixOverflow

How can I resize an md device in FreeBSD?

I have 1GB RAM installed and I want to enlarge both nodes md0 and md1

/dev/md0              38M    216K     35M     1%    /tmp
/dev/md1              58M     20M     33M    39%    /var

I tried this but it fails:

# mdconfig -r -s 128M -u 0
mdconfig: ioctl(/dev/mdctl): Operation not supported

What command should I use?

by Jherek Carnelian at March 06, 2015 01:58 PM

CompsciOverflow

What do we gain by having "dependent types"?

I thought I understood dependent typing (DT) properly, but the answer to this question: Why was there a need for Martin-Löf to create intuitionistic type theory? has had me thinking otherwise.

After reading up on DTs and trying to understand what they are, I find myself wondering: what do we gain from this notion of DTs? They seem to be more flexible and powerful than the simply typed lambda calculus (STLC), although I can't understand "how/why" exactly.

What is that we can do with DTs that cannot be done with STLC? Seems like adding DTs makes the theory more complicated, but what's the benefit?

From the answer to the above question:

Dependent types were proposed by de Bruijn and Howard who wanted to extend the Curry-Howard correspondence from propositional to first-order logic.

This seems to make sense at some level, but I'm still unable to grasp the big picture of "how/why". Maybe an example explicitly showing this extension of the C-H correspondence to FO logic could help drive the point home and clarify what the big deal with DTs is? I'm not sure I comprehend this as well as I ought to.

by PhD at March 06, 2015 01:48 PM

StackOverflow

How do you generate collections with a specific property (like stddev) using test.check

I want to use clojure's test.check library to generate collections that I can do some simple stats on, things like computing mean, stddev, confidence intervals and that sort of thing.

How can I generate the collections so that they have pre-determined values for these properties?

by Pieter Breed at March 06, 2015 01:41 PM

Planet Emacsen

Jorgen Schäfer: Circe 1.6 released

We just released version 1.6 of Circe, a Client for IRC in Emacs.

The package is available from github, Marmalade, MELPA stable and MELPA unstable, even though the latter will track further development changes, so use at your own risk.

Changes

  • The auto-join-channels setting now also accepts an :after-cloak specifier to join channels only after the user’s hostname was successfully cloaked.
  • Scrolling behavior now tries to keep the input line at the bottom of the window via various methods by default. See the docstring for lui-scroll-behavior for details.
  • A channel buffer is now created already when a /JOIN command is issued. This makes new buffer behavior less confusing.
  • Tab completion now excludes your own nick. Sorry, talking to yourself has become more complicated.
  • Circe’s various modes now include separate keymaps so you can easily specify keys that work in all chat modes or even in all Circe modes the same way.
  • Logging can now create subdirectories, allowing for formats like "{buffer}/%Y-%m-%d.txt".
  • The circe-color-nicks module won’t highlight Nicks anymore after they have left the channel. It also now includes a way of excluding nicks from highlighting, to avoid common-word nicks polluting normal messages. Various other bugfixes were included, too.

Thanks to defanor, Taylan Ulrich Bayırlı, and Vasilij Schneidermann for making this release possible!

by Jorgen Schäfer (noreply@blogger.com) at March 06, 2015 01:26 PM

Planet Clojure

Smart classname completion for CIDER

Quite often Clojure hackers need to interoperate with various Java classes. To do that they have to import those classes into the namespace. It is easy if you know the full name of the class, together with its package, but what if you don't? In that case you have to google where the sought class is located, and then type in the whole path to it.

Java developers who use any decent IDE are not familiar with such a problem. They just type in the short name of the class, and auto-completion offers them a list of all classes whose short name matches what they typed in. But can we have something like that for Clojure? Of course we can!

Smart classname completion in action

It will work only inside the :import block of namespace declaration since it uses Compliment context parsing to figure out where the completion was called.

This feature will soon land to CIDER. You can try it out right now if you use the bleeding edge cider-nrepl (that is, 0.9.0-SNAPSHOT), and also update the ac-cider from MELPA to the latest version.

by Alex Yakushev at March 06, 2015 01:21 PM

StackOverflow

Comparing core.async and Functional Reactive Programming (+Rx)

I seem to be a little bit confused when comparing Clojure's core.async to the so-called Reactive Extensions (Rx) and FRP in general. They seem to tackle the similar problem of asynchronicity, so I wonder what the principal differences are and in what cases one is preferred over the other. Can someone please explain?

EDIT: To encourage more in-depth answers I want to make the question more specific:

  1. Core.async allows me to write synchronous-looking code. However, as I understand it, FRP only needs one level of nested callbacks (all the functions that handle logic are passed as arguments to the FRP API). It seems that both approaches make callback pyramids unnecessary. It is true that in JS I have to write function() {...} many times, but the main problem, the nested callbacks, is gone in FRP also. Do I get it right?

  2. "[FRP] complects the communication of messages with flow of control" Can you (someone) please give a more specific explanation?

  3. Can't I pass around FRP's observable endpoints the same way as i pass channels?

In general I understand where both approaches historically come from, and I have tried a few tutorials in both of them. However, I seem to be "paralyzed" by the non-obviousness of the differences. Is there some example of code that would be hard to write in one of these and easy using the other? And what is the architectural reason for that?

by tillda at March 06, 2015 01:12 PM

Planet Emacsen

Irreal: Customizing the Ace-Window Selection Face

Abo-abo has made a slight enhancement to ace-window that makes it possible to customize the face of the selection character of each window. Take a look at the example in abo-abo's post to see what I mean. It makes the character much easier to see and looks very nice.

If you're trying to squeeze the last drop of efficiency from your key bindings, notice how abo-abo has mapped the selection characters to be a s d f g h j k l rather than the default 1 2 3 4 5 6 7 8 9. That means the selection keys are on the home row and very easy to reach. Another micro-optimization that helps make Emacs use as frictionless as possible.

by jcs at March 06, 2015 01:08 PM

QuantOverflow

The role of Gamma in replicating a put

I am analyzing portfolio protection by replication of a put.

Having my portfolio with value $V$, I could buy a put giving me the payoff $P$, resulting in a call-like payoff scenario $C=V+P$. Say I don't want to buy the put, but instead replicate it by taking positions according to the Delta.

I know there are problems involved:

  • Black-Scholes is wrong, we have jumps, changing volatility and other things
  • however if we do it nevertheless then we have to trade frequently (reestimate volatility, take positions with the new Delta, ...)

Suppose I do this often and correctly. What about the Gamma of the put? I am a bit confused. Do I have to address Gamma? Gamma punishes me if I do not trade frequently enough - I know. But how does Gamma influence the success of my procedure? Say vol is constant, the stock price follows a GBM, and the only decision is how often I trade. How does Gamma harm me? Can I do something besides buying other options to hedge Gamma risk, or can I do something using the underlying (I assume not)?
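One standard way to quantify the role of Gamma (a sketch, assuming Black-Scholes dynamics with constant volatility and discrete rebalancing at interval $\Delta t$) is the per-step P&L of the delta-hedged position:

$$\Delta\Pi \approx \tfrac{1}{2}\,\Gamma\,S^2\left[\left(\frac{\Delta S}{S}\right)^2 - \sigma^2\,\Delta t\right].$$

The bracket has zero mean, so on average the replication works even with discrete trading, but every interval leaves a random error proportional to $\Gamma$, and the standard deviation of the accumulated error shrinks only roughly like $1/\sqrt{n}$ in the number of rebalances. So Gamma does not bias the procedure; it drives the dispersion of the replication error, and it is largest near the strike close to expiry. Since the underlying itself has zero Gamma, that residual risk can only be hedged away (rather than diluted by trading more often) with other convex instruments, i.e. other options.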

by Richard at March 06, 2015 01:04 PM

StackOverflow

When to use actors instead of messaging solutions such as WebSphere MQ or Tibco Rendezvous?

I've already read the question and answers to What design decisions would favour Scala's Actors instead of JMS?.

Usually, we use messaging solutions which have existed for years already: either a JMS implementation such as WebSphere MQ or Apache ActiveMQ is used for point-to-point communication, or Tibco Rendezvous for multicast messaging.

They are very stable, proven and offer high availability and performance. Nevertheless, configuration and setup seem much more complex than in Akka.

When and why should I use Akka for some use cases where the aforementioned products - WebSphere MQ or ActiveMQ - have been used successfully so far? Why should I consider using Akka instead of WebSphere MQ or Tibco RV in my future project?

And when should I avoid Akka? Does it offer the same high availability and performance as the other solutions? Or is it a bad idea to even compare Akka to the other messaging middlewares?

Maybe there also is another messaging solution in the JVM environment which I should consider besides JMS (Point-to-Point), TibcoRV (Multicast) and Akka?

by Kai Wähner at March 06, 2015 12:52 PM

Class path error using spark-submit

I've built a fat jar using version 2.10.4 of Scala, but it'll be running on Amazon's EMR, which has Scala 2.11.1.

When I copy the jar (created using the assembly plugin) onto the EMR cluster and run it with java -jar my.jar, I get the expected output (scopt, the command line parser, tells me that there are missing arguments).

When I run it using scala my.jar I get the same thing. This is the same if I run the jar on the master or the slave nodes.

However, when I run it using spark-submit my.jar I get an error:

Exception in thread "main" java.lang.NoSuchMethodError: scopt.Read$.seqRead(Lscopt/Read;)Lscopt/Read;

So for some reason, using spark-submit, it can't find scopt, even if I pass --master local.

What am I missing here?

by jbrown at March 06, 2015 12:37 PM

Cannot get uTest to see my tests

I'm trying to get uTest to work with ScalaJS and SBT. SBT is compiling the files, and uTest is running, but it simply ignores my tests. Try as I might I cannot find any difference between my code and the tutorial examples.

build.sbt:

enablePlugins(ScalaJSPlugin)
name := "Scala.js Stuff"
scalaVersion := "2.11.5" // or any other Scala version >= 2.10.2
scalaJSStage in Global := FastOptStage
libraryDependencies += "com.lihaoyi" %% "utest" % "0.3.0"
testFrameworks += new TestFramework("utest.runner.Framework")

src/test/scala/com/mysite/jovian/GeometryTest.scala:

package com.mysite.jovian
import utest._
object GeometryTest extends TestSuite {
  def tests = TestSuite { 
      'addPoints {
        val p: Point = new Point(3,4)
        val q: Point = new Point(4,3)
        val expected: Point = new Point(8,8)
        assert(p.plus(q).equals(expected))
        throw new Exception("foo") 
    }
    'fail {
        assert(1==2)
    }
  }
}

Output:

> reload
[info] Loading project definition from /Users/me/Dropbox (Personal)/mysite/flocks/project
[info] Set current project to Scala.js Stuff (in build file:/Users/me/Dropbox%20(Personal)/mysite/flocks/)
> test
[success] Total time: 1 s, completed Mar 6, 2015 7:01:41 AM
> test-only -- com.mysite.jovian.GeometryTest
[info] Passed: Total 0, Failed 0, Errors 0, Passed 0
[info] No tests to run for test:testOnly
[success] Total time: 1 s, completed Mar 6, 2015 7:01:49 AM

If I introduce a syntax error, sbt test does see it:

> test
[info] Compiling 1 Scala source to /Users/me/Dropbox (Personal)/mysite/flocks/target/scala-2.11/test-classes...
[error] /Users/me/Dropbox (Personal)/mysite/flocks/src/test/scala/com/mysite/jovian/GeometryTest.scala:21: not found: value blablablablabla
[error]   blablablablabla
[error]   ^
[error] one error found
[error] (test:compile) Compilation failed
[error] Total time: 1 s, completed Mar 6, 2015 7:03:54 AM

So it's definitely seeing the code, it just doesn't seem to think that "tests" contains any tests.

Otherwise, in the non-test code, SBT+ScalaJS seems to be working fine...

Thanks for any help, I am mystified.

by Benjamin Rosenbaum at March 06, 2015 12:37 PM

CompsciOverflow

Suurballe's Algorithm: Proof of Correctness

I was reading about Suurballe's algorithm on Wikipedia, for the shortest edge-disjoint paths problem, i.e. given nodes $s$ and $t$ finding a pair of paths between these nodes, whose accumulated weight is minimal.

I understand why the output consists of two $s$-$t$ paths, and the relationship between this and augmenting path flow algorithms on an intuitive level, however I fail to understand why the two paths the algorithm chooses necessarily minimize the weight.

I would appreciate if somebody could explain why the algorithm is correct. I don't have access to Suurballe's original paper and I can't follow the paper by Suurballe and Tarjan.

by Me. at March 06, 2015 12:35 PM

StackOverflow

What Java or Scala libraries provide MOF abstractions?

I'm writing a Scala DSL for modeling. I want to follow partially the OMG standard MOF so I looked for libraries that would implement its abstractions (classes).

So far, I found the eclipse ECore but it's a MOF subset. Also found this Java library (A MOF 2.0 for Java) that seems abandoned (old .jar and dead link to supposed new location):

http://www2.informatik.hu-berlin.de/sam/meta-tools/aMOF2.0forJava/download.html

Can anyone validate this last library? Or indicate others for Scala or Java?

by Filipe Oliveira at March 06, 2015 12:26 PM

/r/osdev

i686-gcc (Windows) producing no output file

Hello all,

I decided to finally give osdeving (on Windows at least) a crack and I'm having a few problems with the environment.

I'm using the precompiled ghost i686 binaries found on the OSDev cross compiler page and got boot.s to assemble; however, when I execute the following command on the basic kernel from the bare-bones page: $TARGET-gcc -c kern.c -o kern.o -std=gnu99 -ffreestanding -O2 -Wall -Wextra

I get no output, nothing. No errors, warnings or .o output file in the directory. I'm absolutely stumped and have no idea why this is happening. Any help would be appreciated.

submitted by Quaker762

March 06, 2015 12:15 PM

StackOverflow

Stackoverflow when starting ring server

I'm trying to start up my ring server (lein ring server-headless), which had been working fine for a long time. This time, however, I'm getting a stack overflow. The tests (lein test) all work fine. I'm not aware of anything that has changed; I even checked out a previously working version of the code. I'm in the dark.

The error I'm getting is :

 lein ring server-headless
Mar 06, 2015 1:02:50 PM com.mchange.v2.log.MLog <clinit>
INFO: MLog clients using java 1.4+ standard logging.
Exception in thread "main" java.lang.StackOverflowError, compiling:(/tmp/form-init8517593759265628340.clj:1:72)
    at clojure.lang.Compiler.load(Compiler.java:7142)
    at clojure.lang.Compiler.loadFile(Compiler.java:7086)
    at clojure.main$load_script.invoke(main.clj:274)
    at clojure.main$init_opt.invoke(main.clj:279)
    at clojure.main$initialize.invoke(main.clj:307)
    at clojure.main$null_opt.invoke(main.clj:342)
    at clojure.main$main.doInvoke(main.clj:420)
    at clojure.lang.RestFn.invoke(RestFn.java:421)
    at clojure.lang.Var.invoke(Var.java:383)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.Var.applyTo(Var.java:700)
    at clojure.main.main(main.java:37)
Caused by: java.lang.StackOverflowError
    at clojure.lang.PersistentHashMap$NodeSeq.create(PersistentHashMap.java:1124)
    at clojure.lang.PersistentHashMap$BitmapIndexedNode.nodeSeq(PersistentHashMap.java:691)
    at clojure.lang.PersistentHashMap.seq(PersistentHashMap.java:215)
    at clojure.lang.RT.seqFrom(RT.java:491)
    at clojure.lang.RT.seq(RT.java:486)
    at clojure.lang.RT.keys(RT.java:514)
    at clojure.lang.APersistentSet.seq(APersistentSet.java:46)
    at clojure.lang.RT.seqFrom(RT.java:491)
    at clojure.lang.RT.seq(RT.java:486)
    at clojure.core$seq.invoke(core.clj:133)
    at clojure.core$set.invoke(core.clj:3782)
    at ns_tracker.dependency$seq_union.invoke(dependency.clj:13)
    at ns_tracker.dependency$transitive$fn__977.invoke(dependency.clj:21)
    at clojure.core.protocols$fn__6074.invoke(protocols.clj:79)
    at clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13)

The last part keeps on repeating.

Maybe it's something to do with circular dependencies?

by Peter at March 06, 2015 12:06 PM

Akka : the proper use of `ask` pattern?

I'm trying to grok Futures and ask pattern in akka.

So, I make two actors, and one asks the other to send back a message. Well, according to Akka's Futures documentation, an actor should ask (?) for a message, and that gives it a Future instance. Then the actor should block (using Await) to get the Future's result.

Well, I never get my future done. Why is that?

Code is:

package head_thrash

import akka.actor._
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

object Main extends App {

  val system = ActorSystem("actors")

  val actor1 = system.actorOf(Props[MyActor], "node_1")
  val actor2 = system.actorOf(Props[MyActor], "node_2")

  actor2 ! "ping_other"

  system.awaitTermination()

  Console.println("Bye!")
}

class MyActor extends Actor with ActorLogging {
  import akka.pattern.ask

  implicit val timeout = Timeout(100.days)

  def receive = {
    case "ping_other" => {
      val selection = context.actorSelection("../node_1")
      log.info("Sending ping to node_1")
      val result = Await.result(selection ? "ping", Duration.Inf) // <-- Blocks here forever!
      log.info("Got result " + result)
    }
    case "ping" => {
      log.info("Sending back pong!")
      sender ! "pong"
    }
  }
}

If I change Duration.Inf to 5.seconds, then the actor waits 5 seconds, reports that my future has timed out (by throwing a TimeoutException), and then the other actor finally replies with the needed message. So, nothing happens asynchronously. Why? :-(

How should I properly implement that pattern? Thanks.
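For reference, a sketch of a non-blocking variant of the same actor (assuming the goal is just to log the reply rather than to use it synchronously): register a callback on the future instead of calling Await.result inside receive, so the actor's thread is never blocked. akka.pattern.pipe would similarly let you forward the reply to an actor as a message.

import akka.actor.{Actor, ActorLogging}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

class MyActor extends Actor with ActorLogging {
  implicit val timeout = Timeout(5.seconds)
  import context.dispatcher // ExecutionContext for the future callback

  def receive = {
    case "ping_other" =>
      val selection = context.actorSelection("../node_1")
      log.info("Sending ping to node_1")
      // Handle the reply asynchronously instead of blocking with Await.result.
      (selection ? "ping").onComplete(result => log.info("Got result " + result))
    case "ping" =>
      log.info("Sending back pong!")
      sender ! "pong"
  }
}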

by head_thrash at March 06, 2015 11:33 AM

Clojure nested map - change value

I have to say I started learning Clojure about two weeks ago, and now I've been stuck on a problem for three full days.

I got a map like this:

{
  :agent1 {:name "Doe" :firstname "John" :state "a" :time "VZ" :team "X"}
  :agent2 {:name "Don" :firstname "Silver" :state "a" :time "VZ" :team "X"}
  :agent3 {:name "Kim" :firstname "Test" :state "B" :time "ZZ" :team "G"}
}

and need to change :team "X" to :team "H". I tried a lot of things like assoc, update-in, etc., but nothing works.

How can I do this? Thank you so much!

by m-arv at March 06, 2015 11:09 AM

Multiplying two numbers by successive sums in F#

Given m and n as integers, I can multiply them by successive sums like this:

    m * n = m + m + m + ... + m (n times) 

So, let's consider the pseudo code below:

    m = ... (first number)
    n = ... (second number)

    Result = 0;
    while (n > 0)
    {
        Result = Result + m;
        n = n - 1;
    }

How can I implement this algorithm in F#, knowing that variables are immutable? To put the question another way, how can I update the variables Result and n across the successive iterations?

Can someone write this algorithm in F#?

Thank you

Footnote: I'm starting to study F# and I'm completely puzzled by functional programming.
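Not F#, but the pattern is the same in any language with recursion; a sketch in Scala of how the mutable loop becomes a recursive helper whose parameters play the role of the updated variables:

// Multiply m by n via repeated addition. Each recursive call corresponds to one
// loop iteration: "n = n - 1; Result = Result + m" becomes the new argument values.
def multiply(m: Int, n: Int): Int = {
  @annotation.tailrec
  def loop(remaining: Int, result: Int): Int =
    if (remaining <= 0) result
    else loop(remaining - 1, result + m)
  loop(n, 0)
}

// multiply(4, 3) == 12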

by Den Tada at March 06, 2015 11:00 AM

Is there a function that takes a list and returns a list of duplicate elements in that list?

Is there a Haskell function that takes a list and returns a list of duplicates/redundant elements in that list?

I'm aware of the nub and nubBy functions, but they remove the duplicates; I would like to keep the dupes and collect them in a list.

by JeanJouX at March 06, 2015 10:53 AM

Planet Theory

TR15-031 | Amplification of One-Way Information Complexity via Codes and Noise Sensitivity | Grigory Yaroslavtsev, Marco Molinaro, David Woodruff

We show a new connection between the information complexity of one-way communication problems under product distributions and a relaxed notion of list-decodable codes. As a consequence, we obtain a characterization of the information complexity of one-way problems under product distributions for any error rate based on covering numbers. This generalizes the characterization via VC dimension for constant error rates given by Kremer, Nisan, and Ron (CCC, 1999). It also provides an exponential improvement in the error rate, yielding tight bounds for a number of problems. In addition, our framework gives a new technique for analyzing the complexity of composition (e.g., XOR and OR) of one-way communication problems, connecting the difficulty of these problems to the noise sensitivity of the composing function. Using this connection, we strengthen the lower bounds obtained by Molinaro, Woodruff and Yaroslavtsev (SODA, 2013) for several problems in the distributed and streaming models, obtaining optimal lower bounds for finding the approximate closest pair of a set of points and the approximate largest entry in a matrix product. Finally, to illustrate the utility and simplicity of our framework, we show how it unifies proofs of existing $1$-way lower bounds for sparse set disjointness, the indexing problem, the greater than function under product distributions, and the gap-Hamming problem under the uniform distribution.

March 06, 2015 10:52 AM

TheoryOverflow

Is there a stand-alone statistical ZK argument with concurrent knowledge extraction?

Is any known construction for an interactive argument of knowledge that

  • is stand-alone statistical zero-knowledge, and
  • allows concurrent knowledge extraction?

This is a weakening of my previous question from 6 months ago, and is something that would presumably be easier to construct.

by Ricky Demer at March 06, 2015 10:51 AM

StackOverflow

Learn programming for CnC lathes

I would like to learn programming for CnC lathes. First, what open-source programmes (similar BobCAD-CAM) would be best? Second, what is the best way to proceed in learning to use the programmes? I look forward to learning the answers to my questions. Thank you!

by Ben Walker at March 06, 2015 10:45 AM

TheoryOverflow

Translation of context-free parsing into SAT

Is there a published algorithm for translating a context-free parsing problem into SAT? That is, an algorithm that translates a context-free grammar and an input string into a set of clauses that is satisfiable iff the input string is well-formed according to the grammar.

by Atamiri at March 06, 2015 10:16 AM

StackOverflow

Partition a sequence of tuple in scala

I often write the same code to partition a sequence of tuples:

def groupBy[A, B](s: Seq[(A, B)]) : Map[A, Seq[B]] = s.groupBy (_._1) map { case (k, values) => (k, values.map(_._2))}

Is there a better way?
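One slightly shorter equivalent is to keep the groupBy and use mapValues to strip the keys out of the grouped tuples; note that mapValues returns a lazy view, so the mapping is re-evaluated on each access:

def groupBy[A, B](s: Seq[(A, B)]): Map[A, Seq[B]] =
  s.groupBy(_._1).mapValues(_.map(_._2))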

by Yann Moisan at March 06, 2015 10:06 AM

Currying n arguments create n functions

I have a function

//normal version
let addTwoParameters x y = 
   x + y

translate to curry version it looks like:

//explicitly curried version
let addTwoParameters x  =      // only one parameter!
   let subFunction y = 
      x + y                    // new function with one param
   subFunction                 // return the subfunction

What when I have a function with 4 arguments like:

let addTwoParameters a b c d = 
       a + b + c + d

What would the explicitly curried version look like?
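Shown in Scala rather than F# (the shape is identical): explicitly currying a four-argument function just nests the take-one-argument, return-a-function step three more times.

// Explicitly curried version of four-argument addition: each function takes one
// argument and returns a function expecting the next one.
val addFourParameters: Int => Int => Int => Int => Int =
  a => b => c => d => a + b + c + d

// Applied one argument at a time: (((addFourParameters 1) 2) 3) 4 = 10
val ten = addFourParameters(1)(2)(3)(4)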

by zero_coding at March 06, 2015 10:02 AM

Haskell - Sum values in a tree

My tree looks like this, a tree which at each node can or cannot have an integer:

data Tree = Empty | Node (Maybe Integer) Tree Tree deriving Show

I want to sum all the values in the tree, not including Nothing values. If the tree is not empty but has only Nothing values, it should just return Nothing; an empty tree is 0. These cases I understand how to handle.

I was thinking a depth-first traversal would be best, or just some basic traversal in general, but I'm struggling with how to implement it elegantly.

treeValues :: Tree -> Maybe Integer

by Teodorico Levoff at March 06, 2015 09:59 AM

Lobsters

OS tag

I propose “os” tag for “Operating systems”.

Here are some stories I think fit: Re-imagining Operating Systems: Xen, Unikernels, and the Library OS, OS Technologies To Watch. Curiously, both are tagged “devops”, which I think is inappropriate.

by sanxiyn at March 06, 2015 09:57 AM

StackOverflow

Displaying comments in scala template using Play Framework with java language

My domain model has only three classes (a simple blog): Article, Parent, and Comment, which are in one-to-many relationships (Article->Parent->Comment). I have problems displaying a tree of comments. My template for displaying comments is as below:

@(article: Article)
@main("Article") {
<article class="article">
    <p>@article.getTitle()</p>
    @article.getContent()
</article>

<a href="@routes.CommentController.createComment(article.getId())" class="view">Add Comment</a>

@for(parentComment <- article.parents){
    <p class="comment">
    @parentComment.getContent()
    </p>
    <a href="@routes.CommentController.createComment(parentComment.getId())" class="view">Reply</a>

    @for(child <- parentComment.comments) {
        <p class="comment">
        @child.getContent()
        </p>
        <a href="@routes.CommentController.createComment(child.getId())" class="view">Reply</a>
    }


}

}

but only the comments for the article (the parent comments) are stored in the database; the child comments of a parent aren't stored in the database.

How can I change the code above to store child comments properly?

I'm really new to Play Framework and Scala; any advice on how to fix the code will be very welcome.

by Andritchi Alexei at March 06, 2015 09:57 AM

QuantOverflow

Foreign exchange - Dealer spreads and order size

Is it true that in foreign exchange markets, dealer spreads are lower for smaller orders and increase for larger orders? This seems counter-intuitive when compared to other markets, where dealer spreads would decrease for larger orders due to economies of scale.

by user2601814 at March 06, 2015 09:51 AM

StackOverflow

Conditional text in jinja2 templates for ansible

I have a playbook which may set a lot of options on the daemon command line. I want to allow setting them all from variables, but at the same time I want to make them all optional.

Now I have a template (j2) with all variables mandatory:

{% for router in flowtools.captures %}
-d {{router.debug_level}} -e {{router.expire_count}} -E {{router.expire_size}} -f {{router.fiter_name}} -F {{router.filter_definition}} -n {{router.rotations}} -N {{router.netsting_level}} -S {{router.start_interval}} -t {{router.tag_name}} -T {{router.active_def}} -V {{pdu_version}} -w {{router.workdir}} -x {{router.xlate_fname}} -z {{router.z_level}}
{% endfor %}

I want:

  • To allow undefined variables in 'router' (without failing a playbook).
  • Not to include option (-z, -b, etc) to the output if related variable is empty.

For the example above, if flowtools.captures[0] contains only debug_level=2 and workdir=/tmp, it should generate:

-d 2 -w /tmp.

I could add a huge list of {% if %}'s, but this would be very bulky. Is it possible to do this gracefully?
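One way to do it without a wall of {% if %} blocks (a sketch, assuming each entry in flowtools.captures is a plain dict so options can be looked up by key): keep a flag-to-key table in the template and loop over it, emitting an option only when its variable is defined.

{% for router in flowtools.captures %}
{%- set flags = [('-d', 'debug_level'), ('-e', 'expire_count'), ('-E', 'expire_size'),
                 ('-f', 'fiter_name'), ('-F', 'filter_definition'), ('-n', 'rotations'),
                 ('-N', 'netsting_level'), ('-S', 'start_interval'), ('-t', 'tag_name'),
                 ('-T', 'active_def'), ('-w', 'workdir'), ('-x', 'xlate_fname'),
                 ('-z', 'z_level')] -%}
{% for flag, key in flags if router[key] is defined %}{{ flag }} {{ router[key] }} {% endfor %}
{% endfor %}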

by George Shuklin at March 06, 2015 09:51 AM

StackOverflow

Failed to Install fluentd redshift plugin on ubuntu 14.04.1 Can't find the PostgreSQL client library (libpq)

It's a brand new AWS ubuntu 14.04.1 VM.

After launching it, I did the following to try to install the fluentd redshift plugin.

sudo su 
curl -L http://toolbelt.treasuredata.com/sh/install-ubuntu-trusty-td-agent2.sh | sh 
apt-get install libpq-dev
apt-get install ruby ruby-dev
apt-get install make
gem install pg
/usr/sbin/td-agent-gem install fluent-plugin-redshift

Should be easily reproducible. Please help. Thanks a lot.

root@ip-172-30-0-131:/home/ubuntu# sudo /usr/sbin/td-agent-gem install fluent-plugin-redshift
sudo: unable to resolve host ip-172-30-0-131
Building native extensions.  This could take a while...
ERROR:  Error installing fluent-plugin-redshift:
    ERROR: Failed to build gem native extension.

    /opt/td-agent/embedded/bin/ruby extconf.rb
checking for pg_config... yes
Using config values from /usr/bin/pg_config
checking for libpq-fe.h... yes
checking for libpq/libpq-fs.h... yes
checking for pg_config_manual.h... yes
checking for PQconnectdb() in -lpq... no
checking for PQconnectdb() in -llibpq... no
checking for PQconnectdb() in -lms/libpq... no
Can't find the PostgreSQL client library (libpq)
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers.  Check the mkmf.log file for more details.  You may
need configuration options.

Provided configuration options:
    --with-opt-dir
    --with-opt-include
    --without-opt-include=${opt-dir}/include
    --with-opt-lib
    --without-opt-lib=${opt-dir}/lib
    --with-make-prog
    --without-make-prog
    --srcdir=.
    --curdir
    --ruby=/opt/td-agent/embedded/bin/ruby
    --with-pg
    --without-pg
    --with-pg-config
    --without-pg-config
    --with-pg_config
    --without-pg_config
    --with-pg-dir
    --without-pg-dir
    --with-pg-include
    --without-pg-include=${pg-dir}/include
    --with-pg-lib
    --without-pg-lib=${pg-dir}/lib
    --with-pqlib
    --without-pqlib
    --with-libpqlib
    --without-libpqlib
    --with-ms/libpqlib
    --without-ms/libpqlib

extconf failed, exit code 1

Gem files will remain installed in /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/pg-0.17.2.pre.546 for inspection.
Results logged to /opt/td-agent/embedded/lib/ruby/gems/2.1.0/extensions/x86_64-linux/2.1.0/pg-0.17.2.pre.546/gem_make.out

The mkmf.log file has the following content:

find_executable: checking for pg_config... -------------------- yes

--------------------

find_header: checking for libpq-fe.h... -------------------- yes

"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/usr/lib -Wl,-R/usr/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby  -lpthread -ldl -lcrypt -lm   -lc"
checked program was:
/* begin */
1: #include "ruby.h"
2: 
3: int main(int argc, char **argv)
4: {
5:   return 0;
6: }
/* end */

"gcc -E -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC  conftest.c -o conftest.i"
checked program was:
/* begin */
1: #include "ruby.h"
2: 
3: #include <libpq-fe.h>
/* end */

--------------------

find_header: checking for libpq/libpq-fs.h... -------------------- yes

"gcc -E -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC  conftest.c -o conftest.i"
checked program was:
/* begin */
1: #include "ruby.h"
2: 
3: #include <libpq/libpq-fs.h>
/* end */

--------------------

find_header: checking for pg_config_manual.h... -------------------- yes

"gcc -E -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC  conftest.c -o conftest.i"
checked program was:
/* begin */
1: #include "ruby.h"
2: 
3: #include <pg_config_manual.h>
/* end */

--------------------

have_library: checking for PQconnectdb() in -lpq... -------------------- no

"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/usr/lib -Wl,-R/usr/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby -lpq  -lpthread -ldl -lcrypt -lm   -lc"
/usr/lib/libpq.so: undefined reference to `SSL_CTX_use_certificate_chain_file@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_write@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_set_fd@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_use_PrivateKey_file@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `CRYPTO_set_locking_callback@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `X509_NAME_get_text_by_NID@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_connect@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `ENGINE_init@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `X509_STORE_load_locations@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_CTX_get_cert_store@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_ctrl@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_free@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_library_init@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_CTX_ctrl@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `ERR_get_error@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_pending@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `ENGINE_free@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `X509_get_subject_name@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_use_certificate_file@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_check_private_key@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_load_error_strings@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `ENGINE_by_id@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_get_peer_certificate@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_CTX_new@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `CRYPTO_num_locks@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `ENGINE_load_private_key@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `ENGINE_finish@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_set_verify@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `X509_free@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `CRYPTO_set_id_callback@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_get_error@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_new@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_shutdown@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_use_PrivateKey@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `TLSv1_method@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `X509_STORE_set_flags@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_read@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `OPENSSL_config@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_CTX_load_verify_locations@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `ERR_reason_error_string@OPENSSL_1.0.0'
/usr/lib/libpq.so: undefined reference to `SSL_set_ex_data@OPENSSL_1.0.0'
collect2: error: ld returned 1 exit status
checked program was:
/* begin */
 1: #include "ruby.h"
 2: 
 3: #include <libpq-fe.h>
 4: 
 5: /*top*/
 6: extern int t(void);
 7: int main(int argc, char **argv)
 8: {
 9:   if (argc > 1000000) {
10:     printf("%p", &t);
11:   }
12: 
13:   return 0;
14: }
15: int t(void) { void ((*volatile p)()); p = (void ((*)()))PQconnectdb; return 0; }
/* end */

"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/usr/lib -Wl,-R/usr/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby -lpq  -lpthread -ldl -lcrypt -lm   -lc"
conftest.c: In function ‘t’:
conftest.c:15:1: error: too few arguments to function ‘PQconnectdb’
 int t(void) { PQconnectdb(); return 0; }
 ^
In file included from conftest.c:3:0:
/usr/include/postgresql/libpq-fe.h:250:16: note: declared here
 extern PGconn *PQconnectdb(const char *conninfo);
                ^
checked program was:
/* begin */
 1: #include "ruby.h"
 2: 
 3: #include <libpq-fe.h>
 4: 
 5: /*top*/
 6: extern int t(void);
 7: int main(int argc, char **argv)
 8: {
 9:   if (argc > 1000000) {
10:     printf("%p", &t);
11:   }
12: 
13:   return 0;
14: }
15: int t(void) { PQconnectdb(); return 0; }
/* end */

--------------------

have_library: checking for PQconnectdb() in -llibpq... -------------------- no

"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/usr/lib -Wl,-R/usr/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby -llibpq  -lpthread -ldl -lcrypt -lm   -lc"
/usr/bin/ld: cannot find -llibpq
collect2: error: ld returned 1 exit status
checked program was:
/* begin */
 1: #include "ruby.h"
 2: 
 3: #include <libpq-fe.h>
 4: 
 5: /*top*/
 6: extern int t(void);
 7: int main(int argc, char **argv)
 8: {
 9:   if (argc > 1000000) {
10:     printf("%p", &t);
11:   }
12: 
13:   return 0;
14: }
15: int t(void) { void ((*volatile p)()); p = (void ((*)()))PQconnectdb; return 0; }
/* end */

"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/usr/lib -Wl,-R/usr/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby -llibpq  -lpthread -ldl -lcrypt -lm   -lc"
conftest.c: In function ‘t’:
conftest.c:15:1: error: too few arguments to function ‘PQconnectdb’
 int t(void) { PQconnectdb(); return 0; }
 ^
In file included from conftest.c:3:0:
/usr/include/postgresql/libpq-fe.h:250:16: note: declared here
 extern PGconn *PQconnectdb(const char *conninfo);
                ^
checked program was:
/* begin */
 1: #include "ruby.h"
 2: 
 3: #include <libpq-fe.h>
 4: 
 5: /*top*/
 6: extern int t(void);
 7: int main(int argc, char **argv)
 8: {
 9:   if (argc > 1000000) {
10:     printf("%p", &t);
11:   }
12: 
13:   return 0;
14: }
15: int t(void) { PQconnectdb(); return 0; }
/* end */

--------------------

have_library: checking for PQconnectdb() in -lms/libpq... -------------------- no

"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/usr/lib -Wl,-R/usr/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby -lms/libpq  -lpthread -ldl -lcrypt -lm   -lc"
/usr/bin/ld: cannot find -lms/libpq
collect2: error: ld returned 1 exit status
checked program was:
/* begin */
 1: #include "ruby.h"
 2: 
 3: #include <libpq-fe.h>
 4: 
 5: /*top*/
 6: extern int t(void);
 7: int main(int argc, char **argv)
 8: {
 9:   if (argc > 1000000) {
10:     printf("%p", &t);
11:   }
12: 
13:   return 0;
14: }
15: int t(void) { void ((*volatile p)()); p = (void ((*)()))PQconnectdb; return 0; }
/* end */

"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/usr/include/postgresql  -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/usr/lib -Wl,-R/usr/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby -lms/libpq  -lpthread -ldl -lcrypt -lm   -lc"
conftest.c: In function ‘t’:
conftest.c:15:1: error: too few arguments to function ‘PQconnectdb’
 int t(void) { PQconnectdb(); return 0; }
 ^
In file included from conftest.c:3:0:
/usr/include/postgresql/libpq-fe.h:250:16: note: declared here
 extern PGconn *PQconnectdb(const char *conninfo);
                ^
checked program was:
/* begin */
 1: #include "ruby.h"
 2: 
 3: #include <libpq-fe.h>
 4: 
 5: /*top*/
 6: extern int t(void);
 7: int main(int argc, char **argv)
 8: {
 9:   if (argc > 1000000) {
10:     printf("%p", &t);
11:   }
12: 
13:   return 0;
14: }
15: int t(void) { PQconnectdb(); return 0; }
/* end */

--------------------

by xigua at March 06, 2015 09:38 AM

CompsciOverflow

Is it possible to convert a graph with one negative capacity to a graph with only positive capacities?

I am interested in whether a graph (say, a complete graph) with one negative capacity (or many, but one should suffice) can be reconstructed as a graph with all non-negative capacities, where the max flow between vertices of the original graph can be identified with max flows between vertices of the new graph. It is OK for the new graph to have more vertices than the original graph. Said another way, consider a complete graph K_n with additional external edges attached to sources/sinks. Allow one of the edges of the complete graph to have negative capacity. Can I replace the complete graph with another graph with only positive capacities such that the max flow between all possible subsets of the external sources/sinks remains the same?

CLARIFICATION: We connect each vertex of the complete graph to a distinct external vertex with an edge. If it helps, picture a complete K_6 drawn as a hexagon with an additional edge coming out of each vertex of the hexagon to 6 extra distinct vertices. Those distinct vertices are the s and t's that we are interested in considering. The edges that are not edges of the complete graph that we add are the external edges. I am interested only in cases where the original graph is the complete graph, though edges are allowed to have weight 0 or weight infinity. I hope this helps.

Thanks, Ning

by Ning Bao at March 06, 2015 09:38 AM

StackOverflow

Change war file name in sbt 0.11.2

I'm using sbt 0.11.2 and the xsbt web plugin for a web project (which is multi-module). I'm trying to change the war file name generated by sbt. It contains the version, which I would like not to include.

I tried overriding several keys without luck

lazy val admin = Project("admin", file("admin"),
    settings = baseSettings ++ webSettings ++ jettySettings ++ Seq(
      name := "admin",
      moduleName := "my-admin",

...

I'd appreciate it if someone could show me how to change the war file name.

Thanks

by Dimuthu at March 06, 2015 09:36 AM

TheoryOverflow

What is the time complexity of clustering sets without/with MinHash algorithm?

If we have $N$ sets that should be clustered according to their Jaccard similarity, what is the time complexity of this clustering task without/with the MinHash algorithm?

However, for this clustering problem, the time complexity of both the naive way and MinHash is poorly documented. I would like to know if there is any good reference for their time complexity, so I can do a better comparison with other solutions.

by shihpeng at March 06, 2015 09:34 AM

QuantOverflow

How to price exotic options using Monte-Carlo?

I am trying to solve some exercise problems using Monte Carlo and C++ for exotic options. Namely, the exotic options are geometric Asian options and a discrete barrier option.

It is claimed that using log values would enable accurate pricing with "fewer approximations" and thus reduce the time required for computing.

I have tried to look all over the place for some hint, but failed to find one.

by user2448864 at March 06, 2015 09:30 AM

TheoryOverflow

Many-one reduction from inequality problem to equality problem

Let the k-inequality-MIS problem be the decision problem whether an arbitrary graph $G=(V, E)$ contains a maximal independent set of at least size $k$, that is the corresponding language is:

$$\mathbf{M}_{ineq} = \{(G,k): \exists V'\subset V, |V'|\geq k \wedge V'\in MIS(G)\},$$

where $MIS(G)$ collects all possible maximal independent sets of $G$.

Let the k-equality-MIS problem be the decision problem whether $G$ contains such a set exactly of size $k$. Its language being:

$$\mathbf{M}_{eq} = \{(G,k): \exists V'\subset V, |V'|=k \wedge V'\in MIS(G) \}$$

I'm wondering whether $\mathbf{M}_{eq}$ is many-one reducible to $\mathbf{M}_{ineq}$. My intuition tells me it should be but I cannot get it working. As I suspect similar problems pop up quite often, I figured someone here might know.

Note that this is not as trivial as for the corresponding independent set problem, $\mathbf{I}$, where $(G,k)\in\mathbf{I}_{ineq}\Leftrightarrow (G,k)\in\mathbf{I}_{eq}$. As an example, for a graph $G'$ that is a perfect matching on more than two vertices we have $(G',1)\in\mathbf{M}_{ineq}$ but $(G',1)\not\in\mathbf{M}_{eq}$.

And how about the other way around, is $\mathbf{M}_{ineq}$ many-one reducible to $\mathbf{M}_{eq}$?

I'm quite new to complexity theory, so I hope I get this right and the question is not obvious. Also sorry for the rather vague title, but my question is a bit vague. I hope my example made it clear. My question is, however, really about all problems with similar characteristics.

by Fredrik Savje at March 06, 2015 09:06 AM

StackOverflow

Test actors creation in akka hookers (preStart)

I want to test that preStart() creates the right actor tree (correct me if I have chosen the wrong place to create the actor tree).

class Central extends Actor { 

  var summer : ActorRef = _

  override def preStart() = {
    val printerProps = Props[Printer]
    val printer = context.actorOf(printerProps)
    val summerProps = Props(new Summer(printer))
    summer = context.actorOf(summerProps)
  }

  override def receive = {
    case msg =>
  }
}

For full picture:

class Printer extends Actor {
  override def receive = {
    case msg => println(msg)
  }
}

class Summer(printer: ActorRef) extends Actor {
  override def receive = {
    case (x: Int, y: Int) =>
      printer ! x + y
  }
}

Any idea how to write a clean test for this?

This answer http://stackoverflow.com/a/18877040/1768378 is close to what I'm looking for, but I think that changing the code just for testing reasons is a bad idea.

Maybe someone knows a better solution.
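
One common workaround, sketched below (TestableCentral and the injected Props parameters are my own additions, not the original code, and it assumes akka-testkit on the classpath): inject the child Props so that a TestProbe can stand in for Printer, which makes the wiring created in preStart observable. It does mean touching the production code, so it may not be what you want.

import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import akka.testkit.TestProbe

// Hypothetical variant of Central that takes factories for its children.
class TestableCentral(printerProps: Props, summerProps: ActorRef => Props) extends Actor {
  var summer: ActorRef = _
  override def preStart() = {
    val printer = context.actorOf(printerProps)
    summer = context.actorOf(summerProps(printer))
  }
  override def receive = { case msg => summer ! msg }
}

implicit val system = ActorSystem("test")
val printerProbe = TestProbe()
// Every "printer" created by the tree forwards to the probe, so the test can observe it.
val central = system.actorOf(Props(new TestableCentral(
  Props(new Actor { override def receive = { case m => printerProbe.ref ! m } }),
  printer => Props(new Summer(printer)))))

central ! ((2, 3))
printerProbe.expectMsg(5) // Summer really was created and wired to the injected printer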

by vadim_shb at March 06, 2015 09:03 AM

QuantOverflow

What does tradable asset mean?

I see a lot of theorems related to tradable assets in quantitative finance text books.

What is a tradable asset? What does 'tradable' mean exactly? Does it simply mean the asset can be bought and sold in the market? Any example of non-tradable asset?

by tcquant at March 06, 2015 09:01 AM

TheoryOverflow

$k$-clique in $k$-partite graph

Is the decision whether a $k$-clique exists in a $k$-partite graph NP-hard?

I have found only a very limited number of references on this problem, and they seem to be concerned with heuristics to enumerate the cliques (in particular k-cliques in k-partite graphs). On complexity, they only comment that the max-clique problem is generally hard, but nothing on the specific case.

Note: This is an edit of my earlier question, which asked whether the max-clique problem in a $k$-partite graph is NP-hard, with $k$ being part of the input. As Austin pointed out in the comments, it is easy to see that the answer is trivially yes by a reduction from the general max-clique problem; any graph $G$ on $n$ vertices can be considered $n$-partite. The new question, however, is more specific and a reduction does not seem so obvious. For example (and contrary to the original question), for $k=n$ one can easily check if an $n$-partite graph contains/is an $n$-clique. What about general $k$?

by m.a. at March 06, 2015 08:41 AM

StackOverflow

Skip Ansible task when running in check mode?

I'm writing an Ansible playbook and have a task which will always fail in check mode:

- hosts: ...
  tasks:
    - set_fact: filename="{{ansible_date_time.iso8601}}"
    - file: state=touch name={{filename}}
    - file: state=link src={{filename}} dest=latest

In check mode, the file will not be created so the link task will always fail. Is there a way to mark such a task to be skipped when running in check mode? Something like:

- file: state=link src={{filename}} dest=latest
  when: not check_mode

by augurar at March 06, 2015 08:40 AM

/r/netsec

StackOverflow

How to define <*> for Option[List[_]] in Scala

This is a followup to my previous question with an example found on the Internet.

Suppose I define a typeclass Applicative as follows:

trait Functor[T[_]]{
  def map[A,B](f:A=>B, ta:T[A]):T[B]
}

trait Applicative[T[_]] extends Functor[T] {
  def unit[A](a:A):T[A]
  def ap[A,B](tf:T[A=>B], ta:T[A]):T[B]
}

I can define an instance of Applicative for List

object AppList extends Applicative[List] {
  def map[A,B](f:A=>B, as:List[A]) = as.map(f)
  def unit[A](a: A) = List(a)
  def ap[A,B](fs:List[A=>B], as:List[A]) = for(f <- fs; a <- as) yield f(a)
}

For convenience I can define an implicit conversion to add a method <*> to List[A=>B]

implicit def toApplicative[A, B](fs: List[A=>B]) = new {
  def <*>(as: List[A]) = AppList.ap(fs, as)
}

Now I can do a cool thing: combine two List[String] values and apply f2 to every pair, in applicative style.

val f2: (String, String) => String = {(first, last) => s"$first $last"}
val firsts = List("a", "b", "c")
val lasts  = List("x", "y", "z")

scala> AppList.unit(f2.curried) <*> firsts <*> lasts
res31: List[String] = List(a x, a y, a z, b x, b y, b z, c x, c y, c z)

So far, so good but now I have:

val firstsOpt = Some(firsts)
val lastsOpt  = Some(lasts) 

I would like to combine firstsOpt and lastsOpt, apply f2, and get an Option[List[String]] in applicative style. In other words, I need <*> for Option[List[_]]. How can I do it?
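
One direction that seems to work (just a sketch reusing the Applicative and AppList definitions above; AppOption, OptList and AppOptList are names I am introducing): applicatives compose, so Option[List[_]] is itself an applicative whose unit and ap are built from the two underlying instances.

object AppOption extends Applicative[Option] {
  def map[A, B](f: A => B, oa: Option[A]): Option[B] = oa.map(f)
  def unit[A](a: A): Option[A] = Some(a)
  def ap[A, B](of: Option[A => B], oa: Option[A]): Option[B] =
    for (f <- of; a <- oa) yield f(a)
}

type OptList[A] = Option[List[A]]

object AppOptList extends Applicative[OptList] {
  def map[A, B](f: A => B, ola: OptList[A]): OptList[B] = ola.map(la => AppList.map(f, la))
  def unit[A](a: A): OptList[A] = AppOption.unit(AppList.unit(a))
  def ap[A, B](olf: OptList[A => B], ola: OptList[A]): OptList[B] =
    AppOption.ap(AppOption.map((lf: List[A => B]) => (la: List[A]) => AppList.ap(lf, la), olf), ola)
}

implicit def toOptListApplicative[A, B](olf: Option[List[A => B]]) = new {
  def <*>(ola: Option[List[A]]): Option[List[B]] = AppOptList.ap(olf, ola)
}

// AppOptList.unit(f2.curried) <*> firstsOpt <*> lastsOpt
// res: Option[List[String]] = Some(List(a x, a y, a z, b x, b y, b z, c x, c y, c z))

A fully generic composed instance (for any nesting T[U[_]]) is possible along the same lines, but the concrete composition above is enough for Option[List[_]].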

by Michael at March 06, 2015 08:32 AM

Lobsters

Planet Clojure

From Pedestal routes to beautiful documentation

If you're doing server-side Clojure I'm sure you know and use Prismatic/schema. If you don't, you probably should. At Juxt we love it and we use it extensively. The performance hit is not negligible, but you can always turn checks off for your production environment while reaping the benefits in dev and testing.

We tacitly agreed to always have a schema whenever we are doing I/O. Schema checks on the way in make user input validation and coercion to known types much easier and more declarative, and while the error reporting is not user friendly out of the box it doesn't take much effort to customise it.

Server responses are normally made out of data aggregated from various sources: request parameters, user session, databases, external services, you name it. Because Clojure makes working with data so easy, we found it's not uncommon to leave in the response some unnecessary information assoc'd deep down a nested map. Or to forget to convert all map keys to a uniform case (which one do you use in your JSONs: snake_case or kebab-case?). Schema validation on our way out ensures we don't break the data contract with our client apps.

So when the time came to write extensive documentation for all our server endpoints we thought: why not just generate it from our well-defined schemas? That must have been the same thinking the guys from Metosin had when they wrote ring-swagger, a library that converts schemas to a JSON spec understood by Swagger. I leave it to you to find out more about Swagger, but just know that you can easily generate documentation for your server that looks like this.

Two of the most popular web frameworks already have specific libraries that convert annotated endpoints to swagger schemas: Compojure and fnhouse. I decided to write a library that takes care of Pedestal too.

Embrace the interceptor

One thing particularly interesting about pedestal-swagger is that it leverages Pedestal's interceptor mental model for documentation. With Pedestal it is common to split your endpoint logic into reusable interceptors. They look a lot like middlewares except they are path specific. For example if you have a resource that offers GET, PUT and DELETE methods you usually want to check that the resource exists first and load it up in memory, or return a 404 if it doesn't. Or you want to protect a path with some kind of authentication by placing an interceptor at its root and returning 403 if forbidden. (Example taken from my previous post)

```clj
(defbefore load-order-from-db
  [{:keys [request] :as context}]
  (if-let [order (get-order-from-db request)]
    (assoc-in context [:request :order] order)
    (-> context terminate (assoc-in [:response] {:status 404}))))

(defbefore basic-auth
  [{:keys [request] :as context}]
  (if-let [{:keys [username password] :as auth} (check-basic-auth request)]
    (assoc-in context [:request :user] auth)
    (-> context terminate (assoc-in [:response] {:status 403}))))

(defroutes routes
  [[["/orders/:id" ^:interceptors [load-order-from-db]
     {:get get-order
      :delete delete-order
      :put update-order}]
    ["/secure" ^:interceptors [basic-auth]
     ["/path1" {:get secure-1}]
     ["/path2" {:delete secure-2}]]]])
```

With pedestal-swagger you can attach metadata to single interceptors so both the documentation and the schemas are inherited by all the endpoints under that path.

```clj
(swagger/defbefore load-order-from-db
  {:parameters {:path {:id schema/Int}}
   :responses {404 {:description "Not found"}}}
  [{:keys [request] :as context}]
  (if-let [order (get-order-from-db request)]
    (assoc-in context [:request :order] order)
    (-> context terminate (assoc-in [:response] {:status 404}))))

(swagger/defbefore basic-auth
  {:description "Requires Basic Auth"
   :parameters {:header {:basic-auth schema/Str}}
   :responses {403 {:description "Forbidden"}}}
  [{:keys [request] :as context}]
  (if-let [{:keys [username password] :as auth} (check-basic-auth request)]
    (assoc-in context [:request :user] auth)
    (-> context terminate (assoc-in [:response] {:status 403}))))
```

That ensures that the behaviour and its documentation sit close together. When you change one you don't need to wander around your code for occurrences of the other.

Another cool feature interceptors enable is flexible coercion and validation logic. Instead of rebinding some dynamic vars, coercion and validation are provided as two first-class interceptors (swagger/coerce-params and swagger/validate-reponse) that can be included in your routes. If you don't like their default behaviour you can simply roll your own and place them at the root of your application. Or you can have different behaviours on different paths. And you can decide whether to include them in your test routes or not.

Data driven or go home

I'm afraid readers will think I'm talking about some other language if I don't mention data at least once. Yes this is Clojure. Yes let's talk about data.

Pedestal is data driven. The configuration needed to create a service, server or servlet is contained in one map. Because of this decision, extending Pedestal is infinitely easier than something like Compojure. All pedestal-swagger has to do is walk through that map to collect the information it needs and massage it into a format Swagger understands. Compare this to the pain of converting a macro dsl to another macro dsl while inspecting opaque functions. Not as much fun in my opinion. (Big props to compojure-api for unlocking that achievement nonetheless.)

Having the configuration all in one place gives you a good hint about where to store your additional layer on top of it. An earlier implementation of pedestal-swagger stored the computed result in a separate var that was overridden at compile time. That plan backfired pretty soon: tests were hard to write, reloading the repl yielded odd results, and the library was harder to include safely in other projects.

The solution to avoid place oriented programming is to store all the configuration in one place: in this case, the route table. Because the interceptors are passed the route table as an argument I was able to retrieve the generated documentation and use it for display/coercion/validation.

Things that might explode

Now, the Swagger 2.0 release was a complicated one. It was first announced 6 months ago, but related tools like Swagger UI and Swagger Codegen are only now starting to look useful. The guys at Metosin have done a great job in keeping their library up to date with the latest swagger updates, but bear in mind: when you're using pedestal-swagger you're using beta code that depends on beta code that depends on beta code. Do your own maths.

That being said the library seems to work well and every tool in the chain is pretty extensible if you like to spend some time with it. I appreciate your feedback.

by Frankie Sardo at March 06, 2015 08:30 AM

CompsciOverflow

Skolemization with multiple arguments -- how to unify

I have two statements to put in FOPC: "Everybody who has a house pays utilities" and "Jane and Mark have a house." I encode them as follows.

"Everybody who has a house pays utilities"

forall x: has(x,skolem1(x)) ^ house(skolem1(x)) -> paysUtils(x), or

~has(x, skolem1(x)) v ~house(skolem1(x)) v paysUtils(x)

"Jane and Mark have a house"

has(Jane,skolem2(Jane,Mark)) ^ has(Mark,skolem2(Jane,Mark)) ^ house(skolem2(Jane,Mark)), or

has(Jane,skolem2(Jane,Mark))
has(Mark,skolem2(Jane,Mark))
house(skolem2(Jane,Mark))

I want to use resolution to prove Jane pays utilities. The problem is that skolem1 has 1 argument and skolem2 has 2 arguments, so they don't unify.

I'm not sure if this matters -- could I just refer to skolem1 and skolem2 and forget the arguments entirely? If it does matter, how do I resolve the problem so that skolem1 and skolem2 can unify, and resolution can work?

by Alpha Ralpha Boulevarde at March 06, 2015 08:14 AM

StackOverflow

Play framework stack trace logging and logging configuration

With play framework's default logger, I get full exceptions' stack traces in play's application log, and an abbreviated version of the same stack traces in the console. I have three basic questions about the logging:

  1. In one case, I feel there may be some information missing. What does something like "... 3 common frames omitted" actually mean, and can I somehow configure logging to see those omitted frames?

  2. How do I control whether the console shows a full stack trace vs. a partial one vs. just the exception message without a stack trace?

  3. Can I replace the default logger with any of the following ones, or should I use them only for my own logging and keep the framework logging as is? Will switching to those loggers break the non-blocking nature of Play? The candidates: scala-logging, log4s, zero-log.

Hopefully, with your answer, I can focus more on my application and less so on logging infrastructure, so thanks in advance!

by matt at March 06, 2015 08:09 AM

CompsciOverflow

Sorting tuples with respect to multiple criteria

We get $n$ rows with $k$ columns of numbers and sorting criteria of the form "maximize/minimize column $i$".

Is there an efficient way of finding an optimal row?

To exemplify, with $n=3$ and $k=3$:

[1,4,2]
[2,3,5]
[1,4,3]

The program is told to optimise the set of goals $\max!\ 0$ and $\min!\ 1$; the third column can be ignored.

The rows would then have order [2,3,5], [1,4,2], [1,4,3] or [2,3,5], [1,4,3], [1,4,2]. When two solutions are equally good like this, either the weights should propagate down, or nondeterminism can occur.

Since there may not be a unique best order -- consider e.g. instance [3,3,_], [2,2,_], [1,1,_] with the same criteria as above -- conflicts are resolved through strict priorities.
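
If the criteria are resolved by strict priorities, one workable reading (a Scala sketch using the columns from the example above) is a lexicographic key over the prioritised columns, negating the maximised ones; a single optimal row then falls out of one linear scan.

val rows = List(Vector(1, 4, 2), Vector(2, 3, 5), Vector(1, 4, 3))

// Lexicographic key: maximize column 0 (negate it), then minimize column 1.
val ordered = rows.sortBy(r => (-r(0), r(1)))
// ordered == List(Vector(2, 3, 5), Vector(1, 4, 2), Vector(1, 4, 3))

val best = rows.minBy(r => (-r(0), r(1)))   // one O(n) pass for a single optimal row
// best == Vector(2, 3, 5)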

by A T at March 06, 2015 08:09 AM

Algorithm to decide the Kleene Star of a Language A

Assume $f$ decides a language $A$ in $O(g(n))$ time, where $n$ is the length of the input string. How can one write a recursive algorithm to decide $A^*$?

Moreover, can an $O(n^2g(n))$ time algorithm be written to decide $A^*$ (dynamic programming)?

Yes, the dynamic programming portion is a homework assignment. The CLRS book recommends using recursion first, then evolving to memoization, then the bottom-up approach. I can't even see how to write a recursive algorithm to solve this.
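
For what it's worth, the usual recursion is: a string is in $A^*$ iff it is empty, or some non-empty prefix is in $A$ and the remaining suffix is in $A^*$. Memoising over prefix lengths turns this into the $O(n^2 g(n))$ bound, since there are $O(n^2)$ (start, end) pairs and each costs one call to the decider. A sketch (in Scala, with decideA standing in for the given decider $f$):

// canSplit(i) is true iff the prefix s[0, i) belongs to A*.
def inAStar(s: String, decideA: String => Boolean): Boolean = {
  val n = s.length
  val canSplit = Array.fill(n + 1)(false)
  canSplit(0) = true                              // the empty string is in A*
  for (i <- 1 to n; j <- 0 until i)
    if (canSplit(j) && decideA(s.substring(j, i)))
      canSplit(i) = true
  canSplit(n)                                     // O(n^2) calls to decideA => O(n^2 g(n))
}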

by intdt at March 06, 2015 07:34 AM

StackOverflow

How to use trait to add new method to class in Scala?

I have a 3rd-party class A:

class A {
  def methodA = ...
}

I want to use a trait to add a new method methodT to an instance of A:

trait Atrait[...] {
  def methodT = {
    // how to get a reference of instance of type A?
  }
}

This methodT is specific to some situation, so I should use a constraint in the trait, but I could not figure it out. Also, how can I invoke a method of an instance of A from within the trait?

UPDATE

Traits don't work this way. See the answer for an alternative solution.
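
For reference, the alternative usually suggested (a sketch; Enrichment and RichA are my own names, and it assumes the third-party class A above): since a trait cannot be mixed into an already-constructed instance, use an implicit wrapper class instead.

object Enrichment {
  // Wraps an existing A and adds methodT; implicit classes need Scala 2.10+.
  implicit class RichA(val underlying: A) extends AnyVal {
    def methodT: String = s"enriched: ${underlying.methodA}"
  }
}

import Enrichment._
// new A().methodT   // compiles: the call is rewritten to new RichA(new A()).methodT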

by davidshen84 at March 06, 2015 07:30 AM

CompsciOverflow

Find K-th next element in O(log n) time

We want to store a set S of distinct positive integers and support the following operations on S:

Insert(a): insert integer a ∉ S into S.

Delete(a): delete integer a ∈ S from S.

FindNext(a, k), where a ∈ S and k is an integer: if S consists of some elements a1 < a2 < · · · < an, and a = ai (for some i, 1 ≤ i ≤ n), then return ai+k (for simplicity, assume that i + k ≤ n).

The worst-case time complexity of each operation should be O(log n) where n is the size of S. An augmented data structure can be used (without any modification) “as a black box” to achieve the above goal: using the operations that this data structure provides, the algorithms for performing the above three operations are quite simple. Give an algorithm for the FindNext(a, k) operation that uses only the operations of this data structure and consists of at most 5 lines of pseudo-code.

You are allowed to make use of other operations available with this data structure, i.e. everything you can make run in the specified time.
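
As a sketch of the intended shape of the answer (assuming, purely for illustration, that the black-box structure exposes rank(a), the position of a in sorted order, and select(i), the i-th smallest element, each in O(log n) time):

// a = a_i means rank(a) = i, so the element k positions later is select(i + k).
def findNext(rank: Int => Int, select: Int => Int)(a: Int, k: Int): Int =
  select(rank(a) + k)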

by anond at March 06, 2015 07:30 AM

/r/netsec

CompsciOverflow

Expected number of comparisons of Randomized Quick Sort [on hold]

Consider the randomized quicksort algorithm, where the pivot is chosen at random using a random number generator. Suppose that we execute this algorithm on the input array [5, 11, 7, 8]. In the following questions, do not justify your answers.

  1. What is the probability that elements 5 and 8 are compared to each other?

     Attempt: 2 / (4 - 1 + 1) = 50%

  2. What is the probability that elements 11 and 8 are not compared to each other?

     Attempt: 1 - (2 / (4 - 1 + 1)) = 50%

  3. What is the expected total number of comparisons (between the elements of the array)? Your answer should be an exact number.

     Attempt: 58/12 comparisons

  4. Now suppose we execute the algorithm on input array [0, 1, 2, 3]. What is the expected total number of comparisons (between the elements of the array)? Your answer should be an exact number.

     My attempt at calculating all possible comparisons: 2/4 * 2/3 * 6 + 2/4 * 1/3 * 5 + 2/4 * 4 = 24/12 + 10/12 + 8/4 = 58/12 comparisons

Am I on the right track? And how are questions 3 and 4 different?
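
For reference, the standard identity these attempts appear to rely on (for distinct keys with ranks $1 \le i < j \le n$) is

$$\Pr[\text{ranks } i \text{ and } j \text{ are compared}] = \frac{2}{j - i + 1}, \qquad E[\text{comparisons}] = \sum_{1 \le i < j \le n} \frac{2}{j - i + 1}.$$

For $n = 4$ this gives $3 \cdot \frac{2}{2} + 2 \cdot \frac{2}{3} + 1 \cdot \frac{2}{4} = 3 + \frac{4}{3} + \frac{1}{2} = \frac{29}{6} = \frac{58}{12}$, and the value depends only on $n$, not on the actual keys.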

by anond at March 06, 2015 07:15 AM

/r/emacs

Things that make you feel like you're stuck in the 80's

I love Emacs, even though sometimes I get a little bit frustrated.

The Help System.

While it may have been exciting to have linked pages with clickable buttons some decades ago, today it feels pretty clunky. Start with describe-mode. It begins with a bulky listing of ~40 minor modes, about half of which you don't really care about (you just expect them to do their thing, like e.g. Auto-Compression). The overall presentation looks like an old Usenet message you found in your archive.

Moving on to describe-m{ajor,inor}-mode: it usually gives a short explanation of what the particular mode is good for (which is why you installed it in the first place) followed by a list of bindings and their command names. This list is sorted in key order, which seems pretty useless. The command names give an idea of what they might do, though frequently just that. Then you start skimming this randomly sorted list of names, something catches your eye and you click on it. Now the help buffer displays the command's documentation, and after finding out that this isn't really the one you were looking for, you go back, and, while the cursor moves back to square one, you're back to skimming the randomly sorted list of names (from the top) (Edit: That's not true).

The Window System.

It is too random, too easy to break your layout and too hard to get back. Sometimes you work in the left window, with some doc in the right one. 10 minutes later it's the opposite. The idea of invoking some command has to be carefully weighed against the window-micro-management work you're up for afterwards. There is no 'Let me just quickly scan this other information and then continue here ...'. Everything moves like an aircraft carrier.

submitted by politza
[link] [18 comments]

March 06, 2015 07:06 AM

/r/compilers

CompsciOverflow

Residence time in multi server system

I'm reading Neil Gunther's Practical Performance Analyst, and he states that when there's one queue and one server, the residence time (total time spent per request) is:

R = S + QS
  • S is service time
  • Q is number of customers ahead
  • R is residence time

Then he describes a case with a single queue and two servers and says that the equation becomes

R = S + 1/2 SpQ
  • p is per-server utilization 0 < p < 1.

I don't understand why p is a factor here. Wouldn't all servers be busy whenever there are customers waiting in line? Why is this different from the single-server case? The explanation given in the book is copied on slide 25 here. I don't understand the reasoning. Why do we need p?

by alexwriteshere at March 06, 2015 06:59 AM

Parallel time is sequential space

I'm studying for my qualifying exam and have a past exam here, which has the following question, verbatim:

Give a proof of the Folklore statement: "sequential space is parallel time." In other words, the space used by a sequential algorithm for a problem $X$ can be equated (within Big Oh) to the time taken by a parallel algorithm for $X$.

However, in the textbook I have, I haven't found a good definition for a sequential algorithm or parallel algorithm, which is necessary if I'm going to prove anything. Also, googling turned up the wikipedia page on parallel computation thesis but it seems to depend on the model of computation used. So I'm not sure how to answer this question.

by Kuhndog at March 06, 2015 06:55 AM

Problem with Understanding a Recursion Tree

Consider the recursion tree:

$T(p) = 3T(\frac{2p}{8}) + 2T(\frac{p}{8}) + O(p)$.

I determined that there are at most $1 + \log_{4} p$ levels, because the longest simple path from root to leaf is $p \rightarrow \frac{2p}{8} \rightarrow \frac{4p}{64} \rightarrow \frac{8p}{512} \rightarrow \ ...$.

This means that the time complexity is $O(p \log p)$.

Now, I stopped the recursion when all my leaves in the above recursion tree reach $T(1)$, but say I instead have the base case $T(0) + O(1)$. Does this change my solution somehow?

by hello.mellower at March 06, 2015 06:53 AM

QuantOverflow

How do I calculate the book value of a company from its balance sheet? [on hold]

I went to a financial modeling seminar, but I didn't understand what was going on because I was so new to the world of finance.

If I'm looking at a balance sheet, how do I get EBITDA? From there how do I value the company?

I'm sorry if I should do more homework before asking this question. I totally would, but I don't know where to look to learn more about this. Feel free to just point me to a site that can tell me this.

by Cole Trumbo at March 06, 2015 06:48 AM

CompsciOverflow

Confusion with space and time usage [duplicate]

This question already has an answer here:

The following is my own set-up code:

Search_and_Sum(key, A,n)    A[1..n]
    int sum = 0;
    for(int i=0; i<=n+n+1000; i++){
        for(int j=A.length; j>=1; j/=2){
            if(key==A[j]) break;
            else          sum+=j;
        }
    }
    return sum;

Well, I think the first loop has time complexity $O(n)$ and the inner loop has time complexity $O(\log n)$, so in the worst case, the running time is $O(n \log n)$.

But, I can't figure out how to find the running time in the best case. Any ideas?

Edit:

Is this an equivalent formulation of the code with the same running time?

Search_and_Sum(key, A,n)    A[1..n]
    int sum = 0;
    int k = A.length;
    while(k>=1 && key!=A[k]){
        sum=*key;
        k/=2;
    }
    return sum;

by hello.mellower at March 06, 2015 06:47 AM

Planet Theory

The nearest neighbor in an antimatroid

Franz Brandenburg, Andreas Gleißner, and Andreas Hofmeier have a 2013 paper that considers the following problem: given a finite partial order P and a permutation π of the same set, find the nearest neighbor to π among the linear extensions of P. Here "nearest" means minimizing the Kendall tau distance (number of inversions) between π and the chosen linear extension. Or, to put it another way: you are given a directed acyclic graph whose vertices are tagged with distinct numbers, and you want to choose a topological ordering of the graph that minimizes the number of pairs that are out of numerical order.
Among other results they showed that this is NP-hard, 2-approximable, and fixed-parameter tractable.

An idea I've been pushing (most explicitly in my recent Order paper) is that, when you have a question involving linear extensions of a partial order, you should try to generalize it to the basic words of an antimatroid. So now, let A be an antimatroid and π be a permutation on its elements. What is the nearest neighbor of π among the basic words of A? Can the fixed-parameter algorithm for partial orders be generalized to this problem?

Answer: Yes, no, and I don't know. Yes, the problem is still fixed-parameter tractable with a nice dependence on the parameter. No, not all FPT algorithms generalize directly. And I don't know, because I don't seem to have subscription access to the journal version of the BGH paper, the preprint version doesn't include the FPT algorithm, and I don't remember clearly enough what Franz told me about this a month or so ago, so I can't tell which one they're using.

But anyway, here's an easy FPT algorithm for the partial order version of the problem (that might or might not be the BGH algorithm). For any element x, we can define a set L of the elements coming before x in the given permutation π, and another set R of the elements coming after x in the permutation; L, x, and R form a three-way partition of the elements. We say that x is "safe" if there exists a linear extension of P that gives the same partition for x. Otherwise, we call x "unsafe". Then in the linear extension nearest to π, every safe element has the same position that it has in π. For, if we had a linear extension σ for which this wasn't true, then the sequence (σ ∩ L),x,(σ ∩ R) would also be a linear extension and would have fewer inversions. On the other hand, every unsafe element participates in at least one inversion, so if the optimal solution value is k then there can be at most 2k unsafe elements. Therefore, we can restrict both π and P to the subset of unsafe elements, solve the problem on the resulting linear-sized kernel, and then put back the safe elements in their places, giving an FPT algorithm.

You can define safe elements in the same way for antimatroids but unfortunately they don't necessarily go where they should. As an extreme example, consider the antimatroid on the symbols abcdefghijklmnopqrstuvwxyz* whose basic words are strings of distinct symbols that are alphabetical up to the star and then arbitrary after it, and the permutation π = zyxwvutsrqponmlkjihgfedcba* that wants the symbols in backwards order but keeps the star at the end. The star is safe, but if we put it in its safe place then the only possible basic word is abcdefghijklmnopqrstuvwxyz* with 325 inversions. Instead, putting it first gives us the basic word *zyxwvutsrqponmlkjihgfedcba with only 26 inversions. So the same kernelization doesn't work. It does work to restrict π and P to the elements whose positions in π are within k steps of an unsafe element, but that gives a bigger kernel (quadratic rather than linear).

Instead, let's try choosing the elements of the basic word one at a time. At each step, if the element we choose comes later in π than i other elements that we haven't chosen yet, it will necessarily cause i inversions with those other elements, and the total number of inversions of the word we're finding is just the sum of these numbers i. So when the number of inversions is small, then in most steps we should choose i = 0, and in all steps we should choose small values of i. In fact, whenever it's possible to choose i = 0, it's always necessary to do so, because any basic word consistent with the choices we've already made that doesn't make this choice could be made better by moving the i = 0 element up to the next position.

So this leads to the following algorithm for finding a basic word with distance k: at each step where we can choose i = 0, do so. And at each step where the antimatroid doesn't allow the i = 0 choice, instead recursively try all possible choices of i from 1 to k that are allowed by the antimatroid, but then subtract the value of i we chose from k because it counts against the number of inversions we have left to find.
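
A compact sketch of that branching search (in Scala; all names are mine, with the antimatroid abstracted as an oracle allowed from the set already chosen to the elements that may come next, and pos giving each element's position in π):

def withinK(all: Set[Int], pos: Map[Int, Int], allowed: Set[Int] => Set[Int], k: Int): Boolean = {
  // Inversions that picking x now creates: remaining elements that precede x in pi.
  def inv(x: Int, remaining: Set[Int]): Int =
    remaining.count(y => y != x && pos(y) < pos(x))
  def go(chosen: Set[Int], remaining: Set[Int], budget: Int): Boolean =
    if (remaining.isEmpty) true
    else {
      val cand = allowed(chosen) intersect remaining
      cand.find(x => inv(x, remaining) == 0) match {
        case Some(x) => go(chosen + x, remaining - x, budget)         // forced i = 0 step
        case None =>
          cand.exists { x =>                                          // branch on 1 <= i <= budget
            val i = inv(x, remaining)
            i <= budget && go(chosen + x, remaining - x, budget - i)
          }
      }
    }
  go(Set.empty, all, k)
}

The forced i = 0 step is well defined because at most one remaining element can precede all of the others in π.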

Each leaf of the recursion takes linear time for all its i = 0 choices, so the main factor in the analysis is how many recursive branches there are. This number is one for k = 0 (because we can never branch), and it's also one for k = 1 (because at a branch point we can only choose i = 1 after which we are in the k = 0 case). For each larger value of k, the first time we branch we will be given a choice of all possible smaller values of k, and the total number of branches in the recursion will be the sum of the numbers of branches for these smaller values. That is, if R(k) denotes the number of recursive branches for parameter k, it obeys the recursion R(0) = R(1) = 1, R(k) = Σ_{i < k} R(i), which solves to R(k) = 2^(k−1). So this algorithm is still fixed-parameter tractable, with only single-exponential dependence on k.
If we don't know k ahead of time, we can run the whole algorithm for k = 1,2,3,... and the time bound will stay the same.

Given the existence of this simple O(2^k n I) algorithm (where I is the time for testing whether the antimatroid allows an element to be added in the current position), does it make sense to worry about a kernelization, which after all doesn't completely solve the problem, but only reduces it to a smaller one? Yes. The reason is that if you kernelize (using the O(k^2)-size kernel that restricts to elements that are within k steps of an unsafe element) before recursing, you separate out the exponential and linear parts, and get something more like O(nI + 2^k k^2 I). But the difference between quadratic and linear kernels is swamped by the exponential part of the time bound, so rather than looking for smaller kernels it would be better to look for a more clever recursion with less branching.

The same authors also have another paper on Spearman footrule distance (how far each element is out of its correct position, summed over all the elements) but the kernelization in this paper looks a little trickier and I haven't thought carefully about whether the same approach might work for the antimatroid version of that problem as well.

March 06, 2015 06:46 AM

StackOverflow

Is it possible to turn off qualification of symbols when using clojure syntax quote in a macro?

I am generating emacs elisp code from a clojure function. I originally started off using a defmacro, but I realized since I'm going cross-platform and have to manually eval the code into the elisp environment anyway, I can just as easily use a standard clojure function. But basically what I'm doing is very macro-ish.

I am doing this because my goal is to create a DSL from which I will generate code in elisp, clojure/java, clojurescript/javascript, and maybe even haskell.

My "macro" looks like the following:

(defn vt-fun-3 []
  (let [hlq "vt"]
    (let [
         f0 'list
         f1 '(quote (defun vt-inc (n) (+ n 1)))
         f2 '(quote (ert-deftest vt-inc-test () (should (= (vt-inc 7) 8))))]
      `(~f0 ~f1 ~f2)
      )))

This generates a list of two function definitions -- the generated elisp defun and a unit test:

(list (quote (defun vt-inc (n) (+ n 1))) (quote (ert-deftest vt-inc-test () (should (= (vt-inc 7) 8)))))

Then from an emacs scratch buffer, I utilize clomacs https://github.com/clojure-emacs/clomacs to import into the elisp environment:

(clomacs-defun vt-fun-3 casc-gen.core/vt-fun-3)
(progn
    (eval (nth 0  (eval  (read (vt-fun-3)))))
    (eval (nth 1  (eval  (read (vt-fun-3))))))

From here I can then run the function and the unit test:

(vt-inc 4)
--> 5
(ert "vt-inc-test")
--> t

Note: like all macros, the syntax quoting and escaping is very fragile. It took me a while to figure out the proper way to get it to eval properly in elisp (the whole "(quote (list..)" prefix thing).

Anyway, as suggested by the presence of the "hlq" (high-level qualifier) on the first "let", I want to prefix any generated symbols with this hlq instead of hard-coding it.

Unfortunately, when I use standard quotes and escapes on the "f1" for instance:

 f1 '(quote (defun ~hlq -inc (n) (+ n 1)))

This generates:

    (list (quote (defun (clojure.core/unquote hlq) -inc (n) (+ n 1))) 
(quote (ert-deftest vt-inc-test () (should (= (vt-inc 7) 8)))))

In other words it substitutes 'clojure.core/unquote' for "~" which is not what I want.

The clojure syntax back-quote:

f1 `(quote (defun ~hlq -inc (n) (+ n 1)))

doesn't have this problem:

(list (quote (casc-gen.core/defun vt casc-gen.core/-inc (casc-gen.core/n) (clojure.core/+ casc-gen.core/n 1))) (quote (ert-deftest vt-inc-test () (should (= (vt-inc 7) 8)))))

It properly escapes and inserts "vt" as I want (I still have to work out how to concat it to the stem of the name, but I'm not worried about that).

Problem solved, right? Unfortunately syntax quote fully qualifies all the symbols, which I don't want since the code will be running under elisp.

Is there a way to turn off the qualifying of symbols when using the syntax quote (back tick)?

It also seems to me that the syntax quote is more "capable" than the standard quote. Is this true? Or can you, by trickery, always make the standard quote behave the same as the syntax quote? If you cannot turn off qualification with syntax quote, how could I get this working with the standard quote? Would I gain anything by trying to do this as a defmacro instead?

The worst case scenario is I have to run a regex on the generated elisp and manually remove any qualifications.

by vt5491 at March 06, 2015 06:33 AM

/r/clojure

fast web dev with clojure

(ns web-test.core
  (:gen-class)
  (:use [road.core])
  (:require [road.core :as road]
            [ring.middleware.params :as params]
            [ring.util.response :as resp]
            [clojure.java.io :as io]
            [clojure.tools.logging :as log]
            [ring.adapter.jetty :as jetty]))

(defn render-test [ret tmt]
  (-> (resp/response "------render----test------")
      (#(resp/content-type %1 "text/plain"))))

(defn foo
  "I don't do a whole lot."
  [x]
  (str "parameter from the source directory: " x))

(defn handler [Integer x]
  {:$r render-test
   :text (str "hello world, road goes sucess!" (foo x))})

(defn home [req content Integer num]
  {:hiccup "home.clj"
   :content (str "home" content)
   :num num})

(defroad road
  (GET "/web-test-0.1.0-SNAPSHOT-standalone/main" handler)
  (GET "/web-test-0.1.0-SNAPSHOT-standalone/home/:num{\d+}" home))

(defn -main [& args]
  (log/info "---------log4j test-------")
  (jetty/run-jetty road {:port 3000}))

https://github.com/zhujinxian/road

submitted by ainixian2004
[link] [comment]

March 06, 2015 06:03 AM

QuantOverflow

How do I estimate parameters of geometric Brownian motion with a time-varying mean?

Does anyone know how to estimate $A$, $\sigma_1$,$\sigma_2$ from the following system?

$$dx = \mu_t x dt + \sigma_1 x dB_x$$

$$d\mu = A(\bar\mu - \mu) dt + \sigma_2 dB_\mu$$

Variation in $x$ could be either attributed to variation in $\mu$, or variation in $dB_x$, right?

Suppose I know $\bar \mu$, but need to estimate all the rest of the parameters.

by Elle at March 06, 2015 06:03 AM

StackOverflow

How do you make a web application in Clojure?

I suppose this is a strange question to the huge majority of programmers that work daily with Java. I don't. I know Java-the-language, because I worked on Java projects, but not Java-the-world. I never made a web app from scratch in Java. If I have to do it with Python, Ruby, I know where to go (Django or Rails), but if I want to make a web application in Clojure, not because I'm forced to live in a Java world, but because I like the language and I want to give it a try, what libraries and frameworks should I use?

by Pablo at March 06, 2015 05:53 AM

TheoryOverflow

On possible existence of OWFs

Assuming $P\neq NP$, is there a computational model in which OWFs cannot exist? What more should we include beyond non-determinism, randomness, and the quantum model to preclude the existence of OWFs, given the current state of knowledge?

posted http://cs.stackexchange.com/questions/40055/on-possible-existence-of-owfs

by Turbo at March 06, 2015 05:43 AM

StackOverflow

Clojure web application - where do I start?

So lately I've been looking into Clojure, and I love the language. I would like to see if I can make a small web application in it, just to challenge myself. However, I have absolutely no experience setting up any Java-related web applications. In fact, I don't really have much experience with Java at all. Where do I start? I have lots of experience with Apache and the LAMP stack, and I know on Apache I would just use Fast-CGI in most cases, but I don't know the equivalent in the Java world (if there is one).

Basically, I just need help with setting up the server and getting it started. I understand (somewhat) how to deploy a pure Java application, but what about a pure Clojure application? How does that work? I guess, coming from a world where all web applications are written in scripting languages, this is all new to me.

Oh, and by the way, I don't want to use a Clojure framework such as Compojure. That would defeat the learning part of this.

Thanks in advance.

by Sasha Chedygov at March 06, 2015 05:43 AM

/r/compsci

StackOverflow

Comparison of Clojure web frameworks

There are a few web frameworks for Clojure

and also some libraries for dealing with certain web development subtasks, such as

  • Enlive for templating
  • Hiccup for templating
  • Ring to handle lower level stuff with requests/responses
  • ClojureQL for persistence (it doesn't seem very active, though)

There are also hundreds of Java libraries to be used. Some aspects were already discussed here and two of them compared a bit.

I wonder how these frameworks/components compare in terms of maturity, scope, ease of development, Django/RoR feeling, etc.

by Adam Schmideg at March 06, 2015 05:36 AM

Wes Felter

GigaOM: AT&T’s privacy plan may be short-lived and may not even be as bad as we think

GigaOM: AT&T’s privacy plan may be short-lived and may not even be as bad as we think:

Offsetting the cost of Internet access is one thing, but I doubt that’s what AT&T is doing. They can’t be getting $29/month from your browsing habits, so it amounts to an artificial penalty reminiscent of non-prorated ETFs.

March 06, 2015 05:29 AM

TheoryOverflow

Lower bounds for inversion counting in comparison model?

For counting the number of inversions in an array, there are many $O(n \log n)$ algorithms, e.g. the one that modifies Merge Sort. There is an easy $\Omega(n)$ lower bound simply because you have to look at all the elements.

I saw some faster algorithms in the RAM model, such as this $O(n \sqrt{\log n})$ algorithm for a permutation on $n$ elements: http://people.csail.mit.edu/mip/papers/invs/paper.pdf.

Is anything else known in the comparison model for inversion counting? I'm mainly curious if there are better lower bounds.

by Ben Cousins at March 06, 2015 05:15 AM

Planet Theory

Elections 2015 – Let's Choose Spring

[image: "change"]

As it looks, my blog has been drafted, by a kind of Order 8 call-up, into dealing with the 2015 elections and calling for political change in Israel.

Spring is almost here

This is David Grossman's wonderful song "The Spring Here Is So Short", set to music by Yehuda Poliker, who also performs it. The song seems to me fitting as an anthem for the call for political change in the 2015 elections in the State of Israel: a call for an Israeli spring, a democratic spring, a Jewish and Zionist spring.

And here are also some excellent videos from the Victory 2015 movement, which is working for change in these elections.

 


by Gil Kalai at March 06, 2015 05:14 AM

Wondermark

XKCD

CompsciOverflow

Gale–Shapley algorithm is man-optimal

I am trying to understand the proof of why the Gale–Shapley algorithm is man-optimal; however, I am unable to do so. Could you please expand the proof, since the proofs on this page https://sites.google.com/site/stablemarriageproblem/intro and here http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15251-f10/Site/Materials/Lectures/Lecture21/lecture21.pdf rely crucially on the fact that the disagreement happens for the first time when a man M approaches a woman W, and we are trying to prove that in no stable matching can M be paired up with W.

If possible could you also give alternative proofs.

by Varun at March 06, 2015 04:52 AM

/r/compsci

Ideas for a semester long research project as an undergraduate involving information security

Hi all. I'm just looking for some ideas for a research project next semester. This will be my first time doing one, and I'm not exactly sure what information security topic I'd like to dive into. I'd like to find something that isn't way over my head or ridiculously challenging either (a challenge is good though). So if you have any suggestions, let me know. I was thinking something possibly involving cloud storage services, building better captchas (I'm also interested in computer vision), viruses etc.

submitted by cdc143
[link] [comment]

March 06, 2015 04:48 AM

StackOverflow

How to create a synchronized object method in scala

Does scala support synchronized object (/static) methods? I am looking for:

synchronized def myObjectMethod(): <SomeReturnType> = {
.. 
 }

If this is not supported, what is the equivalent in Scala?
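
For reference, a minimal sketch of the usual workaround (the object and method names are mine): Scala has no synchronized modifier, but every AnyRef has a synchronized block, so an object method can simply lock on the object itself.

object Counter {
  private var n = 0
  // Equivalent in spirit to a Java static synchronized method: locks on the Counter object.
  def increment(): Int = this.synchronized {
    n += 1
    n
  }
}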

by javadba at March 06, 2015 04:25 AM

CompsciOverflow

Numpy.save Permission Denied in IPython, Python [on hold]

Forgive me, I am no expert programmer. I am working through the book Python for Data Analysis and I get an issue saving numpy files. The code itself is very simple:

import numpy as np

arr = np.arange(10)

np.save('some_array', arr)

Which results in this error

C...\Enthought\Canopy\User\lib\site-packages\numpy\lib\npyio.pyc in save(file, arr)
    444         if not file.endswith('.npy'):
    445             file = file + '.npy'
    446         fid = open(file, "wb")
    447         own_fid = True
    448     else:

IOError: [Errno 13] Permission denied: 'some_arrray.npy'

by Zander at March 06, 2015 04:04 AM

QuantOverflow

Where can one find realistic historical transaction costs?

I am interested in strategy simulation at different frequencies (high frequency and daily frequency) and I want to compute the optimal frequency of execution.

To do this, I need to obtain realistic historical transaction costs from 1987 onward.

Does anyone know where one might obtain such a database?

by mlachans at March 06, 2015 03:36 AM

/r/clojure

StackOverflow

Spark: Writing to Avro file

I am using Spark, and I have an RDD loaded from an Avro file. I now want to do some transformations on that RDD and save it back as an Avro file:

val job = new Job(new Configuration())
AvroJob.setOutputKeySchema(job, getOutputSchema(inputSchema))

rdd.map(elem => (new SparkAvroKey(doTransformation(elem._1)), elem._2))
   .saveAsNewAPIHadoopFile(outputPath, 
  classOf[AvroKey[GenericRecord]], 
  classOf[org.apache.hadoop.io.NullWritable], 
  classOf[AvroKeyOutputFormat[GenericRecord]], 
  job.getConfiguration)

When running this, Spark complains that Schema$recordSchema is not serializable.

If I comment out the .map call (and just have rdd.saveAsNewAPIHadoopFile), the call succeeds.

What am I doing wrong here?

Any idea?

by user1013725 at March 06, 2015 03:21 AM

Define a Typeclass for Shapeless Records

I'm trying to learn Shapeless, and I would like to define a monoid which adds together instances of shapeless records. Note that I'm using algebird monoids (not scalaz), but I'm sure they're quite similar. Here's an example of what I'd like to be able to do:

val result = Monoid.sum(
  ('a ->> 1) :: ('b ->> 1) :: HNil,
  ('a ->> 4) :: ('b ->> 3) :: HNil,
  ('a ->> 2) :: ('b ->> 6) :: HNil)
// result should be: ('a ->> 7) :: ('b ->> 10) :: HNil

I figured out how to write monoid instances for HList, as follows:

  implicit val HNilGroup: Group[HNil] = new ConstantGroup[HNil](HNil)
  implicit val HNilMonoid: Monoid[HNil] = HNilGroup
  class HListMonoid[H, T <: HList](implicit hmon: Monoid[H], tmon: Monoid[T]) extends Monoid[::[H, T]] {
    def zero = hmon.zero :: tmon.zero
    def plus(a: ::[H, T], b: ::[H, T]) = 
      hmon.plus(a.head, b.head) :: tmon.plus(a.tail, b.tail)
  }
  implicit def hListMonoid[H, T <: HList](implicit hmon: Monoid[H], tmon: Monoid[T]) = new HListMonoid[H, T]

This allows me to write:

val result = Monoid.sum(
  1 :: 1 :: HNil,
  4 :: 3 :: HNil,
  2 :: 6 :: HNil)
// result is 7 :: 10 :: HNil

Now that I can sum HList instances, the missing piece seems to be defining monoid instances which can sum fields of form ('name ->> 1), which my IDE tells me has the following type: Int with record.KeyTag[Symbol with tag.Tagged[Constant(name).type] { .. }, Int] { .. }. At this point I'm stuck, as I just don't know how to go about doing this.
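
One direction that may work (only a sketch; it assumes shapeless 2.x's labelled.field/FieldType and the algebird Monoid used above, and fieldMonoid is my own name): the missing piece is a monoid instance for the tagged field type, which delegates to the value's monoid and re-tags the result.

import shapeless.labelled.{FieldType, field}
import com.twitter.algebird.Monoid

implicit def fieldMonoid[K, V](implicit vm: Monoid[V]): Monoid[FieldType[K, V]] =
  new Monoid[FieldType[K, V]] {
    def zero = field[K](vm.zero)
    def plus(a: FieldType[K, V], b: FieldType[K, V]) = field[K](vm.plus(a, b))
  }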

by JimN at March 06, 2015 03:08 AM

Fefe

Great idea of the day: China is building an experimental ...

Great idea of the day: China is building an experimental nuclear power plant.

The good news: it's a boiling water reactor, not a fast breeder or anything like that. So, in principle, a rather well-proven design.

The bad news: it's in Pakistan, among the terrorists with uranium-enrichment ambitions.

The worse news: on a tsunami beach.

The even worse news: in an earthquake zone.

The still worse news: half of the 20-million-person city of Karachi lies within the 20-mile zone.

Hey, what could possibly go wrong?

March 06, 2015 03:01 AM

Gender equality has reached the intelligence services. ...

Gender equality has reached the intelligence services. No, really!
Britain's security agencies should look to recruit more middle-aged women and mothers to be new spies and should target websites popular with parents to find them, [the Intelligence and Security Committee] said on Thursday.
Because if anyone knows how to deal with terrorism, it's surely mothers of small children!

March 06, 2015 03:01 AM

Lobsters

Add ALL the Things: Abstract Algebra Meets Analytics

Avi Bryant discusses how the laws of group theory provide a useful codification of the practical lessons of building efficient distributed and real-time aggregation systems.

Comments

by SeanTAllen at March 06, 2015 02:35 AM

Planet Theory

News on Intermediate Problems


The Minimum Circuit Size Problem goes front and center


Eric Allender, Bireswar Das, Cody Murray, and Ryan Williams have proved new results about problems in the range between {\mathsf{P}} and {\mathsf{NP}}-complete. According to the wide majority view of complexity the range is vast, but it is populated by scant few natural computational problems. Only Factoring, Discrete Logarithm, Graph Isomorphism (GI), and the Minimum Circuit Size Problem (MCSP) regularly get prominent mention. There are related problems like group isomorphism and others in subjects such as lattice-based cryptosystems. We covered many of them some years back.

Today we are delighted to report recent progress on these problems.

MCSP is the problem: given a string {x} of length {n = 2^k} and a number {s}, is there a Boolean circuit {C} with {s} or fewer wires such that

\displaystyle  C(0^k)\cdot C(0^{k-1} 1)\cdot C(0^{k-2}10) \cdots C(1^{k-1} 0)\cdot C(1^k) = x?

For {x} of other lengths {m}, {2^{k-1} < m < 2^k}, we catenate the values of {C} for the first {m} strings in {\{0,1\}^k} under the standard order. Since every {k}-ary Boolean function has circuits of size {O(\frac{2^k}{k}) = O(\frac{n}{\log n})} which are encodable in {O(n)} bits, MCSP belongs to {\mathsf{NP}} with linear witness size.

Several Soviet mathematicians studied MCSP in the late 1950s and 1960s. Leonid Levin is said to have desired to prove it {\mathsf{NP}}-complete before publishing his work on {\mathsf{NP}}-completeness. MCSP seemed to stand aloof until Valentine Kabanets and Jin-Yi Cai connected it to Factoring and Discrete Log via the “Natural Proofs” theory of Alexander Razborov and Steven Rudich. Eric and Harry Buhrman and Michal Koucký and Dieter van Melkebeek and Detlef Ronneburger improved their results in a 2006 paper to read:

Theorem 1 Discrete Log is in {\mathsf{BPP}^{\mathrm{MCSP}}} and Factoring is in {\mathsf{ZPP}^{\mathrm{MCSP}}}.

Now Eric and Bireswar have completed the triad of relations to the other intermediate problems:

Theorem 2 Graph Isomorphism is in {\mathsf{RP}^{\mathrm{MCSP}}}. Moreover, every promise problem in {\mathsf{SZK}} belongs to {\mathsf{BPP}^{\mathrm{MCSP}}} as defined for promise problems.

Cody and Ryan show on the other hand that proving {\mathsf{NP}}-hardness of MCSP under various reductions would entail proving breakthrough lower bounds:

Theorem 3

  • If {\mathrm{SAT} \leq_m^p \mathrm{MCSP}} then {\mathsf{EXP} \not\subseteq \mathsf{NP} \cap \mathsf{P/poly}}, so {\mathsf{EXP \neq ZPP}}.

  • If {\mathrm{SAT} \leq_m^{\log} \mathrm{MCSP}} then {\mathsf{PSPACE \neq ZPP}}.

  • If {\mathrm{SAT} \leq_m^{ac_0} \mathrm{MCSP}} then {\mathsf{NP} \not\subset \mathsf{P/poly}} (so {\mathsf{NP \neq P}}), and also {\mathsf{E}} has circuit lower bounds high enough to de-randomize {\mathsf{BPP}}.

  • In any many-one reduction {f} from {\mathrm{Parity}} (let alone {\mathrm{SAT}}) to {\mathrm{MCSP}}, no random-access machine can compute any desired bit {j} of {f(x)} in {|x|^{1/2-\epsilon}} time.

The last result is significant because it is unconditional, and because most familiar {\mathsf{NP}}-completeness reductions {f} are local in the sense that one can compute any desired bit {j} of {f(x)} in only {(\log |x|)^{O(1)}} time (with random access to {x}).

Why MCSP is Hard to Harden

The genius of MCSP is that it connects two levels of scaling—input lengths {k} and {n}—in the briefest way. The circuits {C} can have exponential size from the standpoint of {k}. This interplay of scaling is basic to the theory of pseudorandom generators, in terms of conditions under which they can stretch a seed of {\mathsf{poly}(k)} bits into {n} bits, and to generators of pseudorandom functions {g: \{0,1\}^k \longrightarrow \{0,1\}^k}.

An issue articulated especially by Cody and Ryan is that reductions {f} to MCSP carry seeds of being self-defeating. The ones we know best how to design involve “gadgets” whose size scales as {k} not {n}. For instance, in a reduction from {\mathrm{3SAT}} we tend to design gadgets for individual clauses in the given 3CNF formula {\phi}—each of which has constant-many variables and {O(\log n) = O(k)} encoded size. But if {f} involves only {\mathsf{poly}(k)}-sized gadgets and the connections between gadgets need only {\mathsf{poly}(k)} lookup, then when the reduction outputs {f(\phi) = (y,s)}, the string {y} will be the graph of a {\mathsf{poly}(k)}-sized circuit. This means that:

  • if {s > \mathsf{poly}(k)} then the answer is trivially “yes”;

  • if {s \leq \mathsf{poly}(k)} then the answer can be found in {\mathsf{poly}(n)} time—or at worst quasipolynomial in {n} time—by exhaustively trying all circuits of size {s}.

The two horns of this dilemma leave little room to make a non-trivial reduction to MCSP. Log-space and {\mathsf{AC^0}} reductions are (to different degrees) unable to avoid the problem. The kind of reduction that could avoid it might involve, say, {n^{1/2}}-many clauses per gadget in an indivisible manner. But doing this would seem to require obtaining substantial non-local knowledge about {\phi} in the first place.

Stronger still, if the reduction is from a polynomially sparse language {A \in \mathsf{NP}} in place of {\mathrm{SAT}}, then even this last option becomes unavailable. Certain relations among exponential-time classes imply the existence of hard sparse sets in {\mathsf{NP}}. The hypothesis that MCSP is hard for these sets impacts these relations, for instance yielding the {\mathsf{EXP \neq ZPP}} conclusion.

A paradox that at first sight seems stranger emerges when the circuits {C} are allowed oracle gates. Such gates may have any arity {m} and output 1 if and only if the string {u_1 u_2\cdots u_m} formed by the inputs belongs to the associated oracle set {A}. For any {A} we can define {\mathrm{MCSP}^A} to be the minimum size problem for such circuits relative to {A}. It might seem axiomatic that when {A} is a powerful oracle such as {\mathrm{QBF}} then {\mathrm{MCSP}^{\mathrm{QBF}}} should likewise be {\mathsf{PSPACE}}-complete. However, giving {C} such an oracle makes it easier to have small circuits for meaningful problems. This compresses the above dilemma even more. In a companion paper by Eric with Kabanets and Dhiraj Holden they show that {\mathrm{MCSP}^{\mathrm{QBF}}} is not complete under logspace reductions, nor even hard for {\mathsf{TC}^0} under uniform {\mathsf{AC}^0} reductions. More strikingly, they show that if it is hard for {\mathsf{P}} under logspace reductions, then {\mathsf{EXP = PSPACE}}.

Nevertheless, when it comes to various flavors of bounded-error randomized Turing reductions, MCSP packs enough hardness to solve Factoring and Discrete Log and GI. We say some more about how this works.

Randomized Reductions to MCSP

What MCSP does well is efficiently distinguish strings {x \in \{0,1\}^n} having {n^{\alpha}}-sized circuits from the vast majority having no {n^{\beta}}-sized circuits, where {0 < \alpha < \beta < 1}. The dense latter set {B} is a good distinguisher between pseudorandom and uniform distributions on {x \in \{0,1\}^n}. Since one-way functions suffice to construct pseudorandom generators, MCSP turns into an oracle for inverting functions to an extent codified in Eric’s 2006 joint paper:

Theorem 4 Let {B} be a dense language of strings having no {n^{\beta}}-sized circuits, and let {f(x,y) = z} be computable in polynomial time with {x,y,z} of polynomially-related lengths. Then we can find a polynomial-time probabilistic oracle TM {M} and {c > 0} such that for all {n} and {y},

\displaystyle  \Pr_{x,r}[M^B(y,f(x,y),r) = w \text{ such that } f(w,y) = f(x,y)] \geq \frac{1}{n^c}.

Here {x} is selected uniformly from {\{0,1\}^n} and {r} is uniform over the random bits of the machine. We have restricted {f} and {B} more than their result requires for ease of discussion.

To attack GI we set things up so that “{x}” and “{y}” represent a graph {G} and a permutation {\pi} of its vertices, respectively. More precisely “{G}” means a particular adjacency matrix, and we define {f(\pi,G) = G'} to mean the adjacency matrix {G'} obtained by permuting {G} according to {\pi}. By Theorem 4, using the MCSP oracle to supply {B}, one obtains {M} and {c} such that for all {n} and {n}-vertex graphs {G},

\displaystyle  \Pr_{\pi,r}[M^{\mathrm{MCSP}}(G,f(\pi,G),r) = \rho \text{ such that } f(\rho,G) = f(\pi,G)] \geq \frac{1}{n^c}.

Since {f} is 1-to-1 we can simplify this while also tying “{G'}” symbolically to {f(\pi,G)}:

\displaystyle  \Pr_{\pi,r}[M^{\mathrm{MCSP}}(G,G',r) = \pi] \geq \frac{1}{n^c}. \ \ \ \ \ (1)

Now given an instance {(G,H)} of GI via adjacency matrices, do the following for some constant times {n^c} independent trials:

  1. Pick {\pi} and {r} uniformly at random and put {G' = f(\pi,G)}.

  2. Run {M^{\mathrm{MCSP}}(H,G',r)} to obtain a permutation {\rho}.

  3. Accept if {\rho(H) = G'}, which means {H = \rho^{-1}\pi(G)}.

This algorithm has one-sided error since it will never accept if {G} and {H} are not isomorphic. If they are isomorphic, then {G'} arises as {\rho(H)} with the same distribution over permutations that it arises as {G' = \pi(G)}, so Equation (1) applies equally well with {H} in place of {G}. Hence {M^{\mathrm{MCSP}}(H,G',r)} finds the correct {\rho} with probability at least {\frac{1}{n^c}} on each trial, yielding the theorem {\mathrm{GI} \in \mathsf{RP}^{\mathrm{MCSP}}}.

The proof for {\mathsf{SZK \subseteq BPP}^{\mathrm{MCSP}}} is more detailed but similar in using the above idea. There are many further results in the paper by Cody and Ryan and in the oracle-circuit paper.

Open Problems

These papers also leave a lot of open problems. Perhaps more importantly, they attest that these open problems are attackable. Can any kind of many-one reducibility stricter than {\leq_m^p} reduce every language in {\mathsf{P}} to MCSP? Can we simply get {\mathsf{EXP} \not\subset \mathsf{P/poly}} from the assumption {\mathrm{SAT} \leq_m^p \mathrm{MCSP}}? The most interesting holistic aspect is that we know new lower bounds follow if MCSP is easy, and now we know that new lower bounds follow if MCSP is hard. If we assume that MCSP stays intermediate, can we prove lower bounds that combine with the others to yield some non-trivial unconditional result?


by KWRegan at March 06, 2015 02:13 AM

/r/scala

StackOverflow

Randomness in a nested pure function

I want to provide a function that replaces each occurrence of # in a string with a different random number. In a non-pure language, it's trivial. However, how should it be designed in a pure language? I don't want to use unsafePerformIO, as it rather looks like a hack and not a proper design.

Should this function require a random generator as one of its parameters? And if so, would that generator have to be passed through the whole stack of invocations? Are there other possible approaches? Should I use the State monad, here? I would appreciate a toy example demonstrating a viable approach...
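
For what it's worth, here is a minimal sketch of the explicit-threading approach (written in Scala, but the shape is the same in any pure language; all names are illustrative): the function takes a seed, returns the new seed alongside its result, and the caller threads that pair through. A State monad would simply package up this plumbing.

import scala.util.Random

object FillRandom {
  // One step of a tiny "random state": given a seed, produce a digit and the next seed.
  def nextDigit(seed: Long): (Int, Long) = {
    val r = new Random(seed)
    (r.nextInt(10), r.nextLong())
  }

  // Replace every '#' with a fresh random digit, threading the seed through the fold.
  def fillHashes(s: String, seed: Long): (String, Long) = {
    val (sb, finalSeed) = s.foldLeft((new StringBuilder, seed)) {
      case ((acc, sd), '#') =>
        val (d, sd2) = nextDigit(sd)
        (acc.append(d.toString), sd2)
      case ((acc, sd), c) =>
        (acc.append(c), sd)
    }
    (sb.toString, finalSeed)
  }
}

// fillHashes("a#b#", 42L) might return ("a3b7", <next seed>); passing the returned seed
// to the next call keeps the whole pipeline pure and reproducible.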

by piotrek at March 06, 2015 02:08 AM

DataTau

/r/compsci

Solutions to "Mathematics for Computer Science" problems

I've been trying to improve my CS related math knowledge and found that the Mathematics for Computer Science text from MIT is available to anyone: http://courses.csail.mit.edu/6.042/spring14/mcs.pdf

It can be a bit hard to work through the text without solutions to some of the problems. And for other problems it would just be nice to confirm that my answer is correct.

I imagine that because the text is used for a current class that solutions to the problems are only available for the class. So it might be that there are no public solutions to the problems.

If that's the case, I'm wondering if there are any other problem sets available online with similar types of questions?

submitted by TheCriticalSkeptic
[link] [comment]

March 06, 2015 01:56 AM

StackOverflow

Bucket Join in Scalding

I need to run some joins on very large datasets.

The 2 datasets that I'm joining are bucketed and sorted on the same columns; they could look like this:

Dataset1 file1:

<Alice, someInfo>,
<Ben, somInfo>,
<Brad, someInfo>,
...

Dataset2 file1:

<Alice, someOtherInfo>,
<Allen, someOtherInfo>,
<Ben, someOtherInfo>,
....

So ideally, joining those 2 files is just about iterating over them once at the same time. Some people call that kind of join a bucket join, and it can be done as a map-side join. Implementing it is not much of a hassle, but I was wondering if there is any way to perform this kind of operation with Scalding.
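
For concreteness, here is a minimal sketch of that single pass in plain Scala; it is not Scalding-specific and assumes the join key is unique on each side.

def sortedMergeJoin[K: Ordering, A, B](left:  Iterator[(K, A)],
                                       right: Iterator[(K, B)]): Iterator[(K, (A, B))] = {
  val ord = implicitly[Ordering[K]]
  val l = left.buffered
  val r = right.buffered
  new Iterator[(K, (A, B))] {
    // Skip ahead on whichever side has the smaller key until the heads match.
    private def align(): Unit =
      while (l.hasNext && r.hasNext && ord.compare(l.head._1, r.head._1) != 0) {
        if (ord.lt(l.head._1, r.head._1)) l.next() else r.next()
      }
    def hasNext: Boolean = { align(); l.hasNext && r.hasNext }
    def next(): (K, (A, B)) = {
      align()
      val (k, a) = l.next()
      val (_, b) = r.next()
      (k, (a, b))
    }
  }
}

// sortedMergeJoin(Iterator("Alice" -> 1, "Ben" -> 2, "Brad" -> 3),
//                 Iterator("Alice" -> "x", "Allen" -> "y", "Ben" -> "z")).toList
// == List(("Alice", (1, "x")), ("Ben", (2, "z")))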

I was looking around the TemplatedTSV but I couldn't find anything.

Thanks

by jeremie.a.simon at March 06, 2015 01:48 AM

Cleaner way to update nested structures

Say I have got following two case classes:

case class Address(street: String, city: String, state: String, zipCode: Int)
case class Person(firstName: String, lastName: String, address: Address)

and the following instance of Person class:

val raj = Person("Raj", "Shekhar", Address("M Gandhi Marg", 
                                           "Mumbai", 
                                           "Maharashtra", 
                                           411342))

Now if I want to update zipCode of raj then I will have to do:

val updatedRaj = raj.copy(address = raj.address.copy(zipCode = raj.address.zipCode + 1))

With more levels of nesting this gets even uglier. Is there a cleaner way (something like Clojure's update-in) to update such nested structures?
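
For what it's worth, one cleaner option is a lens; libraries such as Monocle or Shapeless provide them, but even a hand-rolled sketch (all names below are illustrative) removes the nested copy noise:

case class Lens[S, A](get: S => A, set: (S, A) => S) {
  def modify(f: A => A)(s: S): S = set(s, f(get(s)))
  def andThen[B](that: Lens[A, B]): Lens[S, B] =
    Lens(s => that.get(get(s)), (s, b) => set(s, that.set(get(s), b)))
}

val addressL: Lens[Person, Address] = Lens(_.address, (p, a) => p.copy(address = a))
val zipL:     Lens[Address, Int]    = Lens(_.zipCode, (a, z) => a.copy(zipCode = z))
val personZipL = addressL andThen zipL

val updatedRaj2 = personZipL.modify(_ + 1)(raj)   // same result as the nested copy above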

by missingfaktor at March 06, 2015 01:46 AM

CompsciOverflow

Can a Von Neumann CPU be pipelined?

Can you pipeline a pure von Neumann architecture based CPU, or do you need separate data and instruction caches for this? If you include separate instruction and data caches (then it isn't a von Neumann CPU anymore, it's a modified Harvard architecture), how do you unify the data of these caches so that they get stored in a single memory?

by gilianzz at March 06, 2015 01:44 AM

Planet Theory

On Flattenability of Graphs

Authors: Meera Sitharam, Joel Willoughby
Download: PDF
Abstract: We first show, for general $l_p$-norms, the equivalence between $d$-flattenability of $G$ and the convexity of $d$-dimensional, inherent Cayley configuration spaces for all subgraphs of $G$ (for the $l_2$ norm, one direction was proven before). As a corollary, it follows that both properties are minor-closed for general $l_p$ norms.

Using the natural notions of genericity and rigidity matrices introduced by Kitson for frameworks in $l_p$, we show that: $d$-flattenability is not a generic property of frameworks (in arbitrary dimension), and neither is the convexity of Cayley configuration spaces over specified non-edges of the $d$-dimensional framework; $G$ is $d$-flattenable if all its generic frameworks are; existence of one, however is equivalent to independence of the rows of its rigidity matrix -- a generic property of frameworks -- in $d$-dimensions; and rank of $G$ in the $d$-dimensional rigidity matroid is equal to the dimension of the projection of the $d$-dimensional stratum of the $l_p^p$ cone on the edges of $G$.

Finally, we give stronger results for specific norms for $d=2$: we show that 2-flattenable graphs for the $l_1$-norm (and $l_\infty$-norm) are a larger class than 2-flattenable graphs for Euclidean $l_2$-norm case; and prove further results towards characterizing 2-flattenability in the $l_1$-norm.

March 06, 2015 01:44 AM

Efficient Inverse Maintenanceand Faster Algorithms for Linear Programming

Authors: Yin Tat Lee, Aaron Sidford
Download: PDF
Abstract: In this paper, we consider the following inverse maintenance problem: given $A \in \mathbb{R}^{n\times d}$ and a number of rounds $r$, we receive a $n\times n$ diagonal matrix $D^{(k)}$ at round $k$ and we wish to maintain an efficient linear system solver for $A^{T}D^{(k)}A$ under the assumption $D^{(k)}$ does not change too rapidly. This inverse maintenance problem is the computational bottleneck in solving multiple optimization problems. We show how to solve this problem in amortized $\tilde{O}(nnz(A)+d^{2})$ time per round, improving upon previous running times for solving this problem.

Consequently, we obtain the fastest known running times for solving multiple problems including linear programming, computing a rounding of a polytope, and sampling a point in a polytope. In particular, given a feasible point in a linear program with $d$ variables, $n$ constraints, and constraint matrix $A\in\mathbb{R}^{n\times d}$, we show how to solve the linear program in time $\tilde{O}((nnz(A)+d^{2})\sqrt{d}\log(\epsilon^{-1}))$. We achieve our results through a novel combination of classic numerical techniques of low rank update, preconditioning, and fast matrix multiplication, as well as recent work on subspace embeddings and spectral sparsification that we hope will be of independent interest.

March 06, 2015 01:44 AM

Dimensionality Reduction of Massive Sparse Datasets Using Coresets

Authors: Dan Feldman, Mikhail Volkov, Daniela Rus
Download: PDF
Abstract: In this paper we present a practical solution with performance guarantees to the problem of dimensionality reduction for very large scale sparse matrices. We show applications of our approach to computing the low rank approximation (reduced SVD) of such matrices. Our solution uses coresets: a subset of $O(k/\epsilon^2)$ scaled rows from the $n\times d$ input matrix that approximates the sum of squared distances from its rows to every $k$-dimensional subspace in $\mathbb{R}^d$, up to a factor of $1\pm\epsilon$. An open theoretical problem has been whether we can compute such a coreset that is independent of the input matrix and also a weighted subset of its rows. We answer this question affirmatively. Our main technical result is a novel technique for deterministic coreset construction that is based on a reduction to the problem of $\ell_2$ approximation for item frequencies.

March 06, 2015 01:41 AM

GDC 2: Compression of large collections of genomes

Authors: Sebastian Deorowicz, Agnieszka Danek, Marcin Niemiec
Download: PDF
Abstract: The fall in prices of high-throughput genome sequencing is changing the landscape of modern genomics. A number of large scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One of the significant side effects of this change is the necessity of storing and transferring huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by the other existing compressors. Moreover, our algorithm is very fast as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows storing complete genomic collections at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, which can be compared to about 6.7 TB of uncompressed FASTA files. The source code is available at this http URL&project=gdc&subpage=about.

March 06, 2015 01:41 AM

Managing Relocation and Delay in Container Terminals with Flexible Service Policies

Authors: Setareh Borjian, Vahideh H. Manshadi, Cynthia Barnhart, Patrick Jaillet
Download: PDF
Abstract: We introduce a new model and mathematical formulation for planning crane moves in the storage yard of container terminals. Our objective is to develop a tool that captures customer centric elements, especially service time, and helps operators to manage costly relocation moves. Our model incorporates several practical details and provides port operators with expanded capabilities including planning repositioning moves in off-peak hours, controlling wait times of each customer as well as total service time, optimizing the number of relocations and wait time jointly, and optimizing simultaneously the container stacking and retrieval process. We also study a class of flexible service policies which allow for out-of-order retrieval. We show that under such flexible policies, we can decrease the number of relocations and retrieval delays without creating inequities.

March 06, 2015 01:41 AM

Efficient Farthest-Point Queries in Two-Terminal Series-Parallel Networks

Authors: Carsten Grimm
Download: PDF
Abstract: Consider the continuum of points along the edges of a network, i.e., a connected, undirected graph with positive edge weights. We measure the distance between these points in terms of the weighted shortest path distance, called the network distance. Within this metric space, we study farthest points and farthest distances. We introduce a data structure supporting queries for the farthest distance and the farthest points on two-terminal series-parallel networks. This data structure supports farthest-point queries in O(k + log n) time after O(n log p) construction time, where k is the number of farthest points, n is the size of the network, and p parallel operations are required to generate the network.

March 06, 2015 01:41 AM

Scalable Iterative Algorithm for Robust Subspace Clustering

Authors: Sanghyuk Chun, Yung-Kyun Noh, Jinwoo Shin
Download: PDF
Abstract: Subspace clustering (SC) is a popular method for dimensionality reduction of high-dimensional data, where it generalizes Principal Component Analysis (PCA). Recently, several methods have been proposed to enhance the robustness of PCA and SC, but most of them are computationally very expensive, in particular for high dimensional large-scale data. In this paper, we develop much faster iterative algorithms for SC, incorporating robustness using a {\em non-squared} $\ell_2$-norm objective. The known implementations for optimizing the objective would be costly due to the alternating optimization of two separate objectives: optimal cluster-membership assignment and robust subspace selection, while the substitution of one process by a faster surrogate can cause failure in convergence. To address the issue, we use a simplified procedure requiring efficient matrix-vector multiplications for the subspace update instead of solving an expensive eigenvector problem at each iteration, in addition to releasing nested robust PCA loops. We prove that the proposed algorithm monotonically converges to a local minimum with approximation guarantees, e.g., it achieves 2-approximation for the robust PCA objective. In our experiments, the proposed algorithm is shown to converge an order of magnitude faster than known algorithms optimizing the same objective, and to outperform prior subspace clustering methods in accuracy and running time on the MNIST dataset.

March 06, 2015 01:41 AM

How friends and non-determinism affect opinion dynamics

Authors: Arnab Bhattacharyya, Kirankumar Shiragur
Download: PDF
Abstract: The Hegselmann-Krause system (HK system for short) is one of the most popular models for the dynamics of opinion formation in multiagent systems. Agents are modeled as points in opinion space, and at every time step, each agent moves to the mass center of all the agents within unit distance. The rate of convergence of HK systems has been the subject of several recent works. In this work, we investigate two natural variations of the HK system and their effect on the dynamics. In the first variation, we only allow pairs of agents who are friends in an underlying social network to communicate with each other. In the second variation, agents may not move exactly to the mass center but somewhere close to it. The dynamics of both variants are qualitatively very different from that of the classical HK system. Nevertheless, we prove that both these systems converge in a polynomial number of non-trivial steps, regardless of the social network in the first variant and the noise patterns in the second variant.

March 06, 2015 01:40 AM

Resolution space for random 3-SAT

Authors: Patrick Bennett, Mike Molloy
Download: PDF
Abstract: Resolution is a rule of inference for boolean formulas in conjunctive normal form. Specifically, if the formula contains the clauses $(A \vee x)$ and $(B \vee \bar{x})$ then any satisfying assignment must also satisfy the clause $(A \vee B)$. Any unsatisfiable formula can be used to derive the empty clause using repeated applications of the resolution rule. Such a derivation is called a resolution refutation for the formula. The total resolution space of an unsatisfiable formula is the least amount of memory required to verify any resolution refutation for the formula. We show that, with high probability, for random instances of 3-SAT (chosen from a distribution where we know the formula is unsatisfiable w.h.p.), the total resolution space is quadratic, which is worst possible up to a constant. Bonacina, Galesi, and Thapen proved the same result for $k$-SAT when $k \ge 4$.

March 06, 2015 01:40 AM

/r/emacs

Lobsters

CompsciOverflow

Something wrong with this definition of factorial with structural recursion? [on hold]

On page 5 of The Algebra of Programming, the authors define structural recursion foldn (c, h) over the natural numbers:

f  0    = c
f (n+1) = h (f n)

They then went on to define factorial as follows:

fact = outr . foldn ((0, 1), f)
outr (m, n) = n
f (m, n) = (m + 1, (m + 1) * n)

This doesn't seem right: first of all, foldn ((0, 1), f) does not seem to comply with its definition; secondly, this fold will never terminate, will it?
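
For reference, here is a direct Scala transcription (just a sketch to check the types): h operates on pairs, so foldn ((0, 1), f) does fit the scheme, and the recursion is on the hidden natural-number argument, so it terminates.

def foldn[A](c: A, h: A => A)(n: Int): A =
  if (n == 0) c else h(foldn(c, h)(n - 1))

val f: ((Int, Int)) => (Int, Int) = { case (m, n) => (m + 1, (m + 1) * n) }
def outr(p: (Int, Int)): Int = p._2

def fact(n: Int): Int = outr(foldn((0, 1), f)(n))

// fact(5) == 120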

by qed at March 06, 2015 01:33 AM

arXiv Logic in Computer Science

Modelling the Semantic Web using a Type System. (arXiv:1503.01723v1 [cs.LO])

We present an approach for modeling the Semantic Web as a type system. By using a type system, we can use symbolic representation for representing linked data. Objects with only data properties and references to external resources are represented as terms in the type system. Triples are represented symbolically using type constructors as the predicates. In our type system, we allow users to add analytics that utilize machine learning or knowledge discovery to perform inductive reasoning over data. These analytics can be used by the inference engine when performing reasoning to answer a query. Furthermore, our type system defines a means to resolve semantic heterogeneity on-the-fly.

by Rod Moten at March 06, 2015 01:30 AM

Navigo: Interest Forwarding by Geolocations in Vehicular Named Data Networking. (arXiv:1503.01713v1 [cs.NI])

This paper proposes Navigo, a location based packet forwarding mechanism for vehicular Named Data Networking (NDN). Navigo takes a radically new approach to address the challenges of frequent connectivity disruptions and sudden network changes in a vehicle network. Instead of forwarding packets to a specific moving car, Navigo aims to fetch specific pieces of data from multiple potential carriers of the data. The design provides (1) a mechanism to bind NDN data names to the producers' geographic area(s); (2) an algorithm to guide Interests towards data producers using a specialized shortest path over the road topology; and (3) an adaptive discovery and selection mechanism that can identify the best data source across multiple geographic areas, as well as quickly react to changes in the V2X network.

by Giulio Grassi, Davide Pesavento, Giovanni Pau, Lixia Zhang, Serge Fdida at March 06, 2015 01:30 AM

Mapping-equivalence and oid-equivalence of single-function object-creating conjunctive queries. (arXiv:1503.01707v1 [cs.DB])

Conjunctive database queries have been extended with a mechanism for object creation to capture important applications such as data exchange, data integration, and ontology-based data access. Object creation generates new object identifiers in the result, that do not belong to the set of constants in the source database. The new object identifiers can be also seen as Skolem terms. Hence, object-creating conjunctive queries can also be regarded as restricted second-order tuple-generating dependencies (SO tgds), considered in the data exchange literature.

In this paper, we focus on the class of single-function object-creating conjunctive queries, or sifo CQs for short. We give a new characterization for oid-equivalence of sifo CQs that is simpler than the one given by Hull and Yoshikawa and places the problem in the complexity class NP. Our characterization is based on Cohen's equivalence notions for conjunctive queries with multiplicities. We also solve the logical entailment problem for sifo CQs, showing that also this problem belongs to NP. Results by Pichler et al. have shown that logical equivalence for more general classes of SO tgds is either undecidable or decidable with as yet unknown complexity upper bounds.

by Angela Bonifati, Werner Nutt, Riccardo Torlone, Jan Van den Bussche at March 06, 2015 01:30 AM

Wireless Sensor Network Virtualization: A Survey. (arXiv:1503.01676v1 [cs.NI])

Wireless Sensor Networks (WSNs) are the key components of the emerging Internet-of-Things (IoT) paradigm. They are now ubiquitous and used in a plurality of application domains. WSNs are still domain specific and usually deployed to support a specific application. However, as WSN nodes are becoming more and more powerful, it is getting more and more pertinent to research how multiple applications could share a very same WSN infrastructure. Virtualization is a technology that can potentially enable this sharing. This paper is a survey on WSN virtualization. It provides a comprehensive review of the state-of-the-art and an in-depth discussion of the research issues. We introduce the basics of WSN virtualization and motivate its pertinence with carefully selected scenarios. Existing works are presented in detail and critically evaluated using a set of requirements derived from the scenarios. The pertinent research projects are also reviewed. Several research issues are also discussed with hints on how they could be tackled.

by Imran Khan, Fatna Belqasmi, Roch Glitho, Noel Crespi, Monique Morrow, Paul Polako at March 06, 2015 01:30 AM

More on Decomposing Coverings by Octants. (arXiv:1503.01669v1 [math.CO])

In this note we improve our upper bound given earlier by showing that every 9-fold covering of a point set in the space by finitely many translates of an octant decomposes into two coverings, and our lower bound by a construction for a 4-fold covering that does not decompose into two coverings. We also prove that certain dynamic interval coloring problems are equivalent to the above question. The same bounds also hold for coverings of points in $\mathbb{R}^2$ by finitely many homothets or translates of a triangle.

by Balázs Keszegh, Dömötör Pálvölgyi at March 06, 2015 01:30 AM

Mosaics of Combinatorial Designs. (arXiv:1503.01643v1 [math.CO])

Looking at incidence matrices of $t$-$(v,k,\lambda)$ designs as $v \times b$ matrices with $2$ possible entries, each of which indicates incidences of a $t$-design, we introduce the notion of a $c$-mosaic of designs, having the same number of points and blocks, as a matrix with $c$ different entries, such that each entry defines incidences of a design. In fact, a $v \times b$ matrix is decomposed in $c$ incidence matrices of designs, each denoted by a different colour, hence this decomposition might be seen as a tiling of a matrix with incidence matrices of designs as well. These mosaics have applications in experiment design when considering a simultaneous run of several different experiments. We have constructed infinite series of examples of mosaics and state some probably non-trivial open problems.

by Oliver W. Gnilke, Marcus Greferath, Mario Osvin Pavčević at March 06, 2015 01:30 AM

Minimal Classes of Graphs of Unbounded Clique-width and Well-quasi-ordering. (arXiv:1503.01628v1 [math.CO])

Daligault, Rao and Thomassé proposed in 2010 a fascinating conjecture connecting two seemingly unrelated notions: clique-width and well-quasi-ordering. They asked if the clique-width of graphs in a hereditary class which is well-quasi-ordered under labelled induced subgraphs is bounded by a constant. This is equivalent to asking whether every hereditary class of unbounded clique-width has a labelled infinite antichain. We believe the answer to this question is positive and propose a stronger conjecture stating that every minimal hereditary class of graphs of unbounded clique-width has a canonical labelled infinite antichain. To date, only two hereditary classes are known to be minimal with respect to clique-width and each of them is known to contain a canonical antichain. In the present paper, we discover two more minimal hereditary classes of unbounded clique-width and show that both of them contain canonical antichains.

by A. Atminas, R. Brignall, V. Lozin, J. Stacho at March 06, 2015 01:30 AM

MSOL-Definability Equals Recognizability for Halin Graphs and Bounded Degree $k$-Outerplanar Graphs. (arXiv:1503.01604v1 [cs.LO])

One of the most famous algorithmic meta-theorems states that every graph property that can be defined by a sentence in counting monadic second order logic (CMSOL) can be checked in linear time for graphs of bounded treewidth, which is known as Courcelle's Theorem. These algorithms are constructed as finite state tree automata, and hence every CMSOL-definable graph property is recognizable. Courcelle also conjectured that the converse holds, i.e. every recognizable graph property is definable in CMSOL for graphs of bounded treewidth. We prove this conjecture for a number of special cases in a stronger form. That is, we show that each recognizable property is definable in MSOL, i.e. the counting operation is not needed in our expressions. We give proofs for Halin graphs, bounded degree $k$-outerplanar graphs and some related graph classes. We furthermore show that the conjecture holds for any graph class that admits tree decompositions that can be defined in MSOL, thus providing a useful tool for future proofs.

by Lars Jaffke, Hans L. Bodlaender at March 06, 2015 01:30 AM

Adaptively Secure Coin-Flipping, Revisited. (arXiv:1503.01588v1 [cs.CR])

The full-information model was introduced by Ben-Or and Linial in 1985 to study collective coin-flipping: the problem of generating a common bounded-bias bit in a network of $n$ players with $t=t(n)$ faults. They showed that the majority protocol can tolerate $t=O(\sqrt n)$ adaptive corruptions, and conjectured that this is optimal in the adaptive setting. Lichtenstein, Linial, and Saks proved that the conjecture holds for protocols in which each player sends a single bit. Their result has been the main progress on the conjecture in the last 30 years.

In this work we revisit this question and ask: what about protocols involving longer messages? Can increased communication allow for a larger fraction of faulty players?

We introduce a model of strong adaptive corruptions, where in each round, the adversary sees all messages sent by honest parties and, based on the message content, decides whether to corrupt a party (and intercept his message) or not. We prove that any one-round coin-flipping protocol, regardless of message length, is secure against at most $\tilde{O}(\sqrt n)$ strong adaptive corruptions. Thus, increased message length does not help in this setting.

We then shed light on the connection between adaptive and strongly adaptive adversaries, by proving that for any symmetric one-round coin-flipping protocol secure against $t$ adaptive corruptions, there is a symmetric one-round coin-flipping protocol secure against $t$ strongly adaptive corruptions. Returning to the standard adaptive model, we can now prove that any symmetric one-round protocol with arbitrarily long messages can tolerate at most $\tilde{O}(\sqrt n)$ adaptive corruptions.

At the heart of our results there is a new technique for converting any one-round secure protocol with arbitrarily long messages into one with messages of $polylog(n)$ bits. This technique may be of independent interest.

by Shafi Goldwasser, Yael Tauman Kalai, Sunoo Park at March 06, 2015 01:30 AM

Binary-Decision-Diagrams for Set Abstraction. (arXiv:1503.01547v1 [cs.LO])

Whether explicit or implicit, sets are a critical part of many pieces of software. As a result, it is necessary to develop abstractions of sets for the purposes of abstract interpretation, model checking, and deductive verification. However, the construction of effective abstractions for sets is challenging because they are a higher-order construct. It is necessary to reason about contents of sets as well as relationships between sets. This paper presents a new abstraction for sets that is based on binary decision diagrams. It is optimized for precisely and efficiently representing relations between sets while still providing limited support for content reasoning.

by Arlen Cox at March 06, 2015 01:30 AM

A Game-Theoretic Analysis of User Behaviors in Crowdsourced Wireless Community Networks. (arXiv:1503.01539v1 [cs.GT])

A crowdsourced wireless community network can effectively alleviate the limited coverage issue of Wi-Fi access points (APs), by encouraging individuals (users) to share their private residential Wi-Fi APs with each other. This paper presents the first study on the users' joint membership selection and network access problem in such a network. Specifically, we formulate the problem as a two-stage dynamic game: Stage I corresponds to a membership selection game, in which each user chooses his membership type; Stage II corresponds to a set of network access games, in each of which each user decides his WiFi connection time on the AP at his current location. We analyze the Subgame Perfect Equilibrium (SPE) systematically, and study whether and how best response dynamics can reach the equilibrium. Through numerical studies, we further explore how the equilibrium changes with the users' mobility patterns and network access evaluations. We show that a user with a more popular home location, a smaller travel time, or a smaller network access evaluation is more likely to choose a specific type of membership called Bill. We further demonstrate how the network operator can optimize its pricing and incentive mechanism based on the game equilibrium analysis in this work.

by Qian Ma, Lin Gao, Ya-Feng Liu, Jianwei Huang at March 06, 2015 01:30 AM

The Role of Data Cap in Optimal Two-part Network Pricing. (arXiv:1503.01514v1 [cs.NI])

Internet services are traditionally priced at flat rates; however, many Internet service providers (ISPs) have recently shifted towards two-part tariffs where a data cap is imposed to restrain data demand from heavy users and usage over the data cap is charged based on a per-unit fee. Although two-part tariff could generally increase the revenue for ISPs and has been supported by the FCC chairman, the role of data cap and its revenue-optimal and welfare-optimal pricing structures are not well understood. In this paper, we study the impact of data cap on the optimal two-part pricing schemes for congestion-prone service markets, e.g., broadband or cloud services. We model users' demand and preferences over pricing and congestion alternatives and derive the market share and congestion of service providers under a market equilibrium. Based on the equilibrium model, we characterize the two-part structure of the revenue-optimal and welfare-optimal pricing schemes. Our results reveal that 1) the data cap provides a mechanism for ISPs to transition from flat-rate to pay-as-you-go type of schemes, 2) with the growing data demand and network capacity, revenue-optimal pricing moves towards usage-based schemes with diminishing data caps, and 3) the structure of the welfare-optimal tariff comprises lower fees and data cap than those of the revenue-optimal counterpart, suggesting that regulators might want to promote usage-based pricing but regulate the per-unit fees. Our results could help providers design revenue-optimal pricing schemes and guide regulatory authorities to legislate desirable regulations.

by Xin Wang, Richard T.B. Ma, Yinlong Xu at March 06, 2015 01:30 AM

Random Serial Dictatorship versus Probabilistic Serial Rule: A Tale of Two Random Mechanisms. (arXiv:1503.01488v1 [cs.GT])

For assignment problems where agents, specifying ordinal preferences, are allocated indivisible objects, two widely studied randomized mechanisms are the Random Serial Dictatorship (RSD) and Probabilistic Serial Rule (PS). These two mechanisms both have desirable economic and computational properties, but the outcomes they induce can be incomparable in many instances, thus creating challenges in deciding which mechanism to adopt in practice. In this paper we first look at the space of lexicographic preferences and show that, as opposed to the general preference domain, RSD satisfies envyfreeness. Moreover, we show that although under lexicographic preferences PS is strategyproof when the number of objects is less than or equal agents, it is strictly manipulable when there are more objects than agents. In the space of general preferences, we provide empirical results on the (in)comparability of RSD and PS, analyze economic properties, and provide further insights on the applicability of each mechanism in different application domains.

by Hadi Hosseini, Kate Larson, Robin Cohen at March 06, 2015 01:30 AM

StackOverflow

td-agent is not working for apache logs

I need some help. I'm using the newest version of td-agent on my Ubuntu 12.04 machine to parse Apache logs into MongoDB. In the config, if I put "format none", it creates a Mongo document and pushes everything into the message key, but when I say "format apache" or "format apache2" or "format /^***********$/" (which is the Apache regular expression given by td-agent itself), then it simply says the pattern did not match.

I checked the permissions and other possibilities, but didn't find a solution for this. Please help me if you were able to run your logging project using td-agent (Fluentd).

Or should I switch to Logstash to accomplish this project?

the /var/log/td-agent/td-agent.log warnings are following.

2015-02-09 18:41:39 +0530 [warn]: pattern not match: "192.168.100.11:80 192.168.100.11 - - [09/Feb/2015:18:41:39 +0530] \"POST /get_details HTTP/1.1\" 200 580 \"http://192.168.100.11/login\" \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0\""

2015-02-09 18:41:39 +0530 [warn]: pattern not match: "192.168.100.11:80 192.168.100.11 - - [09/Feb/2015:18:41:39 +0530] \"POST /get_user HTTP/1.1\" 200 365 \"http://192.168.100.11/login\" \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0\""

Thanks, Williams.

by Williams at March 06, 2015 01:24 AM

Planet Theory

TR15-030 | ${\mathrm{AC}^{0} \circ \mathrm{MOD}_2}$ lower bounds for the Boolean Inner Product | Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie

$\mathrm{AC}^{0} \circ \mathrm{MOD}_2$ circuits are $\mathrm{AC}^{0}$ circuits augmented with a layer of parity gates just above the input layer. We study the $\mathrm{AC}^{0} \circ \mathrm{MOD}_2$ circuit lower bound for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against $\mathrm{AC}^{0} \circ \mathrm{MOD}_2$ of depth four or greater. Indeed, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an $\tilde{\Omega}(n^2)$ lower bound for the special case of depth-4 $\mathrm{AC}^{0} \circ \mathrm{MOD}_2$. Our proof of the depth-4 lower bound employs a new "moment-matching" inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions' values at $0$, given that their first $d$ moments match.

March 06, 2015 01:12 AM

/r/compsci

Degree or Certificates?

I am a 2nd year student at a university (I don't want to call them out for legal reasons). They threaten 50% of their population with expulsion, and I am in that 50%: if I miss one more chapel attendance I will be expelled. So I just decided to leave, but what I am wondering is: should I go to another university for my degree, study online with a university for my degree, or study online (with websites like Pluralsight) for certificates?

One note: I am Jamaican, so I doubt the universities here hold much value in other countries. I also checked UOP (University of Phoenix) but their tuition is too much. Single parent, so I am working with about US$2k per semester.

Any suggestion would be great. TY

submitted by Waquar
[link] [2 comments]

March 06, 2015 01:09 AM

Fefe

Data protection reform à la Merkel: banks, insurers ...

Data protection reform à la Merkel:
Banks, insurers, and the advertising industry are to be allowed to use customer data for commercial purposes. This is provided for by a loophole that the German federal government wants to push through in Brussels.
Well, why not? Does nobody here think of all the distressed banks?!

March 06, 2015 01:01 AM

A quick announcement from Die PARTEI, Hesse chapter: Yes, ...

A quick announcement from Die PARTEI, Hesse chapter:
Yes, dear #Edathy, at the SPD too you have to choose between children and a career.

March 06, 2015 01:01 AM

/r/compsci

StackOverflow

Ansible How to replay notifications

Currently I am switching from puppet to Ansible and I am a bit confused with some concepts or at least how ansible works.

Some info on the setup:

I am using the examples from Ansible Best Practices and have structured my project similar with several roles (playbooks) and so on.

I am using Vagrant for provisioning and the box is Saucy64 VBox.

Where the Confusion comes:

When I provision and run ansible, the tasks start to execute, and then the stack of notifications runs.

Example:

Last task:

TASK: [mysql | delete anonymous MySQL server user for localhost] ************** 
<127.0.0.1> REMOTE_MODULE mysql_user user='' state=absent 
changed: [default] => {"changed": true, "item": "", "user": ""}

Then first notification:

NOTIFIED: [timezone | update tzdata] ****************************************** 
<127.0.0.1> REMOTE_MODULE command /usr/sbin/dpkg-reconfigure --frontend noninteractive tzdata
changed: [default] => {"changed": true, "cmd": ["/usr/sbin/dpkg-reconfigure", "--frontend", "noninteractive", "tzdata"], "delta": "0:00:00.224081", "end": "2014-02-03 22:34:48.508961", "item": "", "rc": 0, "start": "2014-02-03 22:34:48.284880", "stderr": "\nCurrent default time zone: 'Europe/Amsterdam'\nLocal time is now:      Mon Feb  3 22:34:48 CET 2014.\nUniversal Time is now:  Mon Feb  3 21:34:48 UTC 2014.", "stdout": ""}

Now this is all fine. As the roles increase more and more notifications stuck up.

Now here comes the problem.

When a notification fails, the provisioning stops as usual. But then the notification stack is empty! This means that all notifications that were after the faulty one will not be executed!

If that is so then if you changed a vhosts setting for apache and had a notification for the apache service to reload then this would get lost.

Let's give an example (pseudo lang):

- name: Install Apache Modules
  notify: Restart Apache

- name: Enable Vhosts
  notify: Reload Apache

- name: Install PHP
  command: GGGGGG # throws an error

When the above executes:

  1. Apache modules are installed
  2. Vhosts are enables
  3. PHP tries to istall and fails
  4. Script exits
  5. (Where are the notifications?)

Now at this point all seems logical, but again Ansible tries to be clever (no!*): it stacks notifications, so the reload and the restart of apache would be collapsed into a single restart of apache run at the end of provisioning. That means that all notifications will fail!!!

Now, up to here, for some people this is fine as well. They will say: hey, just re-run the provisioning and the notifications will fire up, so apache will finally be reloaded and the site will be up again. This is not the case.

On the second run of the script, after the code for installing php is corrected, the notifications will not run, by design. Why?

This is why: Ansible will have the tasks that executed successfully, marked as "Done/Green" thus not registering any notifications for these tasks. The provisioning will be successful and in order to trigger the notification and thus the apache restart you can do one of the following:

  1. Run a direct command to the server via ansible or ssh
  2. Edit the script to trigger the task
  3. Add a separate task for that
  4. Destroy instance of box and reprovision

This is quite frustrating because it requires a total cleanup of the box. Or do I not understand something correctly about Ansible?

Is there another way to 'reclaim'/replay/force the notifications to execute?

  • Clever would be either to mark the task as incomplete and then restart the notifications or keep a separate queue with the notifications as tasks of their own.*

by Jimmy Kane at March 06, 2015 12:24 AM

Scala instantiation of a class with curly braces

I am starting with Scala and with ScalaFX. I understand most of the code, but I don't understand this code used in the ScalaFX examples,

where an anonymous class is instantiated followed by curly braces. How does this work?

object ScalaFXHelloWorld extends JFXApp {

  stage = new PrimaryStage {

    title = "ScalaFX Hello World"

    scene = new Scene {

      fill = Black

      content = new HBox {

        padding = Insets(20)

        children = Seq(
          new Text {
            text = "Hello"
            style = "-fx-font-size: 48pt"
            fill = new LinearGradient(
              endX = 0,
              stops = Stops(PaleGreen, SeaGreen)
            )
          },
          new Text {
            text = "World!!!"
            style = "-fx-font-size: 48pt"
            fill = new LinearGradient(
              endX = 0,
              stops = Stops(Cyan, DodgerBlue)
            )
            effect = new DropShadow {
              color = DodgerBlue
              radius = 25
              spread = 0.25

            }
          }
        )

      }

    }

  }

}

The part I don't understand is why the creation of an anonymous class is followed by curly braces (with some more declarations); Scene is not a trait whose abstract parts are being filled in. Also, fill and content are functions, not variables, and Black, which is assigned to fill for instance, is a val, meaning that this line

fill = Black

is calling a function fill and assigning a val to it (which doesn't make sense to me). This is fill's definition:

def fill: ObjectProperty[jfxsp.Paint] = delegate.fillProperty

and this is Black

val Black = new Color(jfxsp.Color.BLACK)

How does this instantiation of a new object with curly braces work? Please help, I want to understand. Is this because ScalaFX is wrapping JavaFX and something special is going on here? Thank you guys.

Update:

Well, now I know that it is calling a setter via syntactic sugar; however, I checked that setter and I don't understand what is going on there.

Check it out:

def fill: ObjectProperty[jfxsp.Paint] = delegate.fillProperty
  def fill_=(v: Paint) {
    fill() = v
}

how come the setter is calling the getter to update the value?

delegate.fillProperty

is a function that returns a value
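
For what it's worth, here is a tiny self-contained sketch (independent of ScalaFX, all names made up) of the two mechanisms at work: the curly braces create an anonymous subclass whose body runs as initializer code, and inside that body x = v is syntactic sugar for the setter x_=(v), which writes through the property returned by the getter using update:

class Property[A](private var value: A) {
  def apply(): A = value
  def update(v: A): Unit = { value = v }    // enables `prop() = v`, like `fill() = v` above
}

class Shape {
  private val fillProp = new Property[String]("white")
  def fill: Property[String] = fillProp         // "getter" returns the property object
  def fill_=(v: String): Unit = { fill() = v }  // "setter" writes through that property
}

val s = new Shape {
  fill = "black"   // runs in the anonymous subclass's constructor; calls fill_=("black")
}
// s.fill() == "black"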

by legramira at March 06, 2015 12:09 AM

Getting the same instance of RandomAccessFile in clojure

This piece of code runs on the server; it detects changes to a file and sends them to the client. It works the first time, but after that the file length is not updated even though I changed the file and saved it. I guess Clojure's immutability is the reason here. How can I make this work?

(def clients (atom {}))
(def rfiles (atom {}))
(def file-pointers (atom {}))

(defn get-rfile [filename]
  (let [rdr ((keyword filename) @rfiles)]
    (if rdr
      rdr
      (let [rfile (RandomAccessFile. filename "rw")]
        (swap! rfiles assoc (keyword filename) rfile)
        rfile))))

(defn send-changes [changes]
  (go (while true
        (let [[op filename] (<! changes)
              rfile (get-rfile filename)
              ignore (println (.. rfile getChannel size))
              prev ((keyword filename) @file-pointers)
              start (if prev prev 0)
              end (.length rfile) ;; file length is not getting updated even if I changed the file externally
              array (byte-array (- end start))]
          (do
            (println (str "str" start " end" end))
            (.seek rfile start)
            (.readFully rfile array)
            (swap! file-pointers assoc (keyword filename) end)
            (doseq [client @clients]
              (send! (key client) (json/write-str
                                    {:changes  (apply str (map char array))
                                     :fileName filename}))
              false))))))

by Naresh at March 06, 2015 12:03 AM

DragonFly BSD Digest

BSDNow 079: Just Add QEMU

The newest BSDNow episode talks with Sean Bruno about poudriere and QEMU.  He’s using those tools on FreeBSD, but poudriere is useful for building dports on DragonFly, too.  The usual news collection is there, too.

by Justin Sherrill at March 06, 2015 12:01 AM

HN Daily

March 05, 2015

CompsciOverflow

Hardness of a constrained quadratic maximization

Consider the following quadratic maximization: \begin{align} \max_{\mathbf{x} \in \mathcal{X}} &\quad\mathbf{x}^{T}\mathbf{A}\mathbf{x} \end{align} with \begin{align} \mathcal{X} = \lbrace \mathbf{x} \in \mathbb{R}^{n} :~ \|\mathbf{x}\|_{2}=1, \|\mathbf{x}\|_{0}\le k \rbrace, \end{align} where $\mathbf{A}$ is a positive semidefinite matrix and $k \le n$ is a sparsity parameter. This problem is NP-hard, by a reduction from the max-clique problem.

I am interested in a similar problem obtained by imposing additional structure on $\mathcal{X}$. In particular, assume that the $n$ variables in $\mathbf{x}$ are partitioned into $k$ disjoint groups. We restrict the feasible set to unit-length vectors $\mathbf{x}$ with one active variable per group. That is, $\mathcal{X}$ contains again $k$-sparse vectors, but the support cannot be arbitrary; it contains (at most) one nonzero entry for each of the $k$ groups.

Note that the feasible set in the modified problem is a subset of the previous maximization, but the number of feasible supports can still be exponential in the number of variables $n$ (for appropriately chosen $k$).

I suspect that the modified problem is also NP-hard. Any ideas on how to show that (or disprove)? Feel free to share your intuition.

by m.a. at March 05, 2015 11:52 PM

UnixOverflow

How to apply updates on OpenBSD, NetBSD, and FreeBSD?

I've been using OpenBSD for quite a while now. All I do, however, is go from one release to the next, always just doing an upgrade. I configured the system so it works as my router and firewall, and it works quite well like that. But I never update packages; I just move on to the next release.

Coming from the Linux world, I'm used to applying updates a few times a week; but how do I do that on *BSD? Or is this not part of the *BSD philosophy?

by polemon at March 05, 2015 11:39 PM

/r/netsec

UnixOverflow

Random MAC-address at every boot on different OS [duplicate]

This question already has an answer here:

Using OpenBSD, how can we generate a new MAC address at every boot? What would the script look like? Where do we need to put it?

Random mac address at startup covers Linux.

by thequestionthequestion at March 05, 2015 11:29 PM

How to only allow a group for network access?

Using OpenBSD's pf.

Question: How can we modify the firewall of OpenBSD to allow ONLY a given group for network access? If somebody isn't in that group, it shouldn't have layer 3 or layer 2 network access.

by user90825 at March 05, 2015 11:10 PM

StackOverflow

How do I pass a callback function to an akka actor constructor when using Context.actorOf?

I want to pass a callback to an akka actor in its constructor:

object FSMActorWithCallback {
  type Tracer = (Int, NodeState, NodeData, ActorRef, Any) => Unit
}

class FSMActorWithCallback(tracerCallback: FSMActorWithCallback.Tracer) extends FSMAwesomeActor {
  // method is called each FSM Event so we can record current state and next message 
  override def trace(state: NodeState, data: NodeData, sender: ActorRef, msg: Any) : Unit = {
    // different tracing callback for different test rigs such as unit tests or integration tests
    tracerCallback(nodeUniqueId, state, data, sender, msg)
  }
}

This would let me define a raw actor using new, but I need to use the actorOf factory method to have the actor correctly hooked into the system:

class Supervisor extends Actor {

  def outputStateTrace(state: NodeState, data: NodeData, sender: ActorRef, msg: Any): Unit = { 
    /*actually make a binary log for analysis of complex failures*/
  }

  // COMPILE ERROR "follow this method with _ if you want to treat it as a partially applied function"
  var child = context.actorOf(Props(classOf[FSMActorWithCallback], outputStateTrace))

  // seems to work fine but not what i need
  val childRaw = new FSMActorWithCallback(tracer) 

}

The actual construction of the actor needs to be by the factory method shown but I cannot figure out how to pass a callback through the factory method.
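
One pattern that should work here (a sketch, using the names from the question) is a props factory on the companion object: because the parameter has an explicit function type, the method reference outputStateTrace eta-expands automatically (which is what the compile error is about), and the Props is still passed to context.actorOf, so the actor is hooked into the system normally.

object FSMActorWithCallback {
  type Tracer = (Int, NodeState, NodeData, ActorRef, Any) => Unit

  // Keeping the factory in the companion avoids building Props inline inside another
  // actor; note the callback value still closes over whatever object defines it,
  // which is inherent to passing a callback.
  def props(tracer: Tracer): Props = Props(new FSMActorWithCallback(tracer))
}

// in the Supervisor:
val child = context.actorOf(FSMActorWithCallback.props(outputStateTrace))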

by simbo1905 at March 05, 2015 11:05 PM

Should clojure core.async channels be closed when not used anymore?

The close method (at least in the Java world) is something that you, as a good citizen, have to call when you are done using the related resource. Somehow I automatically started to apply the same to the close! function from the core.async library. These channels are not tied to any IO as far as I understand, and therefore I am not sure whether it is necessary to call close!. Is it OK to leave (local) channels for garbage collection without closing them?

by Viktor K. at March 05, 2015 11:03 PM

CompsciOverflow

Complexity bound on $RP^{RP}$

This is a homework question, I'm wondering if anyone could help. Recall $RP$ is the set of languages recognized by randomized algorithms in polynomial time. The question is given an algorithm in $RP$ allowed to consult an oracle in $RP$, prove the "lowest complexity bound" for a set recognized by this algorithm.

I don't think this is a very good question; it's not clear exactly what is meant by "lowest complexity bound". I suppose it means: any set in this class ($RP^{RP}$) must fall in which complexity class? That is, find the lowest such class and prove it.

Any ideas?

by Kuhndog at March 05, 2015 11:02 PM

DataTau

/r/compsci

Which algorithm is faster?

Given an unsorted array of size n (up to one million elements), I want to find the middle value of the sorted array.

Here's some pseudocode for a modified quicksort that stops once the pivot becomes the middle element:

sort(A[], p, r) {
  if (p < r) {
    q = Partition(A, p, r);
    if (q == (A.length - 1) / 2)
      ;   // pivot landed on the middle index: stop
    else {
      sort(A, p, q - 1);
      sort(A, q + 1, r);
    }
  }
}

Would it be faster to use quickselect?
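
For comparison, here is a minimal quickselect sketch in Scala (it works on a copy of the array and picks pivots at random): unlike the modified quicksort above, it recurses into only the side that contains the target index, giving expected O(n) time instead of O(n log n).

import scala.util.Random

// Returns the element that would sit at index k if the array were sorted.
def quickselect(input: Array[Int], k: Int): Int = {
  val a = input.clone()
  val rnd = new Random
  def swap(i: Int, j: Int): Unit = { val t = a(i); a(i) = a(j); a(j) = t }

  // Lomuto partition around a random pivot; returns the pivot's final index.
  def partition(lo: Int, hi: Int): Int = {
    swap(lo + rnd.nextInt(hi - lo + 1), hi)
    val p = a(hi)
    var store = lo
    var i = lo
    while (i < hi) {
      if (a(i) < p) { swap(i, store); store += 1 }
      i += 1
    }
    swap(store, hi)
    store
  }

  @annotation.tailrec
  def go(lo: Int, hi: Int): Int =
    if (lo == hi) a(lo)
    else {
      val q = partition(lo, hi)
      if (q == k) a(q)
      else if (k < q) go(lo, q - 1)
      else go(q + 1, hi)
    }

  go(0, a.length - 1)
}

// quickselect(arr, (arr.length - 1) / 2) gives the middle value of the sorted order.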

submitted by olverine
[link] [4 comments]

March 05, 2015 10:56 PM

StackOverflow

Android-NDK: incompatible target while linking ZeroMQ static library into a shared library

I have successfully compiled multiple static libraries with the ndk toolchain and linked them into my own project. I have to use many cpp files and they need protocol buffers and a ZeroMQ, as a library, to compile successfully. Linking against protocol buffers works great, however, when I link against the ZeroMQ I get the following error:

C:/Users/x/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/windows-  x86_64/bin/../lib/gcc/arm-linux-androideabi/4.9/../../../../arm-linux-   androideabi/bin/ld.exe: error: ./zmq/lib/libzmq.a(libzmq
_la-zmq.o): incompatible target
collect2.exe: error: ld returned 1 exit status
make.exe: *** [C:/Users/x/Workspace/y/app/src/main//obj/local/armeabi-  v7a/libZ.so] Error 1

I have replaced personal information with x, y, z for a clear reason.

I'm using Windows 8.1 with Android Studio 1.1 RC1 and NDK10d. I compiled the libraries on a Ubuntu and a Debian system (tried different ones). Both use the same arm toolchain.

To compile ZeroMQ I followed the steps from the official page. I tried zeromq3-x and zeromq4-x. I tried the mentioned ndk8 and the new ndk10d.

My Application.mk:

APP_STL := gnustl_static     #tried: c++_static/shared stlport_static/shared
APP_PLATFORM := android-21
APP_USE_CPP0X := true  #tried to omit
APP_CXXFLAGS := -std=gnu++11
APP_CPPFLAGS := -frtti -fexceptions --std=c++11
APP_ABI := armeabi-v7a             #tried different like armeabi, all, x86 - obviously only arm should work
NDK_TOOLCHAIN_VERSION := 4.9

Android.mk without important files to compile because it will crash on the first need of zmq:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_MODULE := ../jni/protobuf
LOCAL_SRC_FILES := ../jni/protobuf/libprotobuf.a
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/include
LOCAL_EXPORT_C_INCLUDES :=      C:\Users\x\Android\Proto\.lib arm 5  2.6\protobuf-2.6.0\build\include
LOCAL_EXPORT_C_INCLUDES :=    C:\Users\x\Android\Proto\.lib arm 5 2.6\protobuf- 2.6.0\build\include\google
LOCAL_EXPORT_C_INCLUDES := C:\Users\x\Android\Proto\.lib arm 5 2.6\protobuf-   2.6.0\build\include\google\proto
LOCAL_C_INCLUDES := C:\Users\x\Android\Proto\.lib arm 5 2.6\protobuf-  2.6.0\build\include
LOCAL_C_INCLUDES := C:\Users\x\Android\Proto\.lib arm 5 2.6\protobuf-2.6.0\build\include\google
LOCAL_C_INCLUDES := C:\Users\x\Android\Proto\.lib arm 5 2.6\protobuf-2.6.0\build\include\google\protobuf

include $(PREBUILT_STATIC_LIBRARY)

LOCAL_MODULE := zmq
LOCAL_SRC_FILES := zmq/lib/libzmq.a  
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/include
LOCAL_EXPORT_C_INCLUDES := C:\Users\x\Android\ZMQ\ARM-FINAL\include
LOCAL_EXPORT_C_INCLUDES := zmq/include
LOCAL_C_INCLUDES := C:\Users\x\Android\ZMQ\ARM-FINAL\include
LOCAL_C_INCLUDES := zmq/include

include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := Z
LOCAL_CFLAGS := -I/zmq -std=c++11
LOCAL_CPPFLAGS := -I/zmq -std=c++11
LOCAL_CPP_FEATURES += exceptions
LOCAL_LDLIBS := -lGLESv1_CM -ldl -llog
LOCAL_CPP_EXTENSION := .cxx .cpp .cc .h
LOCAL_DISABLE_FORMAT_STRING_CHECKS := true

LOCAL_SRC_FILES := \
    ../jni/protogen/applications.pb.cc \ # this will work
    common/bytearray.cpp \ # this will fail


LOCAL_ALLOW_UNDEFINED_SYMBOLS := true

 LOCAL_C_INCLUDES += C:\Users\x\Android-MasterUI\app\src\main\jni
 LOCAL_C_INCLUDES += ../jni/protogen
 LOCAL_C_INCLUDES += common
 LOCAL_C_INCLUDES +=   C:\Users\x\Android\ZMQ\ARM-FINAL\include
 LOCAL_C_INCLUDES += C:\Users\x\Workspace\\app\src\main\jni\protobuf
 LOCAL_C_INCLUDES += C:\Users\x\Workspace\Android-    MasterUI\app\src\main\jni\protobuf\include
 LOCAL_C_INCLUDES += C:\Users\x\Android-  MasterUI\app\src\main\jni\protobuf\include\google
 LOCAL_C_INCLUDES +=   C:\Users\x\Android\Proto\.lib arm 5   2.6\protobuf-2.6.0\build\include
 LOCAL_C_INCLUDES += C:\Users\x\Android\Proto\.lib arm 5 2.6\protobuf- 2.6.0\build\include\google
 LOCAL_C_INCLUDES += C:\Users\x\Android\Proto\.lib arm 5 2.6\protobuf-2.6.0\build\include\google\protobuf

 LOCAL_STATIC_LIBRARIES := zmq protobuf

 include $(BUILD_SHARED_LIBRARY)

The objdump from the static library looks like this

libzmq_la-address.o:     file format elf64-x86-64
architecture: i386:x86-64, flags 0x00000011:
HAS_RELOC, HAS_SYMS
start address 0x0000000000000000

I will omit the others because they are all the same. There was a better way to dump all the information with more details about the architecture, but I cannot find it anymore.

If you know a better way you may tell me and I will add more information.

Any idea is welcome and appreciated...

by Matthias at March 05, 2015 10:56 PM

Can't install PFSense Live CD

The install of PFSense stops at: atkbd0: irq 1 on atkbdc0

You can see the log at: http://image-upload.de/image/XtJVC0/6ab2918de0.jpg

What can I do?

by user3792694 at March 05, 2015 10:49 PM

Create Eclipse project using ansible

I am researching ways to automate the creation of a Linux development environment. I use packer to build a VirtualBox image, and ansible to provision the machine.

My question is this: is it possible to use ansible to generate an Eclipse workspace and an arbitrary number of projects in that workspace based on a list of github repos? Put another way, I'd like to give ansible a list of github repos, and on provision have it clone them (already doable and easy to understand) and set up Eclipse projects such that when a developer sits down to my newly built and provisioned VM, he/she can open Eclipse and start working on any of the repos almost instantly?

I've seen numerous examples of how to get Eclipse installed using ansible, and I've already got that working properly. I just can't figure out how to go the extra mile and get the dev environment set up automatically.

by thaavik at March 05, 2015 10:37 PM

Lobsters

TheoryOverflow

Polynomial time reduction from 3D Matching to SAT [on hold]

Is there an algorithm to reduce the 3D Matching problem to the Satisfiability problem (SAT) in polynomial time? Specifically, how can an instance of 3D Matching be represented as an instance of SAT so that the SAT instance is satisfiable exactly when the matching exists?

by MD Abid Hasan at March 05, 2015 10:28 PM

StackOverflow

How do I make sbt output stack traces of TestFailedExceptions?

How do I make sbt output the stack traces of TestFailedExceptions, as thrown by ScalaTest, instead of suppressing them, which seems to be the default?

We use the 'F' option to ScalaTest/sbt, but it doesn't affect TestFailedException apparently.
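For reference, a minimal sketch of the usual way this flag is wired up, assuming sbt 0.13 and ScalaTest's reporter arguments ("-oF" is the "full stack traces" flag):

// build.sbt: pass the ScalaTest reporter flag for full stack traces
testOptions in Test += Tests.Argument(TestFrameworks.ScalaTest, "-oF")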

by aknuds1 at March 05, 2015 10:25 PM

/r/compsci

Lobsters

StackOverflow

Is there an equivalent to Python's islice in Scala?

In Python I can easily slice or truncate an infinite sequence with itertools' islice function:

list(islice(count(), 3, 5)) -> [3,4]

Is there an equivalent syntax in Scala that can slice or truncate an infinite stream or iterator?

Thanks!
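For what it's worth, a minimal sketch of the closest built-in equivalent: slice on Iterator (and on Stream) only consumes the prefix it needs, so it can be applied to an infinite sequence:

// analogous to list(islice(count(), 3, 5)) in Python
Iterator.from(0).slice(3, 5).toList   // List(3, 4)
Stream.from(0).slice(3, 5).toList     // List(3, 4), works on a Stream too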

by Salim Fadhley at March 05, 2015 09:57 PM

QuantOverflow

Difference between google finance and yahoo finance?

I am wondering about the huge differences between the data providers Google Finance and Yahoo Finance. I am interested in the monthly data for adidas listed on Xetra: in Google, ETR:ADS, and in Yahoo Finance, ADS.DE - Xetra.

There is a huge difference in the data, e.g. consider the 02.06.2008:

google finance:

Date             Open    High   Low     Close   Volume 
Jun 2, 2008     45.49   45.60   44.93   45.05   1,037,644 

yahoo finance (german):

Datum   Eröffnungskurs  Max.    Tief    Schluss Ø Volumen   Adj. Schluss* 
2. Jun 2008 45,49   46,48   39,39   40,11   1.603.100   38,34

As you can see, the open ("Eröffnungskurs") is the same, but all the other values are different. So currency does not seem to be the problem. Why are these values so different?

Also, why is there a gap in the Yahoo Finance data in August 2008? I don't get a value for 01.08.2008 but for 18 Aug 2008 instead. Since I am using the Yahoo Finance data I have to fill in this gap; what value or method would be appropriate?

by Ivanov at March 05, 2015 09:57 PM

CompsciOverflow

Why is Computer Architecture in $2^n$ bits?

I have always wondered why computer architecture is in $2^n$ bits. We have 8 / 16 / 32 / 64-bit microprocessors, and for that matter other parts of the computer are also sized in powers of 2 bits.

The only logic I could come up with is that the computer design process usually starts from a lower number of bits. For example: say I want to design a full adder to add 16-bit numbers. I would first design a digital circuit to add 2 bits (one from number A and the other from number B), and then replicate this circuit 16 times. This gives me a 16-bit full adder.

Is my reasoning correct? Is there some other reason as well?

by SimpleGuy at March 05, 2015 09:49 PM

Planet Clojure

Software Developer at Democracy Works, Inc. (Full-time)

At Democracy Works, we believe voting should fit the way we live. To that end, we build technology for both voters and election administrators that simplifies the process and ensures that no voter should ever have to miss an election.

TurboVote, our first service, tracks an individual voter’s elections. We provide all the materials and information they need to get registered, stay registered, and cast a ballot in every election, from municipal to national—and we’ll even mail forms with an addressed, stamped envelope for the local election office. Ballot Scout, our newest product, helps local election administrators track absentee ballots as they travel through the mail, providing transparency in the vote-by-mail process and making it easier to follow up when things go awry. We also work with the Pew Charitable Trusts and Google to ensure that the Voting Information Project’s data is up-to-date and ready from one election to the next.

These products are the work of our six-person developer team. Languages we use include Ruby, Clojure, ClojureScript, JavaScript, and Python. Our web projects are built with Rails, Pedestal, and Compojure. Our front ends use jQuery, React, and Om. We use SCSS to write our styles. Our databases are MySQL, PostgreSQL, and Datomic. We deploy Docker containers to EC2 servers. Some of our interprocess communication happens via Redis, some of it via Amazon’s SQS, some of it over plain HTTP. And all our code lives on GitHub. You should have experience with some of this and be ready to get experience with the rest.

We pair program, collaborate with product managers, and make sure our efforts deliver value to voters. Our flat structure means that you’ll get the opportunity to lead on a variety of projects on a rotating basis.

In the next year we’re looking to rebuild the frontend for turbovote.org, split the Rails backend into independent Clojure services, improve our Ballot Scout webapp and services, and automate the quality-assurance process for Voting Information Project data. If any or all of these projects sound interesting to you, then you’re interesting to us.

Salary is competitive and commensurate with experience. Democracy Works also offers a competitive benefits package including health insurance and vacation. We’re based in Brooklyn, NY and Denver, CO, and we hope you’ll want to work from one of these offices–though we’ll consider remote arrangements for the perfect candidate.

We are inspired by the idea of building an awesome and truly diverse team, so we strongly encourage applicants of all races, colors, political party associations, religions (or lack thereof), national origins, sexual orientations, genders, sexes, ages, abilities, and branches of military service. Feel free to contact work [at] turbovote [dot] org if you have any questions about our commitment to diversity or about general hiring practices.

Get information on how to apply for this position.

by FunctionalJobs.com at March 05, 2015 09:45 PM

/r/compsci

Quicksort Partition function?

Are there other recursive algorithms that use the partition function from quicksort? Outside of sorting, where can the partition function be applied?
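One classic non-sorting use is quickselect, which finds the k-th smallest element in expected linear time. A minimal sketch in Scala, using the collections' partition method rather than quicksort's in-place partition:

def quickselect(xs: Vector[Int], k: Int): Int = {
  val pivot = xs(xs.length / 2)
  val (smaller, rest) = xs.partition(_ < pivot)       // elements below the pivot
  val (equal, larger) = rest.partition(_ == pivot)    // pivot duplicates and everything above
  if (k < smaller.length) quickselect(smaller, k)
  else if (k < smaller.length + equal.length) pivot
  else quickselect(larger, k - smaller.length - equal.length)
}

// quickselect(Vector(7, 2, 9, 4, 1), 2) == 4   (the third smallest, zero-indexed)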

submitted by olverine
[link] [8 comments]

March 05, 2015 09:42 PM

/r/clojure

StackOverflow

How do I conditionally run a task if another task was "created"?

I have a playbook that creates an AWS ELB and AWS ASG.

I only want to create the ASG if the ELB task CREATES (not updates) the ELB.

This is very similar to this question: "Ansible - only run a series of tasks if a precondition is met", whose solution is to register a variable with the result and then use a "when" statement. But in this case, the result of the ELB task is the same on creation AND update.

I don't want to create the ASG when the ELB is updated... I only want to create the ASG if the ELB was created, so I can't differentiate a create vs an update.

Thinking out loud, I could probably write a script to answer this question (ie query AWS to see when the ELB was created, for instance), but I'm hoping I could do it via Ansible.

Any ideas?

by grayaii at March 05, 2015 09:38 PM

/r/netsec

StackOverflow

Communicating with a classic TCP socket

I'm writing my first application with NetMQ (ZeroMQ implementation for .NET).

I also need to listen to information sent from a client using a traditional TCP socket (a.k.a a non-0MQ socket).

I've seen references to the availability of this socket type in the official ZeroMQ documentation here, (look for ZMQ_STREAM), but there's very few details on how to use it (and that doesn't help much either, the .NET API is quite a bit different from the C++ API).

The official NetMQ documentation also makes no mention of the streaming socket type.

Finally I had a look at the test suite for NetMQ on GitHub, and found a partial answer to my question in the RawSocket test method.

The following snippet works:

using (NetMQContext context = NetMQContext.Create())
{
    using (var routerSocket = context.CreateRouterSocket())
    {
        routerSocket.Options.RouterRawSocket = true;
        routerSocket.Bind("tcp://127.0.0.1:5599");

        byte[] id = routerSocket.Receive();
        byte[] message = routerSocket.Receive();

        Console.WriteLine(Encoding.ASCII.GetString(id));
        Console.WriteLine(Encoding.ASCII.GetString(message));
    }
}

When using standard TCP/IP test-tools, the byte[] message is printed out nicely, e.g. like this:

Hello World!

but the byte[] id is printed out like this:

 ???♥

In other words, I have no clue what's up with the id part. Why is routerSocket.Receive called twice? What is contained within the id? Is this something ZeroMQ/NetMQ specific, or is some TCP/IP-specific information being extracted here?

by Robin Mattheussen at March 05, 2015 09:30 PM

Terminate ansible playbook based on shell output

I have an ansible playbook that runs a shell command. If there is a specific message in the output, I need to terminate the playbook. Here's what I've tried:

- name : Do foo
  shell: /bin/my_application arg1 arg2 arg3
  args:
    creates: /tmp/foo_1
  with_items: data_items
  register: foo_result

- debug: var=foo_result

- debug: var=foo_result
  when:
    - foo_result.results is defined
    - '"[Foo Bar Baz] Err Code abc123" in foo_result.results.stdout'
  failed_when: true

The intention here is that if "[Foo Bar Baz] Err Code abc123" appears in the output of the program, I want to print the full output (which includes very useful information, but there is a lot of it, so I don't want to print it all of the time) and then abort the playbook.

Unfortunately, this doesn't quite work.

The first debug statement prints something sort of like this:

TASK: [do_stuff | debug var=foo_result] **************************** 
ok: [some-node] => {
    "foo_result": {
        "changed": true, 
        "msg": "All items completed", 
        "results": [
            {
                "changed": true, 
[Snip..]
                "stderr": "", 
                "stdout": "Very large stack trace containing [Foo Bar Baz] Err Code abc123"
            }
        ]
    }
}

So, I know that the error message I'm interested in is in stdout.

But the second one prints this:

TASK: [do_stuff | debug var=foo_result] **************************** 
fatal: [some-node] => error while evaluating conditional: "[Foo Bar Baz] Err Code abc123" in foo_result.results.stdout

I've also tried this using foo_result.stdout, escaping the [ and ] as \[ and \] in the condition, and testing that foo.results.stdout is defined, and I'm not sure why I'm getting this conditional-evaluation error for all these permutations; I would have expected a syntax error or something. Is there a better way to do what I'm trying to do here (fail when there's specific text in a command's stdout, and then print that stdout)?

(Ansible version is 1.6.10)

by FrustratedWithFormsDesigner at March 05, 2015 09:27 PM

/r/compsci

Question about operating systems and portability

Why isn't there a standard interface that all operating systems implement for graphics, input, etc.? I can somewhat understand why Windows doesn't since they want people to be locked into their platform... but what about open source ones?

On the same topic, if they did do this, is there anything that would stop OSes from using the same executable format, so that binaries could be run on any OS without recompilation?

Edit: should have specified I mean on the same hardware architecture

submitted by huike
[link] [6 comments]

March 05, 2015 09:20 PM

StackOverflow

ansible local file path for unarchive

I have what I think is a fairly common task: taking a local archive file, transferring it to a server, and extracting it there. I'm using the unarchive module for this but struggling with an elegant way to deal with the local filename always being different, because the filename includes the software version. For example, here's an excerpt from a playbook:

- name: Upload code
  unarchive: src={{payload_file}} dest={{install_dir}}

I can run this and use -e to set the variable:

ansible-playbook -e payload_file=/tmp/payload-4.2.1.zip -i production deploy.yml

My question is: is this the best way to handle this situation? All the docs have hardcoded example file names and paths, and I haven't seen anything in the docs that makes it straightforward to set a variable that will be different on each deployment.

by Peter Lyons at March 05, 2015 09:17 PM

CompsciOverflow

Reference work operational semantics [on hold]

For my thesis I built a small programming language. Now I'm at the point where I have to write an operational semantics for it. I'm having some trouble doing this because I don't know how to write things down.

The language I am writing this formal semantics for is an actor language that implements software transactional memory for shared state. This means I have a set of actors, a set of ordered sets that represent the inbox of each actor, a list of dependencies within each actor, and so forth.

A few examples of things I'm having issues with:

  • How do I denote something like a hashmap that maps identifiers to ordered lists?
    • In the image below you can see that an actor can send messages. I want M to be a list of sets. Each set represents the mailbox of an actor. In turn, a mailbox is just an ordered set of Message entities. I would like a good book and/or website that helps me understand what the proper notation for such an entity is.
  • How do I represent null? As in, nothing. (At the moment I use the bottom symbol.)
    • In the image below you can see that an actor has a field tau, which represents its current transaction. This value can be null, nothing, void, or whatever you want to call it. I would like some good book that helps me figure out what the proper name is and what symbol represents it.
  • How do I write a constructor for a semantic entity?
    • In the image below you can see that I have a Message entity. I would like a good book that tells me how I can properly write down the creation of a new Message (for example) in my evaluation rules.

I'm wondering if there is somewhere I could read up on these things. I don't have a very strong mathematical background either.

I've been reading papers by people who made operational semantics similar to mine, but not everything is clear (i.e., not clear enough to use their notations, but clear enough to understand how the semantics work).

I have read most chapters of the book "Types and Programming Languages" by Benjamin Pierce, but the introduction about sets, functions, etc. is rather cumbersome.

Edit

As per the comments, I'm not looking for help with any specific case. What I need is a good book/website that introduces the basic concepts of formal semantics. The result of reading such a book would be not having to struggle with notation. I know what I want to write down, I just don't know how. The examples above and below are just there to show what kind of problems I face.

An example of my semantic entities is shown in the image below.

[image: definition of the semantic entities]

So the important part of this question is not to help me solve specific notation problems. The important part is to find a good book and/or website that will help me with the basic concepts (the notation for an unordered set of ordered sets, for example). In case I get stuck in the future, I can consult that book and/or website to help me out.

Below I will list a few more things I do not properly understand. And again, I would like a good book and/or website that explains these basics to me. Below you can see the evaluation rule that adds delta_p to the set delta_ps. I have used the notation :: to append to the end of a list. However, I just saw this in another paper and have nothing to back up why I actually write it this way. I would like a good book and/or website that explains basic things like these. [image: evaluation rule appending delta_p to delta_ps]

The image below depicts an actor that sends a message (! actor message args). This actor is not currently in a transaction (STM), so the transaction value (tau) in this actor should be null. I have no idea how to write that down, so for the time being I just use the symbol bottom. I do not know where I can find more information on this.

[image: evaluation rule for a send outside a transaction, with tau set to bottom]

by Christophe De Troyer at March 05, 2015 09:12 PM

Is Karnaugh Map possible for Maxterms?

I read about minterms, i.e. sum-of-products simplification, using the Karnaugh map. Can this map be used for maxterms, i.e. products of sums, as well? If yes, then how?

If not, is there some other similar method for the simplification of maxterms? I know one can always convert maxterms to minterms and then use a K-map; I am looking for a direct way.

by SimpleGuy at March 05, 2015 09:11 PM

/r/netsec

CompsciOverflow

Unifying Skolems with multiple arguments

Suppose I have these two statements to encode in FOPC: "Everybody who has a house pays utilities" and "Jane and Mark have a house." I encode them:

"Everybody who has a house pays utilities"

forall x: has(x,skolem1(x)) ^ house(skolem1(x)) -> paysUtils(x), or

~has(x, skolem1(x)) v ~house(skolem1(x)) v paysUtils(x)

"Jane and Mark have a house"

has(Jane,skolem2(Jane,Mark)) ^ has(Mark,skolem2(Jane,Mark)) ^ house(skolem2(Jane,Mark)), or

has(Jane,skolem2(Jane,Mark))
has(Mark,skolem2(Jane,Mark))
house(skolem2(Jane,Mark))

I want to use resolution to prove Jane pays utilities. The problem is that skolem1 has 1 argument and skolem2 has 2 arguments, so they don't unify.

I'm not sure if this matters -- could I just refer to skolem1 and skolem2 and forget the arguments entirely? If it does matter, how do I resolve the problem so that skolem1 and skolem2 can unify, and resolution can work?

by Will Briggs at March 05, 2015 09:07 PM

Fefe

Has anyone seen the laptop of the managing director of SIG ...

Has anyone seen the laptop of the managing director of SIG Sauer? The police had seized it as part of an investigation into arms deliveries to Kazakhstan and Colombia. And now it is … gone. At first I thought: hmm, had they stored it here? But no, apparently it was sitting at the public prosecutor's office.

March 05, 2015 09:01 PM

/r/compsci

StackOverflow

Running simple spray route test result: Could not initialize class spray.http.Uri$

I am fairly new to spray. I am trying to get the testing working correctly, so I used the example shown in spray-testkit; however, I am getting these errors. Any assistance will be greatly appreciated:

The service should

Could not initialize class spray.http.Uri$
java.lang.NoClassDefFoundError: Could not initialize class spray.http.Uri$
    at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:36)
    at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:34)
    at com.tr.route.EventRouteTester$$anonfun$3$$anonfun$apply$2.apply(EventRouteTester.scala:38)
    at com.tr.route.EventRouteTester$$anonfun$3$$anonfun$apply$2.apply(EventRouteTester.scala:38)

return a 'PONG!' response for GET requests to /ping

Could not initialize class spray.http.Uri$
java.lang.NoClassDefFoundError: Could not initialize class spray.http.Uri$
    at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:36)
    at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:34)
    at com.tr.route.EventRouteTester$$anonfun$3$$anonfun$apply$6.apply(EventRouteTester.scala:44)
    at com.tr.route.EventRouteTester$$anonfun$3$$anonfun$apply$6.apply(EventRouteTester.scala:44)

leave GET requests to other paths unhandled

scala/collection/GenTraversableOnce$class
java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
    at spray.http.Uri$Query.<init>(Uri.scala:496)
    at spray.http.Uri$Query$Empty$.<init>(Uri.scala:575)
    at spray.http.Uri$Query$Empty$.<clinit>(Uri.scala)
    at spray.http.parser.UriParser.<init>(UriParser.scala:37)
    at spray.http.Uri$.apply(Uri.scala:231)
    at spray.http.Uri$.apply(Uri.scala:203)
    at spray.http.Uri$.<init>(Uri.scala:194)
    at spray.http.Uri$.<clinit>(Uri.scala)
    at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:36)
    at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:34)
    at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:33)
    at com.tr.route.EventRouteTester$$anonfun$3$$anonfun$apply$9.apply(EventRouteTester.scala:50)
    at com.tr.route.EventRouteTester$$anonfun$3$$anonfun$apply$9.apply(EventRouteTester.scala:50)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    ... 13 more

return a MethodNotAllowed error for PUT requests to the root path

This is the test class:

class RouteTester extends Specification with Specs2RouteTest with HttpService {
  def actorRefFactory = system // connect the DSL to the test ActorSystem

  val smallRoute =
    get {
      pathSingleSlash {
        complete {
          <html>
            <body>
              <h1>Say hello to <i>spray</i>!</h1>
            </body>
          </html>
        }
      } ~
        path("ping") {
          complete("PONG!")
        }
    }

  "The service" should {

    "return a greeting for GET requests to the root path" in {
      Get() ~> smallRoute ~> check {
        responseAs[String] must contain("Say hello")
      }
    }

    "return a 'PONG!' response for GET requests to /ping" in {
      Get("/ping") ~> smallRoute ~> check {
        responseAs[String] === "PONG!"
      }
    }

    "leave GET requests to other paths unhandled" in {
      Get("/kermit") ~> smallRoute ~> check {
        handled must beFalse
      }
    }

    "return a MethodNotAllowed error for PUT requests to the root path" in {
      Put() ~> sealRoute(smallRoute) ~> check {
        status === MethodNotAllowed
        responseAs[String] === "HTTP method not allowed, supported methods: GET"
      }
    }
  }
}

I am using Maven; these are the dependencies and versions:

  <scala.version>2.11.2</scala.version>
        <spray.version>1.3.1</spray.version>
        <akka.version>2.3.8</akka.version>

      <dependency>
            <groupId>io.spray</groupId>
            <artifactId>spray-can</artifactId>
            <version>${spray.version}</version>
        </dependency>
        <dependency>
            <groupId>io.spray</groupId>
            <artifactId>spray-routing</artifactId>
            <version>${spray.version}</version>
        </dependency>
        <dependency>
            <groupId>io.spray</groupId>
            <artifactId>spray-json_2.11</artifactId>
            <version>${spray.version}</version>
        </dependency>
         <dependency>
            <groupId>io.spray</groupId>
            <artifactId>spray-testkit_2.11</artifactId>
             <version>${spray.version}</version>
        </dependency>

by igx at March 05, 2015 08:52 PM

CompsciOverflow

Which type of randomized algorithm is best suited for web crawling?

I have decided to implement a web crawler for my CS major project. The project is focused on adaptive search. I want the pages to be as user-specific as possible, and time efficiency is not much of a constraint. After searching the web for a few days (given that I had no prior knowledge of web mining), I realised that major research on adaptive web mining reduces to three major areas:

  1. Genetic-based algorithms: inspired by evolutionary biology. One of the most researched and efficient algorithms is Infospider.

  2. Ant-based algorithms: based on a model of social insect collective behaviour.

  3. Machine-learning-based algorithms: aim at learning statistical characteristics of the linkage structure of the web, for example algorithms based on Hidden Markov models.

Now, since I have no prior knowledge of web mining, I am unable to decide on the most apt algorithm for my project (whether genetic-based or machine-learning-based), given that I have 7-8 months to complete it and I want it to be as close as possible to practical applications. I also find both fields (machine learning and genetic algorithms) very interesting. So please enlighten me.

by user3165873 at March 05, 2015 08:49 PM

StackOverflow

Deadletters encountered when communicating between spark clusters with akka actor

Since spark is built on top of Akka, I want to use Akka to send and receive messages between spark clusters.

According to this tutorial, https://github.com/jaceklaskowski/spark-activator/blob/master/src/main/scala/StreamingApp.scala, I can run StreamingApp locally and send messages to the actorStream itself.

Then I tried to attach the sender part to another Spark master of mine and send messages from that Spark master to the remote actor in StreamingApp. The code is as follows:

object SenderApp extends Serializable {

    def main(args: Array[String]) {

        val driverPort = 12345
        val driverHost = "xxxx"
        val conf = new SparkConf(false) 
            .setMaster("spark://localhost:8888") // Connecting to my spark master
            .setAppName("Spark Akka Streaming Sender")
            .set("spark.logConf", "true")
            .set("spark.akka.logLifecycleEvents", "true")
        val actorName = "helloer"

        val sc = new SparkContext(conf)

        val actorSystem = SparkEnv.get.actorSystem

        val url = s"akka.tcp://sparkDriver@$driverHost:$driverPort/user/Supervisor0/$actorName"

        val helloer = actorSystem.actorSelection(url)

        helloer ! "Hello"
        helloer ! "from"
        helloer ! "Spark Streaming"
        helloer ! "with"
        helloer ! "Scala"
        helloer ! "and"
        helloer ! "Akka"
    }
}

Then I got messages from StreamingApp saying it encountered DeadLetters. The detailed messages are:

INFO LocalActorRef: Message [akka.remote.transport.AssociationHandle$Disassociated] from Actor[akka://sparkDriver/deadLetters] to Actor[akka://sparkDriver/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkDriver%40111.22.33.444%3A56840-4#-2094758237] was not delivered. [5] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

by Yifei at March 05, 2015 08:47 PM

/r/compsci

StackOverflow

Can you create a centralized topic in ZeroMQ?

I would really like to try ZeroMQ and I am wondering whether my problem can be solved using it.

THE PROBLEM: I have multiple subscribers and multiple publishers. In a centralized broker architecture the publishers would publish a message to a topic (kind of like a multicast address) and the subscribers would get the messages from the topic and act on them. I can't use multicast because our network topology has multiple subnets and the IT guys will not forward my multicast packets to all subnets.

Since there is no centralized broker, how can this problem be solved in ZeroMQ? (sample code would be great in any language)

by Denis at March 05, 2015 08:38 PM

CompsciOverflow

Sufficient condition for simple graph isomorphism?

Say I have two simple graphs, $A$ and $B$

  • In $A$, I know:

    • one node has 3 nodes at distance of 1, 4 nodes at distance 2, etc.
    • one node has 4 nodes at distance of 1, 1 nodes at distance 2, etc.
    • etc.
  • In $B$, I know:

    • one node has 3 nodes at distance of 1, 4 nodes at distance 2, etc.
    • one node has 4 nodes at distance of 1, 1 nodes at distance 2, etc.
    • etc.

Can I derive that graph $A$ and $B$ are isomorphic to each other? Or is there a counter-example where two graphs look the same from this distance point of view, but are not isomorphic?

It is a necessary condition, so if these simple graphs are isomorphic, they will share these distances. I am wondering if this is a sufficient condition as well.

by 317070 at March 05, 2015 08:38 PM

PCA and Eigenvectors

I am trying to understand how PCA works, and think I got most of it except...

Calculating the eigenvalues/eigenvectors of the covariance matrix of the original dataset allows us to find those dimensions where the data is least correlated (lowest covariance). Sorting them based on eigenvalues is what allows the dimensions to be ranked.

Hence, multiplying the original dataset by the 'feature matrix' allows us to discard those dimensions which are least significant.

What I don't understand is the relation between the covariance matrix and eigenvectors. Why is it that eigenvectors of the covariance matrix automatically allow us to find the directions where data is least correlated?

Hope this makes sense :)
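For what it's worth, a sketch of the usual linear-algebra argument (assuming $\Sigma$ is the covariance matrix of the centered data): $\Sigma$ is symmetric, so it admits an orthogonal eigendecomposition $\Sigma = Q \Lambda Q^\top$. If you rotate the data into the eigenbasis, $y = Q^\top x$, then $\mathrm{Cov}(y) = Q^\top \Sigma Q = \Lambda$, which is diagonal. A diagonal covariance means the new coordinates are pairwise uncorrelated, and the variance along the $i$-th eigenvector is exactly $\lambda_i$, which is why sorting eigenvectors by eigenvalue ranks the directions by the variance they explain.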

by max0005 at March 05, 2015 08:35 PM

AWS

AWS Management Portal for vCenter Update – Auto Upgrades, Log Upload, Queued Imports

We have updated AWS Management Portal for vCenter. This plug-in runs within your existing vCenter environment and gives you the power to migrate VMware VMs to Amazon Elastic Compute Cloud (EC2) and to manage AWS resources from within vCenter.

Today’s update includes automatic upgrades, log uploading, and queued import tasks.

Automatic Upgrades
The management console now displays a prompt when a new version is available. You can choose to install upgrades on an on-demand basis or automatically. Automatic upgrades allow you to receive updates to the portal and our on-premises connector, and take advantage of subsequent feature enhancements that we make, without having to perform updates manually.

Log Uploading
You can now upload on-premises logs to AWS with a single click. These logs can assist with troubleshooting of the VM import procedure.

Queue Import Tasks
The plug-in will now queue additional VMs for import if the maximum number of concurrent VM migrations has already been reached. This effectively eliminates the limit on concurrent migration tasks.

Available Now
You can download AWS Management Portal for vCenter now and start using it today. See the release notes for more information on the bug fixes that are included in this launch.

Jeff;

by Jeff Barr at March 05, 2015 08:30 PM

StackOverflow

How to propagate errors in remote actors

If I am using the actor per request model, and I am expecting responses to messages that have been fired to the remote system, and one of those messages caused an exception, how do I recover? From my understanding, since the actor is remote, supervision strategies do not apply in this situation. Should I let the request time out? Should the remote system send a message back containing the exception?

by Jeff at March 05, 2015 08:27 PM

String category names with Wisp scala plotting library

I am trying to plot bars for String categories with Wisp. In other words, I have a string (key) and a count (value) in my repl and I want to have a bar chart of the count versus the key.

I don't know if something easy exists. I went as far as the following hack:

val plot = bar(topWords.map(_._2).toList)
val axisType: com.quantifind.charts.highcharts.AxisType.Type = "category"
val newPlot = plot.copy(xAxis = plot.xAxis.map {
  axisArray => axisArray.map { _.copy(axisType = Option(axisType),
                                      categories = Option(topWords.map(_._1))) }
})

but I don't know if it works because I can't find a way to visualize newPlot.

Thanks for any help.

PS : I don't have the reputation to create the wisp tag, but I would have...

by Wilmerton at March 05, 2015 08:26 PM

Lobsters

StackOverflow

Play Framework cache Remove elements matching regex

I was wondering if there is a way to delete elements from the Play cache using a regex.

I'm using play 2.2.x and I'm storing elements in the cache following this pattern:

collectionName.identifier

Is there a way to expire the cache using a regular expression to match the key, like:

collectionName.[a-zA-Z0-9]+

The reason I want to do that is because sometimes I will update elements in the db matching some fields, and I can't really know which elements were updated.

If there is a way in ReactiveMongo to get the updated object identifiers, that would help me as well.

Thanks for any help.

by Bruno Follon at March 05, 2015 08:22 PM

getting items by eid from datomic, too few inputs?

So I'm trying to use the entity id to retrieve items recently transacted to the datomic database.

However, when invoking (get-post-by-eid zzzzzzzzz) I get an error

IllegalArgumentExceptionInfo :db.error/too-few-inputs Query expected 2 inputs but received 1  datomic.error/arg (error.clj:57)


(defn get-post-by-eid [eid]
   (d/q '[:find ?title ?content ?tags ?eid
              :in $ ?eid
              :where
              [?eid post/title ?title]
              [?eid post/content ?content]
              [?eid post/tag ?tags]] (d/db conn)))

So I figure my query string must be malformed..

I've been looking at http://www.learndatalogtoday.org/chapter/3 but still not sure where I'm going astray.

Any help is appreciated (=

by sova at March 05, 2015 08:14 PM

Missing arguments for method toArray in trait List when converting from java ArrayList to scala Array

I have this simple code:

import java.util
import scala.collection.JavaConversions._

def f(x: util.List[Int]): Array[Int] = {
  x.toArray[Int]
}

It is failing on error: missing arguments for method toArray in trait List

However the source code for toArray is the following:

trait TraversableOnce[+A] extends Any with GenTraversableOnce[A] {
...
  def toArray[B >: A : ClassTag]: Array[B] = {
    if (isTraversableAgain) {
      val result = new Array[B](size)
      copyToArray(result, 0)
      result
    }
    else toBuffer.toArray
  }

So clearly there is no missing argument.

1) How is that possible? Is there a simple workaround? Or am I missing something?

2) The error message continues with follow this method with '_' if you want to treat it as a partially applied function. Don't you think that is a poor suggestion? I have declared the return type, so a partially applied function cannot work; the compiler should see that.
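A likely explanation: even with the implicit conversion in scope, the compiler resolves toArray to java.util.List's own toArray(T[]) overload (the member on the Java interface is preferred over the implicit view), and that method does expect an argument, hence the message. A minimal sketch of a workaround using an explicit conversion, assuming JavaConverters is available:

import scala.collection.JavaConverters._

def f(x: java.util.List[Int]): Array[Int] =
  x.asScala.toArray   // explicit conversion, so Scala's parameterless toArray is chosen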

by mirelon at March 05, 2015 08:12 PM

/r/netsec

/r/emacs

Skewer-mode not recognizing variables

So I've set up skewer-mode and it seems to be interacting with the browser (Iceweasel 31.5.0). I'm able to do simple math, and the alert method works with no problems. However, when I define a variable I get an error that the variable is not defined. For example, this will give me the error "five is not defined":

var five = 5; five * five; 

What's weird is that sometimes I won't get the error and things will work as expected. Any idea what's wrong? (I have the latest update of Skewer, 20141215.1525, and js2-mode 20150304.1821.)

submitted by drivezero
[link] [8 comments]

March 05, 2015 08:09 PM

StackOverflow

Chunked Response from an Iterator with Play Framework in Scala

I have a large result set from a database call that I need to stream back to the user as it can't all fit into memory.

I am able to stream the results from the database back by setting the options

val statement = session.conn.prepareStatement(query, 
                java.sql.ResultSet.TYPE_FORWARD_ONLY,
                java.sql.ResultSet.CONCUR_READ_ONLY)
statement.setFetchSize(Integer.MIN_VALUE)
....
....
val res = statement.executeQuery

And then by using an Iterator

val result = new Iterator[MyResultClass] {
    def hasNext = res.next
    def next = MyResultClass(someValue = res.getString("someColumn"), anotherValue = res.getInt("anotherValue"))
}

In Scala, Iterator extends TraversableOnce which should allow me to pass the Iterator to the Enumerator class that is used for the Chunked Response in the play framework according to the documentation at https://www.playframework.com/documentation/2.3.x/ScalaStream

When looking at the source code for Enumerator I discovered that it has an overloaded apply method for consuming a TraversableOnce object

I tried using the following code

import play.api.libs.iteratee.Enumerator
val dataContent = Enumerator(result)
Ok.chunked(dataContent)

But this isn't working as it throws the following exception

Cannot write an instance of Iterator[MyResultClass] to HTTP response. Try to define a Writeable[Iterator[MyResultClass]]

I can't find anything in the documentation that explains what Writeable is or does. I thought that once the Enumerator consumed the TraversableOnce object it would take it from there, but I guess not?
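For what it's worth: Enumerator(result) goes through the varargs apply, so it produces a single-element Enumerator[Iterator[MyResultClass]], and Ok.chunked then needs a Writeable for that element type (Writeable is the type class Play uses to turn values into response bytes). A minimal sketch, assuming Enumerator.enumerate (which consumes a TraversableOnce one element at a time) and mapping each row to a String, for which a Writeable already exists:

import play.api.libs.iteratee.Enumerator
import play.api.libs.concurrent.Execution.Implicits.defaultContext

val dataContent: Enumerator[String] =
  Enumerator.enumerate(result).map(row => s"${row.someValue},${row.anotherValue}\n")

Ok.chunked(dataContent)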

by Adam Ritter at March 05, 2015 08:07 PM

/r/netsec

QuantOverflow

How to calculate bond price using QuantLib?

I want to calculate the price of a bond with discount factors known as a function of time and a fixed coupon.

The example I found (bond.cpp) from QuantLib 1.5 looks quite complicated. I deleted the zero-bond and floating coupon bond, leaving only the fixed coupon bond calculation part.

Even for the fixed-coupon bond part, I don't understand how I should define the RATE HELPERS and CURVE BUILDING parts, based on the table I have:

Instrument Type | Maturity Date | Quote | Discount Factor

CD | 3/4/2015 | 0.25 | 0.9999

CD | 3/18/2015 | 0.26 | 0.9998

CD | 6/17/2015 | 0.27 | 0.9997

CD | 9/16/2015 | 0.4 | 0.998

CD | 12/16/2015 | 0.6 | 0.997

… … … …

SWAP | 3/4/2019 | 1.3 | 0.94

SWAP | 3/4/2020 | 1.6 | 0.92

… … … …

What should I do to build the discount curve in QuantLib? Is there a more straightforward way for QuantLib to read the discount factors?

If not, how do I build the "RelinkableHandle discountingTermStructure", as shown in the bond.cpp example?

by ziegele at March 05, 2015 08:02 PM

Lobsters

/r/scala

Lobsters

AWS

CloudTrail Integration with CloudWatch in Four More Regions

My colleague Sivakanth Mundru sent me a guest post with CloudTrail and CloudWatch integration news, along with information about a new CloudFormation template to help you to get going more quickly.

— Jeff;


At re:Invent 2014, we launched AWS CloudTrail integration with Amazon CloudWatch Logs in the US East (Northern Virginia), Europe (Ireland), and US West (Oregon) regions. With this integration, you can monitor for specific API calls that are made in your AWS account and receive email notifications when those API calls are made.

Today, we are making this feature available in Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Europe (Germany) regions with more regions to come in the future! We also created a AWS CloudFormation template to easily create CloudWatch alarms for API activity captured by CloudTrail.

CloudFormation Template
In this blog post, I will show you how you can use CloudFormation to configure CloudWatch alarms for monitoring critical network and security related API activity and receive an email notification when those API calls are made in your AWS account. This CloudFormation template contains predefined metric filters that monitor for critical network and security related API calls made to create, delete, and update Security Groups, Network ACLs, Internet Gateways, EC2 instances, and IAM policy changes.

For more details, refer to the CloudTrail documentation  that explains the alarms defined in the CloudFormation template. You can configure the CloudWatch alarms individually or you can tweak the metric filters to fit your own scenario.

Prerequisites
You need to configure CloudTrail log file delivery to CloudWatch Logs. The CloudTrail console provides secure default values for your configuration so that you can easily configure CloudTrail to send log files to CloudWatch Logs. Go to the CloudTrail Console or refer to the CloudTrail documentation. If you use AWS in multiple regions, you can use the same process and CloudFormation template in those regions to monitor specific API calls and receive email notifications. If you are not using the default CloudWatch Logs log group, note the name to use in the CloudFormation template.

Step 1 – Download the CloudFormation Template
Download the template from https://s3-us-west-2.amazonaws.com/awscloudtrail/cloudwatch-alarms-for-cloudtrail-api-activity/CloudWatch_Alarms_for_CloudTrail_API_Activity.json and save it locally. The template is ready to go, but you are welcome to open it using your favorite text editor or an online JSON editing tool. Here’s a peek inside:

Step 2 – Upload the CloudFormation Template
Go to the CloudFormation Console and create a stack for uploading the template. Give the stack a name that is meaningful to you and upload the CloudFormation template from the location you used in Step 1.

Step 3 – Specify Parameters
Click Next in the above screen to specify parameters. The parameters you need to specify are an email address where you would like to receive notifications and the CloudWatch Logs log group that you configured in step 1.  The CloudFormation template will create an SNS topic and subscribe your email to that topic. Make sure you use the same CloudWatch Logs log group you specified in step 1.

Click Next for other options such as creating tags or other advanced options. In this case, I am not doing either one. In the next screen, you can review parameters and create the alarm stack.

Step 4 – Review Parameters and Create

Verify that your email address and log group name are correct and click Create. Your CloudFormation stack will be created in few minutes.

Step  5 – Confirm Email Subscription from your Email
Once the CloudFormation stack creation process has completed, you will receive an email message that contains a request to validate your email address. Click Confirm Subscription in your email so that you can receive email notifications when alarms are triggered:

Step 6 – Receive Email Notifications
For example, the email I received below is a hint that an API call was made to create, update or delete a security group in my account:

If you would like us to add more alarms to the CloudFormation template, you can share that and other feedback in the CloudTrail forum.

You may also want to read the documentation for Using a AWS CloudFormation Template to Create CloudWatch Alarms and Creating CloudWatch Alarms for CloudTrail Events: Examples.

— Sivakanth Mundru, Senior Product Manager

by Jeff Barr at March 05, 2015 07:54 PM

UnixOverflow

How to highlight given string in given place?

How can we modify this:

<a href="http://foo.bar1">asfdlksafbar1qsasadf</a><br>
<a href="http://foo.bar2">svasfbar2saldkfj</a><br>
<a href="http://foo.bar3">safdfrhbar3saljfd</a><br>
<a href="http://foo.bar4">erasfasfbar4asfer</a><br>

to this?

<a href="http://foo.bar1">asfdlksafbar1qsasadf</a><br>
<a href="http://foo.bar2">svasfbar2saldkfj</a><br>
<a href="http://foo.bar3">safdfrh<font style=BACKGROUND-COLOR:red>bar3</font>saljfd</a><br>
<a href="http://foo.bar4">erasfasfbar4asfer</a><br>

So only the bar3 would be highlighted, only if it occurs between the:

">xxx</a>


I am using ksh/OpenBSD.

by user90825 at March 05, 2015 07:54 PM

CompsciOverflow

Equivalence of Recursive Enumerability (RE) definitions

Let $A$ be a subset of $\mathbb{N}^n$.

Definition 1 of RE

DEF1_RE: $A$ is RE iff there is a TM $M$ such that $M(x) = 1$ iff $x \in A$, and $M(x)$ is 0/undefined otherwise.

Definition 2 of RE

DEF2_RE: $A$ is RE iff there is a recursive/Turing-decidable relation $R \subseteq \mathbb{N}^{n+1}$ such that

$x \in A \iff$ there exists a $y$ with $R(x,y)$.

So I have proved that if $A$ belongs to DEF2_RE, then $A$ belongs to DEF1_RE,

but I can't prove the other direction. Can someone please help me prove the other direction?
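For the remaining direction, a sketch of the standard argument under the definitions above: given a TM $M$ with $M(x) = 1$ iff $x \in A$, define $R(x,y)$ to hold iff $M$, run on input $x$, halts with output 1 within $y$ steps. Simulating a bounded number of steps always terminates, so $R$ is decidable, and $x \in A$ iff there exists a $y$ with $R(x,y)$ (take $y$ at least as large as $M$'s running time on $x$). Hence DEF1_RE implies DEF2_RE.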

by Abdul Rahman at March 05, 2015 07:49 PM

/r/compsci

Project Idea

Freshmen computer science major here

I was wondering what the best language would be to automatically download a torrent from a given torrent site, and how it would be done. Is this method called web crawling, HTML scraping, or something else?

Thanks

submitted by omahgah
[link] [1 comment]

March 05, 2015 07:48 PM

StackOverflow

Higher-kinded types for multisets

I would like to write a Multiset[T, S[_]] class in Scala, which takes 2 type parameters: T is the type of the element, whereas S is the underlying representation of the set. In this multiset, an instance of S[(T, Int)] is constructed (in each pair, T is the element and Int is its number of occurrences). This is possible in C++:

template<typename T, template<typename> S>

Two questions:

  • How to declare the constraint on S that it must be a set? Does Multiset[T, S[_] <: Set[_]] work?

  • Is it possible to declare the constraint that an instance of S[(T, Int)] can be instantiated? This can be done in C# using the where S: new() constraint.
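For what it's worth, a minimal sketch of one way to express both constraints, assuming immutable sets: the "is a set" bound goes on the type constructor itself, and the C# new() constraint is approximated by asking the caller for an empty instance (or a factory) of S[(T, Int)]:

class Multiset[T, S[X] <: Set[X]](empty: S[(T, Int)]) {
  // the underlying representation: pairs of element and occurrence count
  private val pairs: S[(T, Int)] = empty
  def occurrences: S[(T, Int)] = pairs
}

// hypothetical usage, with immutable.HashSet as the representation:
// new Multiset[String, collection.immutable.HashSet](
//   collection.immutable.HashSet.empty[(String, Int)])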

by Tongfei Chen at March 05, 2015 07:44 PM

QuantOverflow

Stub rate and first fixing in IRS

I have 2 questions that are probably related. Suppose there is an IRS that pays a 2% fixed rate every 6 months and receives 3-month Libor (but paid every 6 months). The swap starts today (March 5th), so the first payment is on Sept 5th (in 6 months). But the Libor is a 3-month rate in this case, so the fixing should happen every 3 months, I guess. If that's the case, there are 2 important dates between now and the first payment: the first fixing date (in 3 months' time, i.e. June 5th) and the second fixing date, which is also the payment date, Sept 5th. But I have read that the convention is for the floating rate to be fixed in advance (2 days before the period) and then paid in arrears at the end of the period. How does that work in my example? The swap starts on March 5th and pays 3-month Libor in 6 months' time. So within 6 months I should have 2 different fixings of 3-month Libor. But when? One happens in 3 months' time, on June 5th, but what about the other? Is it done today?

Is there a simple explanation of how the stub rate works? How is it calculated? Is it related to my first doubt about the very first fixing of the 3-month Libor?

by mickG at March 05, 2015 07:36 PM

StackOverflow

How to instantiate an instance of type represented by type parameter in Scala

example:

import scala.actors._  
import Actor._  

class BalanceActor[T <: Actor] extends Actor {  
  val workers: Int = 10  

  private lazy val actors = new Array[T](workers)  

  override def start() = {  
    for (i <- 0 to (workers - 1)) {  
      // error below: classtype required but T found  
      actors(i) = new T  
      actors(i).start  
    }  
    super.start()  
  }  
  // error below:  method mailboxSize cannot be accessed in T
  def workerMailboxSizes: List[Int] = (actors map (_.mailboxSize)).toList  
.  
.  
.

Note the second error shows that it knows the actor items are "T"s, but not that "T" is a subclass of Actor, as constrained in the class's generic definition.

How can this code be corrected to work (using Scala 2.8)?
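For reference, a minimal sketch of one common workaround: since type parameters are erased at runtime, new T cannot compile, but a factory function for T can be passed in instead (the usage line at the bottom is hypothetical):

import scala.actors._

class BalanceActor[T <: Actor](newWorker: () => T) extends Actor {
  val workers: Int = 10

  private lazy val actors: IndexedSeq[T] = IndexedSeq.fill(workers)(newWorker())

  override def start() = {
    actors.foreach(_.start())
    super.start()
  }

  def act(): Unit = ()   // balancing logic elided

  // same mailboxSize call as in the original code
  def workerMailboxSizes: List[Int] = actors.map(_.mailboxSize).toList
}

// val balancer = new BalanceActor[MyWorker](() => new MyWorker)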

by scaling_out at March 05, 2015 07:34 PM

Avoiding deeply nested Option cascades in Scala

Say I have three database access functions foo, bar, and baz that can each return Option[A] where A is some model class, and the calls depend on each other.

I would like to call the functions sequentially and in each case, return an appropriate error message if the value is not found (None).

My current code looks like this:

Input is a URL: /x/:xID/y/:yID/z/:zID

foo(xID) match {
  case None => Left(s"$xID is not a valid id")
  case Some(x) =>
    bar(yID) match {
      case None => Left(s"$yID is not a valid id")
      case Some(y) =>
        baz(zID) match {
          case None => Left(s"$zID is not a valid id")
          case Some(z) => Right(process(x, y, z))
        }
    }
}

As can be seen, the code is badly nested.

If instead, I use a for comprehension, I cannot give specific error messages, because I do not know which step failed:

(for {
  x <- foo(xID)
  y <- bar(yID)
  z <- baz(zID)
} yield {
  Right(process(x, y, z))
}).getOrElse(Left("One of the IDs was invalid, but we do not know which one"))

If I use map and getOrElse, I end up with code almost as nested as the first example.

Is there some better way to structure this to avoid the nesting while allowing specific error messages?
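One way to keep the for comprehension and still know which step failed is to turn each Option into an Either carrying its own error message. A minimal sketch, assuming foo/bar/baz as above and using Either's right projection (Either is not right-biased before Scala 2.12):

for {
  x <- foo(xID).toRight(s"$xID is not a valid id").right
  y <- bar(yID).toRight(s"$yID is not a valid id").right
  z <- baz(zID).toRight(s"$zID is not a valid id").right
} yield process(x, y, z)

The whole expression evaluates to a Left with the message of the first missing lookup, or to Right(process(x, y, z)) when all three succeed.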

by Ralph at March 05, 2015 07:23 PM

CompsciOverflow

Cost of shifting a number

I was wondering what the time complexity of shifting a binary or a decimal number would be. For example: 0011, when I shift it right, becomes 0001.

I was thinking that the time complexity is $\Theta(n)$, because we have to shift every digit.

I need it to calculate $m \cdot b^{\frac{n}{2}}$, where $m$ is some number and $b$ might be 2 (in the binary case) or 10 (decimal). Therefore, instead of multiplying, I just shift, and the complexity is $\Theta(n)$.

Am I correct or is my thinking wrong?

by Johnny Bravo at March 05, 2015 07:21 PM

/r/compsci

StackOverflow

Ansible condition always evaluates to false

I'm trying to examine the output of a shell command for a particular string which indicates an error, and that the playbook should be terminated.

I'm trying to debug it something like this:

- debug: var=foo_result

- debug: msg={{ 'Some error text' in foo_result }}

In this example, foo_result was registered to contain the output of the command, and it does:

TASK: [do_stuff | debug var=foo_result] **************************** 
ok: [some-node] => {
    "foo_result": {
        "changed": true, 
        "msg": "All items completed", 
        "results": [
            {
                "changed": true, 
[Snip..]
                "stderr": "", 
                "stdout": "...Some error text..."
            }
        ]
    }
}

The second debug statement, which checks for 'Some error text' in foo_result, always evaluates to "false".

I'm still finding the Ansible syntax a little confusing, and I'm not sure what I did wrong here.

Ansible version: 1.6.10

by FrustratedWithFormsDesigner at March 05, 2015 07:17 PM

/r/netsec

CompsciOverflow

Amortized analysis : Dynamic arrays

While reading lecture notes on amortized analysis, I came upon the example of dynamic arrays, where you expand the array when it is completely full and shrink it when it is $\frac{1}{4}$ full. The notes say that this gives an amortized cost of $O(1)$ per add/remove operation, so $m$ operations have a total cost of $O(m)$.

My question is, if we shrink the array when it is half-full, how does this change the amortized cost? Will it still remain $O(1)$? Will the amortized cost remain the same if we shrink/expand using other limits, say, expanding when the array is $\frac{3}{4}$ full, etc?
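For intuition, here is a sketch of the standard thrashing example for the shrink-at-half variant: start with an array of capacity $n$ that is completely full and alternate add, remove, add, remove, .... The first add doubles the capacity to $2n$ (copying $n$ elements); the next remove brings the load factor back to $\frac{1}{2}$ and triggers a shrink to capacity $n$ (copying again); the next add doubles again, and so on. Every operation then costs $\Theta(n)$, so the amortized cost is $\Theta(n)$ rather than $O(1)$. The $\frac{1}{4}$ rule avoids this because immediately after any resize the array is about half full, so $\Theta(n)$ cheap operations must occur before the next resize. Expanding earlier (say at $\frac{3}{4}$ full) keeps the $O(1)$ bound, as long as each resize changes the capacity by a constant factor and the grow and shrink thresholds stay a constant factor apart.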

by ReiJin ThunderKeg at March 05, 2015 07:13 PM

/r/emacs

StackOverflow

Evaluation of part of Clojure cond

Trying to do exercise 1.16 (iterative version of fast-exp) in "Structure and Interpretation of Computer Programs" with Clojure I came up with this:

(defn fast-it-exp [base exp res]
  (cond (= exp 0) res
  (odd? exp) fast-it-exp base (- exp 1) (* base res)
  :else fast-it-exp base (/ exp 2) (* base base res)))

Trying it out:

user=> (fast-it-exp 0 0 10)
10   ;yep
user=> (fast-it-exp 2 2 2)
1     ;no...
user=> (fast-it-exp 1 1 1)
#<user$fast_it_exp__59 user$fast_it_exp__59@138c63>    ;huh?!

Seems the "odd" part of the cond expression returns a function instead of evaluating. Why? I've tried putting parenthesis around the expressions after the predicates but that seems to be incorrect syntax, this is the best I've been able to come up with. I'm using rev 1146 of Clojure.

by Lars Westergren at March 05, 2015 07:10 PM

default value for dictionary in jinja2 (ansible)

jinja2 has the filter '|default()' for working with undefined variables, but it does not work with dictionary values.

If D may or may not have the key foo (D[foo]), then:

{{ D[foo]|default ('no foo') }}

will print 'no foo' if D is undefined, but will cause an error ('dict object' has no attribute 'foo') if D is defined but D[foo] is undefined.

Is there any way to set a default for a dictionary item?

by George Shuklin at March 05, 2015 07:10 PM

Fefe

Ruling of the day: A paid online directory that is not ...

Ruling of the day:
A paid online directory that is not placed among the first five search results on Google & Co. is worthless and fulfils the legal criteria for immorality (LG Wuppertal, Beschl. v. 05.06.2014 - Az.: 9 S 40/14).
BWAHAHAHAHA

March 05, 2015 07:01 PM

I had already briefly linked Tilo Jung at the police ...

I had already briefly linked Tilo Jung at the police congress; here now is the rest of the story. Have fun, and I hope you stocked up your popcorn reserves after the recent warning! :-)

March 05, 2015 07:01 PM

The ECJ just killed the hard drive levy. The judges ...

The ECJ just killed the hard drive levy.
In it, the judges find that no further fees may be charged for copies of a lawfully purchased piece of music. Titles protected by DRM are in theory an exception, but since DRM locks can only be circumvented by technically very skilled users, that does not provide a basis for a general rule.

March 05, 2015 07:01 PM

A really, REALLY awful piece in the FAZ on Ukraine and ...

A really, REALLY awful piece in the FAZ on Ukraine and the threat to Europe from the Russians.
There is no way around it: Europe must dig deeper into its pockets for its security and self-assertion and generate more materially grounded will to self-assertion than before. It should help if the main ally, America, acknowledges the European efforts: then the peace-preserving function of nuclear deterrence becomes more likely. As soon as the United States credibly couples its deterrence potential to European conventional defense, the danger of a Russian nuclear threat drops considerably.
And that is already the defused version of the article. Originally the teaser said:
To nevertheless halt Russian expansion, a return to mutual nuclear deterrence can help.
Now it says:
To nevertheless halt Russian expansion, the West must become more ready to defend itself - and break new ground in its doctrine.
Nukes, fuck yeah!

Oh, and while we are on the subject of unspeakable FAZ articles: Jasper von Altenbockum is jumping on the Snowden-truther bandwagon. I hope he is sitting comfortably between the ex-Spackeria crackpots.

Many of the revelations by the whistleblower Edward Snowden had to be corrected after the fact. The talk of a "total surveillance state" has so far remained hollow.
Oh, so that's how it is! Sure, it was all lies and deceit. Clapper said so himself in the US Senate! And the BND also assured us that they would never abuse their powers! Except in a few isolated cases.
The term "surveillance state" is therefore, for lack of precision, not helpful (which is why it does not appear in the "Brockhaus" either).
Ah, so that's the criterion! I see…

March 05, 2015 07:01 PM

DataTau

UnixOverflow

OpenBSD's pf: disable network access for a given user, except for ssh.

If we are using the default firewall on OpenBSD, how can we modify it to disable all network access for a normal user except for one thing: we want to be able to ssh to that user from arbitrary hosts!

So, for example, if the user wants to "wget google.com", the firewall shouldn't permit it. If we want to copy something via scp to the user from a random machine, the firewall would need to allow it. If the user wants to ssh to some other host, it shouldn't have access.

by user90825 at March 05, 2015 06:48 PM

CompsciOverflow

Which addresses belong to which network, given a fixed subnet mask?

The subnet mask for a particular network is 255.255.31.0.
Which of the following IP addresses belong to this network?

  1. 172.57.88.62 & 172.56.87.233
  2. 10.35.28.2 & 10.35.29.4
  3. 191.203.31.87 & 191.234.31.88
  4. 128.8.129.43 & 128.8.161.55

I'm a little confused about the 3rd octet of the subnet mask (i.e. .31), but I get that 5 bits were borrowed from the 3rd octet for subnetting.

The principle: convert the dotted-quad IP addresses and the mask to 32-bit unsigned integers and AND each address with the mask. If the results are the same, they're in the same subnet.

By using the above principle, the answer is D.
But can I do it directly?
That is:

  1. By looking at the subnet mask it is clear that it belongs to class B.
  2. In options A and C the two addresses are already in different networks (their 2nd octets differ, i.e. 172.57 vs 172.56 and 191.203 vs 191.234).
  3. Option B is a class A network (so it can't be the answer).
  4. Option D is a class B network, and its first two octets are the same, i.e. 128.8 and 128.8, so there is no problem applying the above principle as a final check.

Can I think like this before applying the principle, just to save time?
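A quick sketch of the AND check (hedged: just one way to write it, and note that 255.255.31.0 is not a contiguous mask, so the class-based shortcut is informal even though it happens to work here):

def toBits(ip: String): Long =
  ip.split('.').map(_.toLong).foldLeft(0L)((acc, octet) => (acc << 8) | octet)

val mask = toBits("255.255.31.0")

// Option 4: both sides reduce to 128.8.1.0, so the two addresses share a subnet.
(toBits("128.8.129.43") & mask) == (toBits("128.8.161.55") & mask)   // true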

by user1745866 at March 05, 2015 06:44 PM

StackOverflow

How to fix implicit conversion to Applicative?

This is a follow-up to my previous question

I would like to generalize the implicit conversion toApplicative, which adds method <*> to any M[A=>B], where M is Applicative (i.e. there is a typeclass instance Applicative[M])

implicit def toApplicative[M[_], A, B](mf: M[A=>B])(implicit apm: Applicative[M]) = 
   new { def<*>(ma: M[A]) = apm.ap(ma)(mf) }

Unfortunately I have got an error:

   <console>:25: error: Parameter type in structural refinement 
            may not refer to an abstract type defined outside that refinement

How would you suggest implementing such an implicit conversion?
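A minimal sketch of one workaround, assuming the same Applicative[M] typeclass (with ap shaped as above): use a named implicit class rather than an anonymous structural type, so M, A and B are ordinary type parameters of a real class instead of a refinement:

implicit class ApplicativeOps[M[_], A, B](mf: M[A => B])(implicit apm: Applicative[M]) {
  def <*>(ma: M[A]): M[B] = apm.ap(ma)(mf)
}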

by Michael at March 05, 2015 06:27 PM

Planet Clojure

filter vs take-while

Quick mental note on when to use each.

Use filter when:

1. you have a  fixed length collection

2. want to eternally read messages from an infinite stream.

3. if you have an infinite stream consider some stopping condition using take n or take-while after filter


Use take-while when:

1. you have an infinite stream and only want to process some messages then stop, e.g reading a file between a range of ids or timestamp (the input must be sorted already).

2. will work on a fixed length collection also but will stop on the first false return of the predicate (again works on sorted collections).

by Gerrit Jansen van Vuuren at March 05, 2015 06:13 PM

StackOverflow

Can I do a lazy take with a long parameter?

Stream.take takes an Int as a parameter. I want to define a take that takes a Long instead. Is this possible without using takeWhile to manage a counter?
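One hedged sketch, assuming scala.collection.immutable.Stream: a hand-rolled take with a Long count. The #:: cons is lazy in its tail, so only the elements actually consumed get forced.

def takeL[A](s: Stream[A], n: Long): Stream[A] =
  if (n <= 0L || s.isEmpty) Stream.empty
  else s.head #:: takeL(s.tail, n - 1L)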

by lsankar4033 at March 05, 2015 06:04 PM


Fefe

Good news! The broadband rollout in Bavaria is ...

Good news! The broadband rollout in Bavaria is a done deal! How do I know? Pricewaterhousecoopers has won the tender!

March 05, 2015 06:01 PM

Man, this whole TPM whitewash is getting on my nerves. ...

Man, this whole TPM whitewash is getting on my nerves. From people who should know better. Folks, Secure Boot is an NSA invention. Among its other duties, the NSA is also responsible for the guidelines for procuring IT systems. One day it wrote into the requirements list for the military and federal agencies that too many laptops get lost, so now we want disk encryption, and so that it cannot be defeated by a boot-sector attack, we now also demand Secure Boot. That is the reason Windows 8 requires UEFI and Secure Boot.

I mention this because one or two of you will (hopefully!) have asked yourselves why we are switching from the reasonably well-understood antique BIOS to the gaping security hole UEFI, and why Lenovo wants to force crypto technologies on us that prevent us from replacing any NSA backdoors in the BIOS (remember, while we're at it, the "Bad BIOS" stories) by flashing our own BIOS.

Yes, that is because of the NSA. Entirely and completely because of the NSA.

What particularly annoys me are the people now acting as if TPM were only about Measured Boot, and this Boot Guard crap were merely an aberration. No, it is not. That was the original plan. Back when the whole thing was still called Palladium and was a Microsoft project to satisfy the NSA requirements. Back then the line was: right now we can only solder the TPM next to the CPU as a passive component, but in the next generation (we are talking Pentium 4 times here!) it will be in the CPU, and then you will not be able to turn it off anymore. Back then we set off massive protests, whereupon the TPM remained an (optional!) passive component next to the CPU. The narrative of the NSA henchmen then changed to "but look, you can upload your own keys and use them to secure your Linux!1!!" And some elements of our community actually believed this bullshit!

Dear people, we are in the middle of a war over our information-technology platforms. Whoever owns the platform also controls what happens on it. That should have been clear to the last idiot since Facebook at the latest, but obviously it is not.

Intel now apparently thinks: hey, the people out there are all dumber than a piece of bread. Let's just try it again and put the TPM into the CPU. So nobody notices, we'll call it "Boot Guard" and leave the TPM standing next to it. For plausible deniability we make it "optional", and then Lenovo catches the flak and not us. And if we're lucky, special experts like Patrick Georgi come along and not only fail to recognize that Boot Guard is a TPM as originally planned, but also argue that it is something different from "the TPM", and then even go so far as to announce that TPM will save us from Boot Guard. Oh really, is that so? OK, show me. I have a system here onto which I flashed coreboot, and it no longer boots. Go ahead and show me how TPM saves me now, Patrick!

Why am I so worked up about this one lost soul? Because he actually fights on our side! He is a coreboot developer! I don't want to accuse him of being an NSA mole. But judging by the effect of his statements, he could hardly be fighting more on the NSA's side.

Dear nerds, this cannot go on. Bad enough when politics does no impact assessment. Bad enough when physicists build atomic bombs without knowing whether their detonation will end up burning our atmosphere. At least we have to think long-term about things like this! And not fall for intelligence agencies and their talking points!

It is not enough to grumble about politicians when they sacrifice our freedom on the altar of security. Secure Boot is exactly the same thing in a different color.

March 05, 2015 06:01 PM


StackOverflow

Order of parameters in <*> and parenthesis in Scala

The <*> seems to be defined as a method of M[A] which accepts M[A=>B]. That's why I need parentheses:

val f: A => B => C = ...
val as: List[A] = ...
val bs: List[B] = ...
val cs: List[C] = ...
val r = cs <*> (bs <*> (as <*> List(f)))  

On the other hand if I define <*> as a method of M[A=>B], which accepts M[A], I can write

val r = List(f) <*> ma <*> mb <*> mc

Does that make sense?
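For what it's worth, a hedged sketch of why the M[A=>B] placement chains nicely (assuming an Applicative[List] instance and a wrapper like the one sketched under "How to fix implicit conversion to Applicative?" above): <*> then associates to the left, so a curried function threads through without nested parentheses.

trait A; trait B; trait C
val f: A => B => C = ???
val ma: List[A] = ???
val mb: List[B] = ???

// ((List(f) <*> ma) <*> mb): each step peels one argument off the curried f.
val r: List[C] = List(f) <*> ma <*> mb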

by Michael at March 05, 2015 06:00 PM


TheoryOverflow

Problems with big open complexity gaps

This question is about problems for which there is a big open complexity gap between known lower bound and upper bound, but not because of open problems on complexity classes themselves.

To be more precise, let's say a problem has gap classes $A,B$ (with $A\subseteq B$, not uniquely defined) if $A$ is a maximal class for which we can prove it is $A$-hard, and $B$ is a minimal known upper bound, i.e. we have an algorithm in $B$ solving the problem. This means if we end up finding out that the problem is $C$-complete with $A\subseteq C\subseteq B$, it will not impact complexity theory in general, as opposed to finding a $P$ algorithm for a $NP$-complete problem.

I am not interested in problems with $A\subseteq P$ and $B=NP$, because it is already the object of this question.

I am looking for examples of problems whose gap classes are as far apart as possible. To limit the scope and make the question precise, I am especially interested in problems with $A\subseteq P$ and $B\supseteq EXPTIME$, meaning both membership in $P$ and $EXPTIME$-completeness are consistent with current knowledge, without making known classes collapse (say, classes from this list).

by Denis at March 05, 2015 05:59 PM

CompsciOverflow

Formal Verification of Functional Programs

So I've been interested in learning more about formal verification, and I've seen a lot of interesting things like ACSL and JML which are based on the concept of Hoare triples.

My question is, that these seem to be a way to describe the behaviour of imperative programs via specification languages that are mostly functional (plus universal and existential quantifiers). But I don't even know the keywords to search to start learning how one would perform formal verification of code in a pure functional language like Haskell.

Could anyone provide a link to material or the name of the equivalent/analogue of Hoare triples for pure functional computation? Or do you need a fundamentally different approach in the purely-functional realm?

by user1243488 at March 05, 2015 05:55 PM

StackOverflow

Explaining functional programming to object-oriented programmers and less technical people

What are some good examples that I can use to explain functional programming?

The audience would be people with little programming experience, or people who only have object-oriented experience.
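One small contrast that tends to land with OO audiences (a sketch, not the only way to frame it): the same task written as a mutable loop and as a pipeline of pure functions.

val prices = List(9.99, 14.50, 3.25, 20.00)

// Imperative: say *how*, step by step, with mutation.
var total = 0.0
for (p <- prices) if (p > 5.0) total += p * 1.08

// Functional: say *what*, by composing transformations.
val totalFp = prices.filter(_ > 5.0).map(_ * 1.08).sum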

by rsideb at March 05, 2015 05:51 PM

adding a task to an sbt 0.13.x build.sbt

I have added this to build.sbt

libraryDependencies += "com.typesafe.slick" %% "slick-codegen" % "2.1.0"

lazy val slickGenerate = TaskKey[Seq[File]]("slick code generation")

slickGenerate <<= slickGenerateTask 

lazy val slickGenerateTask = {
    (sourceManaged in Compile, dependencyClasspath in Compile, runner in Compile, streams) map { (dir, cp, r, s) =>
      val dbName = "dbname"
      val userName = "user"
      val password = "password"
      val url = s"jdbc:mysql://server:port/$dbName" 
      val jdbcDriver = "com.mysql.jdbc.Driver"
      val slickDriver = "scala.slick.driver.MySQLDriver"
      val targetPackageName = "models"
      val outputDir = (dir / dbName).getPath // place generated files in sbt's managed sources folder
      val fname = outputDir + s"/$targetPackageName/Tables.scala"
      println(s"\nauto-generating slick source for database schema at $url...")
      println(s"output source file file: file://$fname\n")
      r.run("scala.slick.codegen.SourceCodeGenerator", cp.files, Array(slickDriver, jdbcDriver, url, outputDir, targetPackageName, userName, password), s.log)
      Seq(file(fname))
    }
}

The task's code itself isn't very exciting. It just needs to create an auto-generated Scala source file. The problem is that sbt starts fine, yet this new task is evidently not recognized by sbt and cannot be run at the sbt prompt. I have also had very little luck with the := syntax for task definition; the existing documentation has been just confounding.

How can this new task be made available in the sbt prompt?
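A hedged sketch of the same task in the := style for sbt 0.13. One likely culprit for the task not being recognized is the key's label: a name containing spaces ("slick code generation") cannot be typed at the sbt prompt, whereas with the taskKey macro the val name itself ("slickGenerate") becomes the name you type:

lazy val slickGenerate = taskKey[Seq[File]]("Generates Slick Tables.scala from the database schema")

slickGenerate := {
  val dir = (sourceManaged in Compile).value
  val cp  = (dependencyClasspath in Compile).value
  val r   = (runner in Compile).value
  val s   = streams.value
  // ... same body as in the <<= version above, ending in Seq(file(fname))
  Seq.empty[File]   // placeholder so this sketch stands alone
}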

by matt at March 05, 2015 05:48 PM

Compiler error - covariant return type vs abstract override

I have a program that in part deals with a type hierarchy. All I'm trying to achieve here is to have the 'oldType' def make use of the covariant-return-type feature I'm accustomed to from Java.

trait NumericMember extends  NumericTypedef{ }

trait Type
trait NumericType extends Type

trait Typedef     { def oldType : Type }

class TypedefImpl  extends  Typedef { 
  //can't use a val since it will get overriden
  def oldType : Type = ???
}

trait NumericTypedef extends Typedef with NumericType { 
  abstract override def oldType : NumericType =    super.oldType.asInstanceOf[NumericType] 
}

class NumericTypedefImpl extends TypedefImpl with NumericTypedef{ }

class NumericMemberImpl  extends  NumericMember {
  private val autoType = new NumericTypedefImpl
  override def oldType: NumericType = autoType.oldType
} 

The compiler blindly tells me that oldType in NumericMemberImpl needs to be an abstract override and then changes its mind when I obey it, figuring out NumericMemberImpl is actually a class.

I might be on the wrong avenue here, since I realize abstract override is used for stacking traits, when all I want is to have general and specialized return types for oldType.

Help, anyone?
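A hedged sketch of one alternative: drop abstract override entirely and just narrow the return type in the overriding members, which is all the covariant-return feature requires:

trait Type
trait NumericType extends Type

trait Typedef        { def oldType: Type }
trait NumericTypedef extends Typedef { override def oldType: NumericType }

class NumericTypedefImpl extends NumericTypedef {
  override def oldType: NumericType = new NumericType {}
}

trait NumericMember extends NumericTypedef

class NumericMemberImpl extends NumericMember {
  private val autoType = new NumericTypedefImpl
  override def oldType: NumericType = autoType.oldType
}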

by Cristian Martin at March 05, 2015 05:44 PM

How to set vars into ansible inventory?

In my playbook I'm using the variable {{excluded_service}}. I want to run the Ansible playbook from Python and provide this variable, and I can't use an external inventory script to provide it. I am using this to create the inventory:

hosts = ["127.0.0.1"]
inventory=ansible.inventory.Inventory(hosts)

but I don't understand where I can add the value of the variable.

My code, that works with external inventory script:

import sys
import os
import stat
import json

import ansible.playbook
import ansible.constants as C
import ansible.utils.template
from ansible import errors
from ansible import callbacks
from ansible import utils
from ansible.color import ANSIBLE_COLOR, stringc
from ansible.callbacks import display

playbook="/opt/RDE/3p/ansible/loop/testloop.yml"
inventory="/opt/RDE/3p/ansible/loop/lxc.py"
connection="local"
timeout=10
subset=None

options={'subset': None, 'ask_pass': False, 'sudo': False, 'private_key_file': None, 'syntax': None, 'skip_tags': None, 'diff': False, 'check': False, 'remote_user': 'root', 'listtasks': None, 'inventory': '/opt/RDE/3p/ansible/loop/lxc.py', 'forks': 5, 'listhosts': None, 'start_at': None, 'tags': 'all', 'step': None, 'sudo_user': None, 'ask_sudo_pass': False, 'extra_vars': [], 'connection': 'smart', 'timeout': 10, 'module_path': None}
sshpass = None
sudopass = None
extra_vars = {}



def colorize(lead, num, color):
    """ Print 'lead' = 'num' in 'color' """
    if num != 0 and ANSIBLE_COLOR and color is not None:
        return "%s%s%-15s" % (stringc(lead, color), stringc("=", color), stringc(str(num), color))
    else:
        return "%s=%-4s" % (lead, str(num))

def hostcolor(host, stats, color=True):
    if ANSIBLE_COLOR and color:
        if stats['failures'] != 0 or stats['unreachable'] != 0:
            return "%-37s" % stringc(host, 'red')
        elif stats['changed'] != 0:
            return "%-37s" % stringc(host, 'yellow')
        else:
            return "%-37s" % stringc(host, 'green')
    return "%-26s" % host   


inventory = ansible.inventory.Inventory(options['inventory'])


hosts = ["127.0.0.1"]



#inventory=ansible.inventory.Inventory(hosts)

inventory.subset(options['subset'])
if len(inventory.list_hosts()) == 0:
    raise errors.AnsibleError("provided hosts list is empty")

inventory.set_playbook_basedir(os.path.dirname(playbook))
stats = callbacks.AggregateStats()
playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
if options['step']:
    playbook_cb.step = options['step']
if options['start_at']:
    playbook_cb.start_at = options['start_at']



runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)
pb = ansible.playbook.PlayBook(
            playbook=playbook,
            module_path=None,
            inventory=inventory,
            forks=options['forks'],
            remote_user=options['remote_user'],
            remote_pass=sshpass,
            callbacks=playbook_cb,
            runner_callbacks=runner_cb,
            stats=stats,
            timeout=options['timeout'],
            transport=options['connection'],
            sudo=options['sudo'],
            sudo_user=options['sudo_user'],
            extra_vars=extra_vars,

            private_key_file=options['private_key_file'],


            check=options['check'],
            diff=options['diff']        
        )

playnum = 0

failed_hosts = []
unreachable_hosts = []
try:
    print pb.run()

    hosts = sorted(pb.stats.processed.keys())
    print hosts
    display(callbacks.banner("PLAY RECAP"))
    playbook_cb.on_stats(pb.stats)

    for h in hosts:
        t = pb.stats.summarize(h)
        if t['failures'] > 0:
            failed_hosts.append(h)
        if t['unreachable'] > 0:
            unreachable_hosts.append(h)

    retries = failed_hosts + unreachable_hosts

    if len(retries) > 0:
        filename = pb.generate_retry_inventory(retries)
        if filename:
            display("           to retry, use: --limit @%s\n" % filename)

    for h in hosts:
        t = pb.stats.summarize(h)

        display("%s : %s %s %s %s" % (
            hostcolor(h, t),
            colorize('ok', t['ok'], 'green'),
            colorize('changed', t['changed'], 'yellow'),
            colorize('unreachable', t['unreachable'], 'red'),
            colorize('failed', t['failures'], 'red')),
            screen_only=True
        )

        display("%s : %s %s %s %s" % (
            hostcolor(h, t, False),
            colorize('ok', t['ok'], None),
            colorize('changed', t['changed'], None),
            colorize('unreachable', t['unreachable'], None),
            colorize('failed', t['failures'], None)),
            log_only=True
        )

except Exception as  e:
        print ("!!!!!!!ERROR: %s" % e)

by Valeriy Solovyov at March 05, 2015 05:34 PM

What is the difference between a.ne(null) and a != null in Scala?

I have been always using

a != null

to check that a is not a null reference. But now I've met another way used:

a.ne(null)

Which way is better, and how are they different?
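For a plain null check they give the same answer. The difference is where they are defined: eq/ne are final reference comparisons on AnyRef and can never be intercepted, while ==/!= add a null guard and then delegate to equals, which a class may override. A tiny illustration:

val s: String = "hello"

s != null     // true: != is the negation of ==, which null-checks and then delegates to equals
s.ne(null)    // true: ne is plain reference inequality (the negation of eq), defined final on AnyRef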

by Ivan at March 05, 2015 05:31 PM


StackOverflow

Gather result from scalaz stream Process

Recently I started using scalaz streams with Scala/Akka. I'm fetching records from a NoSQL database. I want to map the records to message items (via a translateItem function) and create Packages of them (1 Package = 100 Items). E.g. suppose there are 500 records.

 val readChunks = records.chunk(100)
 val createPackages = readChunks.map(chunk => translateItem(chunk))
                                .map(i => toPackage(i))
 val result = createPackages.runLog.run.toList

I've tried to fetch the result (the List[Package]) via runLog. The output of the log looks good. But I don't need the logging overhead.

But how do I write the result to a list, for example?

Or I might return a Future[List[Package]] and pass it to an Akka actor to fetch (Await.result) it. I might convert the scalaz Task to a Future.

Thanks in advance.
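A hedged sketch, assuming scalaz-stream 0.x / scalaz 7.x: runLog has nothing to do with logging (it collects the emitted values into a sequence), and the resulting Task can be bridged to a scala Future for the Akka side:

import scala.concurrent.{Future, Promise}
import scalaz.{-\/, \/-}
import scalaz.concurrent.Task

def taskToFuture[A](t: Task[A]): Future[A] = {
  val p = Promise[A]()
  t.runAsync {
    case \/-(a)   => p.success(a)
    case -\/(err) => p.failure(err)
  }
  p.future
}

// e.g. val packages: Future[List[Package]] = taskToFuture(createPackages.runLog.map(_.toList))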

by Marco Mayer at March 05, 2015 05:21 PM

Scala: function with inner recursive function

I am writing a function which removes the kth element from a list (this comes from the Scala 99 problems) and was puzzled by a specific behaviour. This recursive version works fine:

def removeAt3(startPos:Int,  inputList :List[Symbol]) =  {
  // will use inner recursive function 
    def removeAtRecursive(position:Int, lst:List[Symbol]):List[Symbol] = (position, lst) match {
    case (_, Nil) => println("end of list");List[Symbol]()
    case (any, h::tl) => if (any == startPos) removeAtRecursive(any + 1, tl) else  h::removeAtRecursive(any+1, tl)
  }
  removeAtRecursive(0, inputList)
}

But this version does not.

def removeAt4(startPos:Int,  inputList :List[Symbol]) =  {
  // will use inner recursive function 
  def removeAtRecursive(position:Int, lst:List[Symbol]):List[Symbol] = (position, lst) match {
    case (_, Nil) => println("end of list");List[Symbol]()
    case (startPos, h::tl) => removeAtRecursive(position + 1, tl)
    case (any, h::tl) => h::removeAtRecursive(any+1, tl)

  }
  removeAtRecursive(0, inputList)
}
removeAt4(3, List('a, 'b, 'c, 'd, 'e, 'f))

In fact Eclipse keeps on complaining that the case(any, h::tl) is unreachable.

But if I call removeAt4(3, List('a, 'b, 'c, 'd, 'e, 'f)), shouldn't case (startPos, h::tl) effectively be translated into case (3, h::tl)?
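No: in a pattern, a lowercase identifier such as startPos is a fresh variable that matches anything (which is also why Eclipse reports the following case as unreachable). To compare against the existing startPos value, the usual fix is backticks (an uppercase or stable identifier works too). A hedged sketch:

def removeAt4(startPos: Int, inputList: List[Symbol]) = {
  def removeAtRecursive(position: Int, lst: List[Symbol]): List[Symbol] = (position, lst) match {
    case (_, Nil)              => List.empty[Symbol]
    case (`startPos`, _ :: tl) => removeAtRecursive(position + 1, tl)   // backticks: match the value, don't rebind
    case (any, h :: tl)        => h :: removeAtRecursive(any + 1, tl)
  }
  removeAtRecursive(0, inputList)
}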

by user1068378 at March 05, 2015 05:13 PM

Why return in getOrElse makes tail recursion not possible?

I am confused by the following code: the code is artificial, but I still think it is tail recursive. The compiler does not agree and produces an error message:

@annotation.tailrec
def listSize(l : Seq[Any], s: Int = 0): Int = {
  if (l.isEmpty) {
    None.getOrElse( return s )
  }
  listSize(l.tail, s + 1)
}

How does the code above make tail recursion impossible? Why is the compiler telling me:

could not optimize @tailrec annotated method listSize: it contains a recursive call not in tail position

A similar code (with return inside of map) compiles fine:

@annotation.tailrec
def listSize(l : Seq[Any], s: Int = 0): Int = {
  if (l.isEmpty) {
    Some(()).map( return s )
  }
  listSize(l.tail, s + 1)
}

Even the code obtained by inlining None.isEmpty compiles fine:

@annotation.tailrec
def listSize(l : Seq[Any], s: Int = 0): Int = {
  if (l.isEmpty) {
    if (None.isEmpty) {
      return s
    } else None.get
  }
  listSize(l.tail, s + 1)
}

On the other hand, code with slightly modified map is rejected by the compiler and produces the error:

@annotation.tailrec
def listSize(l : Seq[Any], s: Int = 0): Int = {
  if (l.isEmpty) {
    Some(()).map( x => return s )
  }
  listSize(l.tail, s + 1)
}
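Whatever the compiler's exact reasoning, the usual workaround is to avoid the non-local return in that position. Here the whole construction collapses to an ordinary branch, leaving the recursive call as the only thing in tail position (a minimal sketch):

@annotation.tailrec
def listSize(l: Seq[Any], s: Int = 0): Int =
  if (l.isEmpty) s
  else listSize(l.tail, s + 1)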

by Suma at March 05, 2015 05:06 PM

Is there a way to get RabbitMQ to deliver messages in batches, instead of one at a time?

I am using RabbitMQ with Clojure and Langohr, and I want to process messages off the queue in batches rather than one at a time. I could batch the messages myself after they're pulled off the queue, of course, but I'm curious if there's an API call or setting I'm missing that would get RMQ to deliver, say, 500 messages at a time to a consumer. Is this possible?

by Logan Buckley at March 05, 2015 05:05 PM


StackOverflow

Replace " with \"

How do I replace " with \".

Here is what im trying :

def main(args:Array[String]) = {      
  val line:String = "replace \" quote";
  println(line);
  val updatedLine = line.replaceAll("\"" , "\\\"");
  println(updatedLine);
}

output :

replace " quote
replace " quote

The output should be :

replace " quote
replace \" quote

by blue-sky at March 05, 2015 04:45 PM

CompsciOverflow

How to find witnesses for big O

I'm having trouble determining the correct way (if there is one) to find the witnesses in any given big O problem.

The example I'm struggling with: $2^x + 17$ is $O(3^x)$.

I am expected to find two witnesses such that $2^x + 17 \leq C(3^x)$ whenever $x > k$.

Unfortunately, my textbook isn't very clear on how to actually find them and instead suggests more or less guessing until you get both constants to work. It is my understanding that if one pair exists, infinitely many do.

How do you find these?
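One concrete pair that works here (hedged: just one of infinitely many valid choices) is $C = 1$ and $k = 3$. For $x > 3$ we have $3^x = (3/2)^x \cdot 2^x > (3/2)^3 \cdot 2^x = 3.375 \cdot 2^x$, so $3^x - 2^x > 2.375 \cdot 2^x > 2.375 \cdot 8 > 17$, and therefore $2^x + 17 < 3^x$. The general recipe is the same: bound the lower-order terms by a constant multiple of the dominant term once $x$ is large enough, then fold everything into $C$.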

by kill4silence at March 05, 2015 04:43 PM

/r/emacs

How can I easily keep different .emacs.d folders?

I've started to learn me some Emacs and I'm going through various guides, tutorials and tips whenever I can. I like to try different settings and packages. The thing is, I want to keep separate .emacs.d directories for each tutorial (since they're newbie tutorials and assume a blank profile), and also I want to keep separate profiles for trying new packages and settings.

Currently I create a new empty folder for each new profile and symlink them to .emacs.d, when I would like to use them.

I wonder if it's possible to start emacs by pointing it at a path other than .emacs.d, instead of symlinking folders to ~/.emacs.d, so that it is both more practical to open different Emacs instances for different profiles and also possible to use different profiles simultaneously.

submitted by anatolya
[link] [17 comments]

March 05, 2015 04:43 PM

Planet Theory

Personalization in Polytopix

Today’s blog post is a quick announcement of “personalization” feature in Polytopix. We added a new feature that allows users to add their (possibly multiple) twitter accounts in Polytopix. The user’s twitter stream is used to personalize and rank the news articles on Polytopix. More importantly, our semantic similarity algorithms will display contextual explanatory articles along-side the news articles in the user’s feed.

Polytopix


Filed under: algorithms

by kintali at March 05, 2015 04:38 PM

QuantOverflow

How trading in currency pair works, underlying techniques and mechanisms

I am somewhat experienced in Forex trading, but I have a question which has bothered me for quite some time.

If we, for instance, go back in time four months, to before the RUB (ruble) began to lose value.

Say that I wanted to make money on the loss of value of the RUB, with my Forex account being based on SEK.

What I am confused about is this, what is the difference in the amount of money I make, if, we assume that USD/RUB and SEK/RUB will both increase 10% over the next time period, which also means that the USD/SEK will increase/decrease 0% against one and other.

Lets also assume that all currency pairs have an initial 1:1 ratio. 100 SEK = 100 USD = 100 RUB.

Mechanism A)

If I buy 100 SEK in the USD/RUB currency pair, my prior understanding was that I am actually buying 100 USD. My 100 SEK are therefore first exchanged for 100 USD. When the value of the USD/RUB over the next period goes up 10%. I sell my 100 USD. But what do I then sell it for?

1) Do I get RUB? Ok, my 100 USD now gets me 110 RUB.

2) Do I get SEK? Ok, my 100 USD now gets me 100 SEK, so that scenario is out.

Now we continue with 1). When my 110 RUB are exchanged back for SEK, I should get 110 RUB/(1.1 SEK/RUB) = 100 SEK. Therefore I did not make any money on shorting the RUB through USD/RUB, despite the RUB losing value against the USD! But this can't be true. So how does the underlying mechanism work? My understanding is, and the above example shows, that it doesn't matter if you buy USD/RUB or USD/SEK currency pairs, since what you are buying is essentially USD in both cases.

But I don't believe this scenario is correct. What is then the correct underlying mechanisms to facilitate a transaction where you can make money on shorting the RUB, or as I have been thinking, the increase of value of USD versus the RUB. I used to think that I am buying USD, so it doesn't matter what currency pair, USD/RUB or USD/SEK. But as I write this question now, I am getting a different understanding.

My second possible scenario for how this might go is this:

Mechanism B)

When I "buy" 100 SEK worth of USD/RUB, I am actually lending 100 RUB from the market, selling it immediately for 100 USD, then when the price of USD/RUB goes up 10%, to 110 RUB I sell my 100 USD getting 110 RUB. I then return the 100 RUB to the lender, keeping 10 RUB. I then sell the 10 RUB for SEK, keeping 10/(1.1SEK/RUB) = 9.09 SEK, for a total profit of 9.09%, despite the USD/RUB, SEK/RUB actually having increased 10%. It would also mean that it is better were you can to trade directly in the currency pair you have, SEK/RUB rather than USD/RUB if your account is in SEK. Then you would have made 10% rather than 9.09% without any additional costs of transactions. ( I find this a bit puzzling though, need to think about this. )

This scenario is kind of strange because you are lending 100 RUB, then selling it for 100 USD, meaning that you could also just be lending 100 USD directly, or buying actual 100 USD, omitting the lender, but then we would end up in Mechanism A) again. The lender is needed, so you are never actually buying any currency. You are lending the other currency pair, and buying USD.

Note that I am kind answering the question as I formulate it, which is something I had difficulties doing for myself earlier. It pays to write questions and try to explain them :)

The second scenario would allow for a profit to be made as we can see, and one that seems to be the way it must be done.

But this also constitutes a difference in how some people normally seem to think of trading in Forex, and how the Forex platform buttons usually say, "Buy" and "Sell".

My original idea is that when you "buy" USD/RUB, it is the same as buying USD currency so it won't matter if it is USD/SEK, or USD/EUR, because it is perceived as the net result is you buying USD. It shouldn't matter what currency pair. You might even save the last exchange cost doing.

This is the way people normally seem to explain it, that you are buying USD. But you are obviously not buying USD when "buying" USD/RUB as if you were "buying" a stock. You are lending RUB, selling it for USD, selling the USD later and returning the original RUB.

When you buy a stock, you are indeed buying it. Possibly not even so in CFD' markets? When you are shorting a stock, you enter the lending mechanisms.

However, in Forex currency pair trading, it must be that you are always in the lending mechanism.

It would also explain how despite your account being in SEK, you could make money on going long on the SEK/RUB or "buying" SEK with the same amount of SEK as in your account. You are not buying SEK, you are initially lending 100 RUB, selling it for 100 SEK, making it 200 SEK in your account, and then buying 100 RUB for 90.9 SEK later, returning them, keeping (200-90.9)SEK = 109.1 SEK.

Despite the direction, you are always kind of in the mechanisms of shorting/blanking.

Is this the correct understanding?

Sorry for this long post.

by momo at March 05, 2015 04:38 PM

StackOverflow

Subtracting elements at specified indices in array

I am a beginner in functional programming and Scala. I have an array of arrays containing Double values. I want to subtract elements (basically subtract two arrays, see the example) and I have been unable to find out online how to do this.

For example, consider

val instance = Array(Array(2.1, 3.4, 5.6), 
                  Array(4.4, 7.8, 6.7))

I want to subtract 4.4 from 2.1, 7.8 from 3.4 and 6.7 from 5.6

Is this possible in Scala?

Apologies if the question seems very basic but any guidance in the right direction would be appreciated. Thank you for your time.
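Yes. One minimal sketch is to pair the two inner arrays element-wise with zip and then map the subtraction over the pairs:

val instance = Array(Array(2.1, 3.4, 5.6),
                     Array(4.4, 7.8, 6.7))

// element-wise: 2.1-4.4, 3.4-7.8, 5.6-6.7  (approximately -2.3, -4.4, -1.1)
val diffs: Array[Double] = instance(0).zip(instance(1)).map { case (a, b) => a - b }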

by Sharadha Jayaraman at March 05, 2015 04:34 PM

CompsciOverflow

8085 multiplication how does this work? [on hold]

I have this program but I don't understand it. Why are 'ral' and 'dad' used? Program:

  1. lxi h, 4050h
  2. mov e,m
  3. mvi d,00h
  4. inx h
  5. mov a,m
  6. mvi b,08h
  7. lxi h,0000h
  8. mvlt: ral
  9. jnc add1
  10. dad d
  11. add1: dcr b
  12. jz store
  13. dad h
  14. jmp mvlt
  15. store: shld 4052h
  16. rst 1
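RAL rotates the accumulator left through the carry flag, so the multiplier's bits are examined most-significant-first; DAD D adds the multiplicand (held in DE) into the 16-bit partial product in HL, and DAD H doubles HL, i.e. shifts the partial product left before the next bit. In other words, the program is plain shift-and-add multiplication. A rough high-level sketch of the same idea (hedged: ordinary Scala, not 8085 code):

def mul8(multiplicand: Int, multiplier: Int): Int = {
  var hl = 0                                                 // like register pair HL
  for (bit <- 7 to 0 by -1) {
    if (((multiplier >> bit) & 1) == 1) hl += multiplicand   // "dad d" when the rotated-out bit (carry) is 1
    if (bit != 0) hl <<= 1                                   // "dad h": double the partial product before the next bit
  }
  hl
}
// mul8(3, 5) == 15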

by Kirsche at March 05, 2015 04:32 PM


CompsciOverflow

Happened-before and Causal order

I'm reading Lamport's "Time, Clocks, and the Ordering of Events in a Distributed System" and there's a detail that's bugging me.

Lamport defines the "happened before" partial order, which I understand. Then he says that "Another way of viewing the definition is to say that a -> b means that it is possible for event a to causally affect event b".

Consider now two events a and b that are message receptions at a process P1, such that a occurs before b. Furthermore, suppose a and b are the only two events ever occurring at P1. According to the definition of the happened-before relation, we have a -> b (which makes sense, since P1 observed those events in this order).

However, I don't see how it is possible for event a to causally affect event b. Those two events are totally unrelated and could have happened in a different order.

What am I missing here?

by Nemo at March 05, 2015 04:06 PM

Fefe

News from the deep state: the members of the NSA committee ...

News from the deep state: the members of the NSA committee of inquiry are noticing that the BND has not only censored the files but also actively falsified them. Oh. Oh, you don't say! If only someone had warned you beforehand that you cannot base your case on files the accused hands over voluntarily, especially when the accused also considers himself above the law!

March 05, 2015 04:01 PM

Oh, and while we are on the BND in the NSA committee: ...

Oh, and while we are on the BND in the NSA committee of inquiry: the BND admits it "accidentally" withheld 130 files from the committee.
According to Spiegel Online, the Chancellery then asked the BND to check "whether these documents were submitted to the committee of inquiry in full", as a letter to the committee puts it. The spies' answer: "roughly 130 documents (...) have, due to an oversight, so far not been transmitted to the committee of inquiry". The agency was asked "to comment on how this could happen".
At this point I would like to highlight the words that reflect the real master-and-servant relationship here.

March 05, 2015 04:01 PM

The nasty meanies at Netzpolitik.org have ...

The nasty meanies at Netzpolitik.org have put together a net-neutrality overview. As was to be expected, it comes out rather negative.

March 05, 2015 04:01 PM

In discussions about religions there often comes ...

In discussions about religions there often comes a point where someone like me puts forward the thesis that religions are all crap, and then usually someone else chimes in: but the Hindus! They have no crusaders and no suicide bombers!

Maybe so, but they have deranged religious fundamentalists too.

So now all I need is a news item of this kind for Buddhists! :-)

Update: Wow, that was quick. One, two, three, four, five, six, seven.

March 05, 2015 04:01 PM

Lobsters

The Train Algorithm

The Train Algorithm is an incremental generational garbage collector that was designed to deal with the long and unpredictable pause times caused by other algorithms. It does this by grouping objects together on “cars” in “trains”. The algorithm provides a strategy for moving objects from the younger generation into different cars, moving objects from one car to another, and collecting cars and trains. It was first described by Hudson and Moss in the paper “Incremental Collection of Mature Objects”.

Comments

by SeanTAllen at March 05, 2015 03:56 PM

StackOverflow

Rails does not reload controllers, helpers on each request in FreeBSD 9.3

I've detected weird behavior of rails. Please give me some advice!

For example I have a code like this:

def new
  raise
end

I start rails server in development mode. Hit refresh in browser and see

RuntimeError in AuthenticationController#new

Okay. I comment out line with "raise" like this:

def new
  # raise
end

I hit refresh in the browser, but again I see the error shown above, even though in the browser I can see the code with "raise" commented out.

My guess is that controllers and helpers etc. are getting reloaded but rails returns cached results.

config/environments/development.rb:

Rails.application.configure do
  # BetterErrors::Middleware.allow_ip! '192.168.78.0/16'

  # In the development environment your application's code is reloaded on
  # every request. This slows down response time but is perfect for development
  # since you don't have to restart the web server when you make code changes.
  config.cache_classes = false

  # Do not eager load code on boot.
  config.eager_load = false

  # Show full error reports and disable caching.
  config.consider_all_requests_local       = true
  config.action_controller.perform_caching = false

  # Don't care if the mailer can't send.
  config.action_mailer.raise_delivery_errors = false

  # Print deprecation notices to the Rails logger.
  config.active_support.deprecation = :log

  # Raise an error on page load if there are pending migrations.
  config.active_record.migration_error = :page_load

  # Debug mode disables concatenation and preprocessing of assets.
  # This option may cause significant delays in view rendering with a large
  # number of complex assets.
  config.assets.debug = true

  # Asset digests allow you to set far-future HTTP expiration dates on all assets,
  # yet still be able to expire them through the digest params.
  config.assets.digest = true

  # Adds additional error checking when serving assets at runtime.
  # Checks for improperly declared sprockets dependencies.
  # Raises helpful error messages.
  config.assets.raise_runtime_errors = false

  # Raises error for missing translations
  # config.action_view.raise_on_missing_translations = true
end

How I start server:

=> Booting Puma
=> Rails 4.2.1.rc3 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
Puma 2.11.1 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://0.0.0.0:3000

Any suggestions please.

UPDATE 1. This problem does not exist on Ubuntu 14.04 but does exist on FreeBSD 9.3.

I've created simple app and tested it out in FreeBSD first (same problem), in Ubuntu then (no problem).

Can you help me with advice how to deal with this problem on FreeBSD 9.3?

by Innokenty Longway at March 05, 2015 03:34 PM


Planet Theory

(1/2)! = sqrt(pi) /2 and other conventions

 (This post is inspired by the book The cult of Pythagoras: Math and Myths which I recently
read and reviewed. See here for my review.)

STUDENT: The factorial function is only defined on the natural numbers. Is there some way to extend it to all the reals? For example, what is (1/2)! ?

BILL: Actually (1/2)! is sqrt(π)/2

STUDENT: Oh well, ask a stupid question, get a stupid answer.

BILL: No, I'm serious, (1/2)! is sqrt(π)/2.

STUDENT: C'mon, be serious. If you don't know or if its not known just tell me.

The Student has a point. (1/2)! = sqrt(π)/2 is stupid even though it's true. So I ask: is there some other way that factorial could be extended to all the reals that is as well motivated as the Gamma function? Since 0!=1 and 1!=1, perhaps (1/2)! should be 1.

Is there a combinatorial interpretation  for (1/2)!=sqrt(π) /2?

If one defined n! by piecewise linear interpolation, that would work, but is it useful? Interesting?

For that matter is the Gamma function useful? Interesting?
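For reference, the sqrt(π)/2 value is just the Gamma-function convention x! = Γ(x+1) evaluated at x = 1/2: (1/2)! = Γ(3/2) = (1/2)·Γ(1/2) = sqrt(π)/2, using the recurrence Γ(x+1) = x·Γ(x) and the Gaussian-integral fact Γ(1/2) = sqrt(π).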

ANOTHER CONVENTION:  We say that 0^0 is undefined. But I think it should be 1.
Here is why:

d/dx x^n = nx^{n-1} is true except, for n=1, at x=0. Let's make it ALSO true there by saying that x^0=1 ALWAYS
and that includes at 0.

A SECOND LOOK AT A CONVENTION: (-3)(4) = -12 makes sense, since if I owe my bookie 3 dollars 4 times then I owe him 12 dollars. But what about (-3)(-4) = 12? This makes certain other laws of arithmetic extend to the negatives, which is well and good, but we should not mistake this convention for a discovered truth. IF there were an application where defining NEG*NEG = NEG made sense, then that would be a nice alternative system, much like the different geometries.

I COULD TALK ABOUT a^{1/2} = sqrt(a) also being a convention to make a rule work out
however (1) my point is made, and (2) I think I blogged about that a while back.

So what is my point? We adopt certain conventions, which is fine and good, but we should not mistake them for eternal truths. This may also play into the question of whether math is invented or discovered.


by GASARCH (noreply@blogger.com) at March 05, 2015 03:23 PM

TheoryOverflow

Reordering data to optimize for compression?

Are there any algorithms for reordering data to optimize for compression? I understand this is specific to the data and the compression algorithm, but is there a word for this topic? Where can I find research in this area?

Specifically, I have a json list of 1.5 million strings, and I want to reorder the strings so that gzip (for HTTP) compression is optimized. Sorting the strings does pretty well, but I don't really know if that is optimal.

by Jayen at March 05, 2015 03:21 PM


Fred Wilson

A Focus On The Company Not The Investment

I said something on stage at Launch yesterday that I’d like to elaborate on:

I do not mean that your investment isn’t important and I do not mean that making money isn’t the focus of a venture capital firm and a venture capital investor. Both are absolutely true.

However, I believe if you are invested in a startup at an early stage that goes on to become a “great company”, that your investment is going to work out fabulously well.

So I think that putting all of your energy into helping the entrepreneur and the team around them build a great company is the best way to accomplish generating great returns on investment.

Venture capital is one of those asset classes where you can impact your investment. And the best VCs do that very well. I’ve studied the great VCs and how they conduct themselves. And what I have seen is that this focus on the company first and everything else second is what separates the best ones from the rest.

by Fred Wilson at March 05, 2015 03:08 PM

QuantOverflow

Co-integration Ratio Using R for pair trading [on hold]

I am hoping to do pair trading using R. To do that, I have to calculate the cointegration ratio between the two stocks.

How can I obtain this cointegration ratio using R?

by Dhanushka Rajapaksha at March 05, 2015 03:08 PM

gauss module for maxlik.lcg and optmum.lcg [on hold]

I'm wondering if anyone has the GAUSS module, preferably version 10.0 or later. I would like to have maxlik.lcg and optmum.lcg, since my version is too old and is not compatible with the code I have.

You can inbox me or leave your email.

thanks! :)

by Emma at March 05, 2015 02:56 PM

How to calculate global exposure via commitment approach for FX swaps?

How would you calculate global exposure for FX swaps using the commitment approach? In particular, would you take into account both legs?

CESR guidelines (CESR/10-788) define the exposure for currency swaps as being calculated on the "Notional value of currency leg(s)". Is your understanding that both legs (spot and forward) should be taken into account? If this is true, does this mean that a currency swap generates twice the leverage of an FX forward with identical notionals?

Thanks! Ryko

by Ryko at March 05, 2015 02:52 PM

StackOverflow

Compose in JavaScript, not applying functions correctly?

Here's my compose function, as a polyfill

Function.prototype.compose = function(prevFunc) {
    var nextFunc = this;
    return function() {
        return  nextFunc.call(this, prevFunc.apply(this,arguments));
    }
}

These work:

function function1(a){return a + ' do function1 ';}
function function2(b){return b + ' do function2 ';}
function function3(c){return c + ' do function3 ';}
var myFunction = alert(function1).compose(function2).compose(function3);
myFunction('do');

var roundedSqrt = Math.round.compose(Math.sqrt)
roundedSqrt(6);

var squaredDate = alert.compose(roundedSqrt).compose(Date.parse)
squaredDate("January 1, 2014");

But this does not work!

var d = new Date();
var alertMonth = alert.compose(getMonth); <-- 
alertMonth(d);                   ^^^^

This throws the error "Uncaught ReferenceError: getMonth is not defined" in Google Chrome.

Now, if I try either of these instead:

var d = new Date();
function pluckMonth(dateObject) {return dateObject.getMonth();}
var alertMonth = alert.compose(pluckMonth);
var alertMonth2 = alert.compose(function(d){return d.getMonth()});
alertMonth(d);
alertMonth2(d);

They work.

Ok, so, why is that? I don't want to write extra functions, I want it to just work. The compose function uses the apply utility and just uses this for the thisArg, so it should work for object members as well as stand-alone functions, right??

i.e., these are equivalent

this.method()
method.call.apply(this)

jsFiddle: http://jsfiddle.net/kohq7zub/3/

by Dan Mantyla at March 05, 2015 02:52 PM

TheoryOverflow

What do we gain by having "dependent types"? [migrated]

I thought I understood dependent typing (DT) properly, but the answer to this question: Why was there a need for Martin-Löf to create intuitionistic type theory? has had me thinking otherwise.

After reading up on DTs and trying to understand what they are, I'm left wondering: what do we gain from this notion of DTs? They seem to be more flexible and powerful than the simply typed lambda calculus (STLC), although I can't understand exactly how or why.

What is that we can do with DTs that cannot be done with STLC? Seems like adding DTs makes the theory more complicated, but what's the benefit?

From the answer to the above question:

Dependent types were proposed by de Bruijn and Howard who wanted to extend the Curry-Howard correspondence from propositional to first-order logic.

This seems to make sense at some level, but I'm still unable to grasp the big-picture of "how/why"? Maybe an example explicitly show this extension of the C-H correspondence to FO logic could help hit the point home in understanding what is the big deal with DTs? I'm not sure I comprehend this as well I ought to.

by PhD at March 05, 2015 02:46 PM

StackOverflow

Apache spark join on 2 JdbcRdds fail with "Container killed on request. Exit code is 143"

I have a script that reads two tables from a Postgres database, joins them by key and saves the result to HDFS. This script works fine with small tables but fails with large ones. The meaningful part of the code:

  val numPartitions = 32
  val appName = "JoinTables"
  val conf = new SparkConf().
    setAppName(appName).
    set("spark.default.parallelism", s"$numPartitions")
  val sc = new SparkContext(conf)
  val reviews:JdbcRDD[(Int, Int)] = new JdbcRDD(sc, ()=> DriverManager.getConnection(reviewTable.jdbcUrl),
      s"""
         | SELECT id, object_id FROM review WHERE ? <= id AND id <= ?
        """.stripMargin, 0, Int.MaxValue, numPartitions, r=> (r.getInt(1), r.getInt(2)))

  val score:JdbcRDD[(Int, (String, Double))] = new JdbcRDD(sc, ()=> DriverManager.getConnection(scoreTable.jdbcUrl),
      s"""
         | SELECT review_id, feature, score from score WHERE ? <= review_id AND review_id < ?
        """.stripMargin, 0, Int.MaxValue, numPartitions, r=> (r.getInt(1), (r.getString(2), r.getDouble(3))))

    val criteriaObject= score.
      join(reviews).
      saveAsObjectFile("/warehouse/reviews.obj_dump")

This script reads data, stops for about 10 minutes and then fail with

Container exited with a non-zero exit code 143

Table sizes are about 5GB and i have 3 nodes with 16GB RAM (8GB for each container). Did anybody encounter the same issue?

15/03/05 14:57:35 INFO scheduler.DAGScheduler: Stage 1 (JdbcRDD at ExportCriteriaObjectAssembly.scala:67) finished in 37.040 s
15/03/05 14:57:35 INFO cluster.YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 
15/03/05 14:57:35 INFO scheduler.DAGScheduler: looking for newly runnable stages
15/03/05 14:57:35 INFO scheduler.DAGScheduler: running: Set(Stage 0)
15/03/05 14:57:35 INFO scheduler.DAGScheduler: waiting: Set(Stage 2)
15/03/05 14:57:35 INFO scheduler.DAGScheduler: failed: Set()
15/03/05 14:57:35 INFO scheduler.DAGScheduler: Missing parents for Stage 2: List(Stage 0)
15/03/05 14:59:50 WARN storage.BlockManagerMasterActor: Removing BlockManager BlockManagerId(1, hadoop02, 49290) with no recent heart beats: 54457ms exceeds 45000ms
15/03/05 14:59:50 INFO storage.BlockManagerMasterActor: Removing block manager BlockManagerId(1, hadoop02, 49290)
15/03/05 15:08:11 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkExecutor@hadoop02:57510] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/03/05 15:08:11 ERROR cluster.YarnClusterScheduler: Lost executor 1 on hadoop02: remote Akka client disassociated
15/03/05 15:08:11 INFO scheduler.TaskSetManager: Re-queueing tasks for 1 from TaskSet 0.0
15/03/05 15:08:11 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hadoop02): ExecutorLostFailure (executor 1 lost)
15/03/05 15:08:11 ERROR cluster.YarnClusterSchedulerBackend: Asked to remove non-existent executor 1
15/03/05 15:08:11 INFO scheduler.TaskSetManager: Starting task 0.1 in stage 0.0 (TID 64, hadoop03, PROCESS_LOCAL, 1058 bytes)
15/03/05 15:08:11 INFO scheduler.DAGScheduler: Executor lost: 1 (epoch 1)
15/03/05 15:08:11 INFO storage.BlockManagerMasterActor: Trying to remove executor 1 from BlockManagerMaster.
15/03/05 15:08:11 INFO storage.BlockManagerMaster: Removed 1 successfully in removeExecutor
15/03/05 15:08:16 INFO yarn.YarnAllocationHandler: Completed container container_1423753029844_0240_01_000003 (state: COMPLETE, exit status: 143)
15/03/05 15:08:16 INFO yarn.YarnAllocationHandler: Container marked as failed: container_1423753029844_0240_01_000003. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
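Exit code 143 is 128 + 15, i.e. the container was killed with SIGTERM; when YARN says "Container killed on request" in the middle of a big shuffle/join, the usual suspect is an executor exceeding its YARN memory allotment (heap plus off-heap overhead). A hedged sketch of settings commonly raised in that situation (the values are illustrative assumptions, not a recommendation):

val conf = new SparkConf()
  .setAppName(appName)
  .set("spark.default.parallelism", s"$numPartitions")
  .set("spark.executor.memory", "4g")                   // heap per executor
  .set("spark.yarn.executor.memoryOverhead", "1024")    // extra off-heap headroom in MB that YARN accounts for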

by ipoteka at March 05, 2015 02:36 PM

Play Project not compiling correctly in Intellij - routes_routing.scala

I have a simple Play project with one controller (with a route) and a unittest.

When I type "sbt compile test" into terminal it runs fine and the test passes.

I cannot get the solution to compile correctly in IntelliJ however.

Controller: controllers.nisp.LandingPageController

Compile error:

.../nisp-frontend/target/scala-2.11/src_managed/main/app/routes_routing.scala 
Error:(37, 18) object LandingPageController is not a member of package app.controllers.nisp 
controllers.nisp.LandingPageController.showLandingPage(),
            ^

Directory structure:


by Spork at March 05, 2015 02:32 PM


StackOverflow

I cannot retrieve BSONDocument correctly

I have spent three hours without success trying to retrieve a BSONDocument with all of its attributes. I don't know why I only get the first attribute... where are the others?

I wrote the following test

describe("Testing reactive map user") {
    it(" convert an user to BSONDocument") {

        val basicProfile = BasicProfile("1", "1", Option("pesk"), Option("pesk"),
            Option("pesk pesk"), Option("pesk@gmail.com"), Option("url"),
            AuthenticationMethod.UserPassword, Option(new OAuth1Info("token", "secret")),
            Option(new OAuth2Info("token", Option("secret"))),
            Option(new PasswordInfo("hasher", "password", Option("salt"))))

        val user = User(Option(BSONObjectID.generate), basicProfile, List(), "31.55, 53.66", List[User](), Option(DateTime.now))

        val result = User.UserBSONWriter.write(user)

        assert(result.getAs[String]("providerId") == "1")
    }
}

The UserBSONWriter

implicit object UserBSONWriter extends BSONDocumentWriter[User] {
    def write(user: User): BSONDocument = {
        val doc = BSONDocument(
            "_id" -> user.id.getOrElse(BSONObjectID.generate),
            "providerId" -> BSONString(user.basicProfile.providerId),
            "userId" -> BSONString(user.basicProfile.userId))
        println(doc)
        doc
    }
}

And I attach the console output. I'm trying to get the value of providerId, which comes right after the BSONObjectID, but I can only get the first attribute.

Stream(Success((_id,BSONObjectID("54f862fe010000010001a3fb"))), ?)

I would appreciate it a lot if somebody could help me. One other comment: I'm getting headaches from the implicit system used by Scala's BSON API. It is not trivial to find docs about all these implicit conversions.

Outupt of println(doc)

BSONDocument(non-empty)
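Two things seem to be going on (hedged, since details vary by ReactiveMongo version): BSONDocument stores its fields as a lazy stream and its default toString only forces the head, which is why the print shows Stream(Success((_id, ...)), ?) even though the other fields are there; and getAs[T] returns an Option[T], so comparing it with a bare String is always false. A small sketch:

// getAs returns Option[String], so compare with Some(...) (or use .contains on Scala 2.11+)
assert(result.getAs[String]("providerId") == Some("1"))

// To see every field, pretty-print the document instead of relying on toString
// (BSONDocument.pretty is assumed here; it exists in recent ReactiveMongo versions).
println(BSONDocument.pretty(result))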

by Sarang at March 05, 2015 02:23 PM


StackOverflow

using macros for mixin traits in Scala

Let say that I have these traits:

object Sample14 {

  import scala.reflect.macros.whitebox.Context
  import scala.language.experimental.macros

  trait A
  trait B extends A
  trait C extends A
  trait D

  class ToMix {
    def mixTraits(x: A, y: D) = macro implMixTraits
  }

  def implMixTraits(c: Context)(x: c.Expr[A], y: c.Expr[D]): c.Expr[Any] = {/*...*/}

}

How can I use macros to mix in traits? I have used a couple of the available resources, but I really cannot work out whether I can mix them this way. Thanks for any idea or suggestion!

by Val at March 05, 2015 02:08 PM

In Scala, how can I get Play's Models and Forms to play nicely with Squeryl and PostgreSQL?

I'm currently reading Manning's Play for Scala, and playing with code as I do so.

I'm finding that having a Long "id" field on my Product model seems to get in the way of form submission to create the new Product, resulting in the following error:

    Execution exception
    [RuntimeException: Exception while executing statement : ERROR: relation "s_products_id" does not exist
      Position: 123
    errorCode: 0, sqlState: 42P01
    insert into "products" ("id", "name", "description", "is_active", "pieces", "embedded_video_code", "ean") values (nextval('"s_products_id"'),?,?,?,?,?,?)
    jdbcParams:[Purple Paperclips ,Luscious,true,100,null,1234567890242]]

In D:\Tutorials\Workspaces\Scala\Play2Paperclips\app\util\products\ProductSquerylHelper.scala:73
    70  def insert(product: Product): Product = inTransaction
    71  {
    72    val defensiveProductCopy = product.copy()
    73    productsTable.insert(defensiveProductCopy)
    74  }
    75
    76  def update(product: Product)          = inTransaction { productsTable.update(product)    }
    77  def delete(product: Product)          = inTransaction { productsTable.delete(product.id) }
    78

And, if I try to make the id field Option[Long], I start getting errors like

Cannot prove that models.Product <:< org.squeryl.KeyedEntity[Some[Option[Long]]].

What is the best way to enable this form to work as intended?

@* \app\views\products\edit.scala.html *@
@(productForm: Form[Product])(implicit flash: Flash, lang: Lang)
@import helper._
@import helper.twitterBootstrap._
@main(Messages("products.form")) {
    <h2>@Messages("products.form")</h2>

    @helper.form(action = routes.Products.save()) {
        <fieldset>
            <legend>
                @Messages("products.details", Messages("products.new"))
            </legend>
            @helper.inputText(productForm("ean"))
            @helper.inputText(productForm("name"))
            @helper.textarea(productForm("description"))
            @helper.inputText(productForm("pieces"))
            @helper.checkbox(productForm("isActive"))

        </fieldset>
        <p><input type="submit" class="btn primary" value='@Messages("products.new.submit")'></p>
    }
}

This is the model:

    case class Product (
                     id                 : Long,
                     ean                : Long,           // ean: International/[E]uropean [A]rticle [N]umber
                     name               : String,
                     description        : String,
                     pieces             : Int,

                     @Column("is_active")
                     isActive           : Boolean,

                     @Column("embedded_video_code")
                     embeddedVideoCode  : Option[String]  // See http://squeryl.org/schema-definition.html
                     ) extends KeyedEntity[Long]
{
  def this(id : Long, ean : Long, name : String, description : String) = this(id, ean, name, description, 0, false, None)

  lazy val stockItems: OneToMany[StockItem] = Database.productToStockItemsRelation.left(this)
}

This is the Form, as well as mapping with apply and unapply methods:

    object ProductFormHelper
{
  // -------------------------------------------------------------------

  val productForm: Form[Product] = Form(productFormMapping)

  private def productFormMapping = mapping (
    "id"                -> optional(longNumber),
    "ean"               -> longNumber.verifying("validation.ean.duplicate", ProductDAO.findByEan(_).isEmpty),
    "name"              -> nonEmptyText,
    "description"       -> nonEmptyText,
    "pieces"            -> number,
    "isActive"          -> boolean,
    "embeddedVideoCode" -> optional(text)
  ) (productFormApply) (productFormUnpply)

  private def productFormApply(
                                id                 : Option[Long],
                                ean                : Long,           // ean: International/[E]uropean [A]rticle [N]umber
                                name               : String,
                                description        : String,
                                pieces             : Int,
                                isActive           : Boolean,

                                embeddedVideoCode  : Option[String]  // See http://squeryl.org/schema-definition.html
                                )  =
  {
    val productId = id match
    {
      case Some(long) => long
      case None       => -1L
    }

    Product.apply(productId, ean, name, description, pieces, isActive, embeddedVideoCode)
  }

  private def productFormUnpply(product: Product) =
  {
    Option(Some(product.id), product.ean, product.name, product.description, product.pieces, product.isActive, product.embeddedVideoCode)
  }

}

Here is my Controller's save method:

def save = Action
  {
    implicit request =>
    {
      val newProductForm = ProductFormHelper.productForm.bindFromRequest()
          newProductForm.fold(
            hasErrors =
              {
                form => Redirect(routes.Products.newProduct()).flashing(Flash(form.data) + ("error" -> Messages("validation.errors")))
              },

            success =
              {
                newProduct =>
                {
                                    val insertedProduct = ProductDAO.insert(newProduct)
              val message = Messages("products.new.success", insertedProduct.name)
              Redirect(routes.Products.showByEan(insertedProduct.ean)).flashing("success" -> message)
                }
            }
          )
    }
  }

Here is my insert method:

      def insert(product: Product): Product = inTransaction
  {
    val defensiveProductCopy = product.copy()
    productsTable.insert(defensiveProductCopy)
  }

And this is the Database Schema:

 object Database extends Schema
{
  val productsTable   : Table[Product]   = table[Product]  ("products")
  on(productsTable)   { product   => declare{product.id   is (autoIncremented)}}

}

by Brian Kessler at March 05, 2015 02:08 PM

Jackson: must have name when multiple-parameter constructor annotated as Creator

Got a quite strange Jackson behaviour.

The code below will produce exception, mentioned in the subject line. Making separate parameterless constructor, annotated with @JsonCreator does not solves the problem, neither adding @JsonProperty to Meta constructor argument.

@JsonIgnoreProperties(ignoreUnknown = true)
class ClipInfo() {
  @BeanProperty
  var meta: Meta = _

  @JsonRootName("meta")
  case class Meta (dimension: List[Int])
}

But if only Meta moved out from ClipInfo everything is fixed, compilation and executions works just fine. What do I miss?

by Alex Observer at March 05, 2015 01:57 PM

Dave Winer

JavaScript in-browser almost complete

As you may know, I've become a JavaScript-in-the-browser developer. My liveblog is an example of that. It's an app, just like the stuff we used to develop for Mac and Windows, but it runs in the browser.

The browser is a complete app environment except for one crucial piece: storage. It has a simple facility called localStorage, which almost fits the bill, comes close, but ultimately doesn't do what people want.

I have solved the problem in a generic and open source way. In a very popular server platform, Node.js. However it's not widely known that this problem has been solved.

Try this little app, as a demo: http://macwrite.org/.

You can sign in, write some text, save it, sign out.

And then sign in from a different machine, and voila, the text you entered is there.

From that little bit of functionality you can build anything.

I have a new app in development, very simple, and brain-dead obvious, and useful, that builds on this. Hopefully at that point the lights will start to come on, oh shit, we're ready to build the next layer of the Internet. It really is that big a deal. And you don't need VC backing to participate. One developer, one person, can build something useful in a week. I've just done that myself. The service will virtually run itself at almost no cost, for a lot of users. That's an interesting place to be.

March 05, 2015 01:54 PM

infra-talk

Finding “old” nodes in puppetdb

We're using puppet + puppetdb in an EC2 environment where nodes come and go quite regularly. We have a custom autosign script that uses ec2 security info to validate the nodes before allowing the autosigning. This is all good, but it can leave a lot of "dead" nodes in puppet, eg. if a bunch of nodes are created by an autoscale policy and then terminated.

To get rid of these zombie nodes from puppet/puppetdb we can just use:

puppet node deactivate <certname1> <certname2> ... <certnameN>

We can query puppetdb to get a list of nodes that have not sent puppet reports for, say, 24 hours. The puppetdb query we need is something like this:

'query=["<", "report-timestamp", "$cutoff_date"]'

where $cutoff_date is a date in ISO8601 format, eg. 2015-03-05T13:39:45+0000

We can use date to generate the cutoff date with something like this:

$cutoff_date=$(date -d '-1 day' -Isec)

We then plug this into the query string and send it with curl as follows:

curl --silent -G 'http://localhost:8080/v4/nodes' 
  --data-urlencode "query=["<", "report-timestamp", "$(date -d '-1 day' -Isec)"]"

Finally, we filter through jq to get a list of certnames:

curl --silent -G 'http://localhost:8080/v4/nodes' 
  --data-urlencode "query=["<", "report-timestamp", "$(date -d '-1 day' -Isec)"]" 
  | jq '.[].certname'

We can then pass the list of nodes to the "puppet node deactivate" command.

by Robin Bowes at March 05, 2015 01:49 PM

/r/compsci

StackOverflow

Why am I getting an error in first case but not in second?

I started learning OCaml recently and came across the following problem:

*Write a function last : 'a list -> 'a option that returns the last element of a list. *

I tried the following code:

# let rec last = function
| [] -> None
| _ :: t -> last t
| [x] -> Some x;;

I got the following response:

Characters 65-68:
Warning 11: this match case is unused. 
val last : 'a list -> 'a option = <fun>    

But the following code compiles without an error:

# let rec last = function
| [] -> None
| [x] -> Some x
| _ :: t -> last t;; 

giving the response

val last : 'a list -> 'a option = <fun>

So, my doubt is why just by changing the order I am getting the error?

Any remarks and guidance will be highly appreciated.

I asked this question on programmers.stackexchange As per suggestion I am asking on overflow.

by Phani Raj at March 05, 2015 01:42 PM

How to use SBT IntegrationTest configuration from Scala objects

To make our multi-project build more manageable we split up our Build.scala file into several files, e.g. Dependencies.scala contains all dependencies:

import sbt._

object Dependencies {
  val slf4j_api = "org.slf4j" % "slf4j-api" % "1.7.7"
  ...
}

We want to add integration tests to our build. Following the SBT documentation we added

object Build extends sbt.Build {
  import Dependencies._
  import BuildSettings._
  import Version._
  import MergeStrategies.custom

  lazy val root = Project(
    id = "root",
    base = file("."),
    settings = buildSettings ++ Seq(Git.checkNoLocalChanges, TestReport.testReport)
  ).configs(IntegrationTest).settings(Defaults.itSettings: _*)
  ...
}

where Dependencies, BuildSettings, Version and MergeStrategies are custom Scala objects definied in their own files.

Following the documentation we want to add some dependencies for the IntegrationTest configuration in Dependencies.scala:

import sbt._

object Dependencies {

  val slf4j_api = "org.slf4j" % "slf4j-api" % "1.7.7"

  val junit = "junit" % "junit" % "4.11" % "test,it"
...
}

Unfortunately this breaks the build:

java.lang.IllegalArgumentException: Cannot add dependency 'junit#junit;4.11' to configuration 'it' of module ... because this configuration doesn't exist!

I guess I need to import the IntegrationTest configuration. I tried importing the IntegrationTest configuration in Dependencies.scala:

import sbt.Configurations.IntegrationTest

IntegrationTest is a lazy val defined in the Configurations object:

object Configurations {
  ...
  lazy val IntegrationTest = config("it") extend (Runtime)
  ...
 }

But that did not solve the problem.

Does someone has an idea how to solve this?

by Michael Thaler at March 05, 2015 01:41 PM

Losing information when applying function

This averages the values contained in the map "mapped"

  case class Point(label: String, points: List[Double])

  val mapped = Map(Point("A4", List(5.0, 8.0)) -> List(Point("A3", List(8.0, 4.0)), Point("A4", List(5.0, 8.0))))
                                                  //> mapped  : scala.collection.immutable.Map[general.Point,List[general.Point]] 
                                                  //| = Map(Point(A4,List(5.0, 8.0)) -> List(Point(A3,List(8.0, 4.0)), Point(A4,Li
                                                  //| st(5.0, 8.0))))
  val averaged = mapped.values.map(m => m.map(m2 => m2.points).transpose.map(xs => xs.sum / xs.size))
                                                  //> averaged  : Iterable[List[Double]] = List(List(6.5, 6.0))

Using transpose causes the label information to be lost. How the values be averaged while at same time preserving the label so something like this is produced :

List(List(6.5, 6.0) , List(A3,A4))

instead of :

List(List(6.5, 6.0))

by blue-sky at March 05, 2015 01:40 PM

RuntimeException: Cm cannot create directory, JXTA

Possibly a noob question: I'm experimenting with JXTA and want to create a simple hello world program. To this effect I have copied the hello world example into a scala project in Eclipse and inclued jxta.jar to access the API.

The code currently looks like this (Note that this is a scala project so the syntax is slightly different from Java but should be equivalent.

package JXTA_test

import net.jxta.platform.NetworkManager
import java.text.MessageFormat
import java.io.File;
import java.lang.Boolean

object main {
    def main(args: Array[String]): Unit = {

            //Copied from helloworld
            try
            {
                System.out.println("Configuring JXTA");

                val manager = new NetworkManager(NetworkManager.ConfigMode.ADHOC, "HelloWorld", new File(new File(".cache"), "HelloWorld").toURI());

                // Start the JXTA 
                System.out.println("Starting JXTA");
                manager.startNetwork();
                System.out.println("JXTA Started");

                // Wait up to 20 seconds for a connection to the JXTA Network.
                System.out.println("Waiting for a rendezvous connection");
                val connected : Boolean = manager.waitForRendezvousConnection(20 * 1000);
                System.out.println(MessageFormat.format("Connected :{0}", connected));

                // Stop JXTA
                System.out.println("Stopping JXTA");
                manager.stopNetwork();
                System.out.println("JXTA stopped");            
            }
            catch
            {
            case e: Throwable => {
                // Some type of error occurred. Print stack trace and quit.
                System.err.println("Fatal error -- Quitting");
                e.printStackTrace(System.err);
                System.exit(-1);
            }
            }
    }
}

When this is executed I get:

Configuring JXTA
Starting JXTA
mar 05, 2015 12:54:36 EM net.jxta.platform.NetworkManager configure
INFO: Created new configuration. mode = ADHOC
mar 05, 2015 12:54:36 EM net.jxta.platform.NetworkManager startNetwork
INFO: Starting JXTA Network! MODE = ADHOC,  HOME = file:/D:/Övrigt/Arbete/ExJobb3/Scala_Workspace/JXTA_test_25/.cache/HelloWorld
mar 05, 2015 12:54:36 EM net.jxta.impl.protocol.RelayConfigAdv <init>
WARNING: Unhandled Element: net.jxta.impl.document.LiteXMLElement@504bae78 / isOff = <<null value>>
mar 05, 2015 12:54:36 EM net.jxta.peergroup.WorldPeerGroupFactory newWorldPeerGroup
INFO: Making a new World Peer Group instance using : net.jxta.impl.peergroup.Platform
mar 05, 2015 12:54:36 EM net.jxta.impl.cm.Cm <init>
SEVERE: Unable to create Cm
java.lang.RuntimeException: Cm cannot create directory D:\Övrigt\Arbete\ExJobb3\Scala_Workspace\JXTA_test_25\.cache\HelloWorld\cm\jxta-WorldGroup
    at net.jxta.impl.cm.Cm.<init>(Cm.java:190)
    at net.jxta.impl.peergroup.StdPeerGroup.initFirst(StdPeerGroup.java:775)
    at net.jxta.impl.peergroup.Platform.initFirst(Platform.java:205)
    at net.jxta.impl.peergroup.GenericPeerGroup.init(GenericPeerGroup.java:929)
    at net.jxta.peergroup.WorldPeerGroupFactory.newWorldPeerGroup(WorldPeerGroupFactory.java:310)
    at net.jxta.peergroup.WorldPeerGroupFactory.<init>(WorldPeerGroupFactory.java:178)
    at net.jxta.peergroup.NetPeerGroupFactory.<init>(NetPeerGroupFactory.java:205)
    at net.jxta.platform.NetworkManager.startNetwork(NetworkManager.java:410)
    at JXTA_test.main$.main(main.scala:25)
    at JXTA_test.main.main(main.scala)

mar 05, 2015 12:54:36 EM net.jxta.impl.peergroup.StdPeerGroup initFirst
SEVERE: Error during creation of local store
java.lang.RuntimeException: Cm cannot create directory D:\Övrigt\Arbete\ExJobb3\Scala_Workspace\JXTA_test_25\.cache\HelloWorld\cm\jxta-WorldGroup
    at net.jxta.impl.cm.Cm.<init>(Cm.java:190)
    at net.jxta.impl.peergroup.StdPeerGroup.initFirst(StdPeerGroup.java:775)
    at net.jxta.impl.peergroup.Platform.initFirst(Platform.java:205)
    at net.jxta.impl.peergroup.GenericPeerGroup.init(GenericPeerGroup.java:929)
    at net.jxta.peergroup.WorldPeerGroupFactory.newWorldPeerGroup(WorldPeerGroupFactory.java:310)
    at net.jxta.peergroup.WorldPeerGroupFactory.<init>(WorldPeerGroupFactory.java:178)
    at net.jxta.peergroup.NetPeerGroupFactory.<init>(NetPeerGroupFactory.java:205)
    at net.jxta.platform.NetworkManager.startNetwork(NetworkManager.java:410)
    at JXTA_test.main$.main(main.scala:25)
    at JXTA_test.main.main(main.scala)

mar 05, 2015 12:54:36 EM net.jxta.peergroup.WorldPeerGroupFactory newWorldPeerGroup
SEVERE: World Peer Group could not be instantiated.
net.jxta.exception.PeerGroupException: Error during creation of local store
    at net.jxta.impl.peergroup.StdPeerGroup.initFirst(StdPeerGroup.java:782)
    at net.jxta.impl.peergroup.Platform.initFirst(Platform.java:205)
    at net.jxta.impl.peergroup.GenericPeerGroup.init(GenericPeerGroup.java:929)
    at net.jxta.peergroup.WorldPeerGroupFactory.newWorldPeerGroup(WorldPeerGroupFactory.java:310)
    at net.jxta.peergroup.WorldPeerGroupFactory.<init>(WorldPeerGroupFactory.java:178)
    at net.jxta.peergroup.NetPeerGroupFactory.<init>(NetPeerGroupFactory.java:205)
    at net.jxta.platform.NetworkManager.startNetwork(NetworkManager.java:410)
    at JXTA_test.main$.main(main.scala:25)
    at JXTA_test.main.main(main.scala)
Caused by: java.lang.RuntimeException: Cm cannot create directory D:\Övrigt\Arbete\ExJobb3\Scala_Workspace\JXTA_test_25\.cache\HelloWorld\cm\jxta-WorldGroup
    at net.jxta.impl.cm.Cm.<init>(Cm.java:190)
    at net.jxta.impl.peergroup.StdPeerGroup.initFirst(StdPeerGroup.java:775)
    ... 8 more

Fatal error -- Quitting
net.jxta.exception.PeerGroupException: World Peer Group could not be instantiated.
    at net.jxta.peergroup.WorldPeerGroupFactory.newWorldPeerGroup(WorldPeerGroupFactory.java:335)
    at net.jxta.peergroup.WorldPeerGroupFactory.<init>(WorldPeerGroupFactory.java:178)
    at net.jxta.peergroup.NetPeerGroupFactory.<init>(NetPeerGroupFactory.java:205)
    at net.jxta.platform.NetworkManager.startNetwork(NetworkManager.java:410)
    at JXTA_test.main$.main(main.scala:25)
    at JXTA_test.main.main(main.scala)
Caused by: net.jxta.exception.PeerGroupException: Error during creation of local store
    at net.jxta.impl.peergroup.StdPeerGroup.initFirst(StdPeerGroup.java:782)
    at net.jxta.impl.peergroup.Platform.initFirst(Platform.java:205)
    at net.jxta.impl.peergroup.GenericPeerGroup.init(GenericPeerGroup.java:929)
    at net.jxta.peergroup.WorldPeerGroupFactory.newWorldPeerGroup(WorldPeerGroupFactory.java:310)
    ... 5 more
Caused by: java.lang.RuntimeException: Cm cannot create directory D:\Övrigt\Arbete\ExJobb3\Scala_Workspace\JXTA_test_25\.cache\HelloWorld\cm\jxta-WorldGroup
    at net.jxta.impl.cm.Cm.<init>(Cm.java:190)
    at net.jxta.impl.peergroup.StdPeerGroup.initFirst(StdPeerGroup.java:775)
    ... 8 more

Clearly, the program is unable to create a folder that it needs. ".cache" it turns out is a file and not a folder so maybe that has something to do with it? Is this an Eclipse thing? Can I change it? I have tried giving "Everyone" "Full control" so I think I can rule out a permissions issue.

Any clues are appreciated.

Edit 1: I have done this both for jxta 2.5 and 2.7 with the exact same results.

Edit 2: The question is: Why does it fail to create the folders it needs?

by Felix Eriksson at March 05, 2015 01:29 PM

CompsciOverflow

Innovative ideas for online library system project [on hold]

I'm doing a project named online library system as a part of my course work. I develop this for an educational institution as a open source. I like to implement some new and innovative features in my project.

What are the new features i can add up in this project for better user satisfaction?

I think you guys can help me to have better ideas to develop this projects. Need all your suggestion.

by user29429 at March 05, 2015 01:29 PM

TheoryOverflow

Explaining computer science algorithms/concepts/ideas using metaphors

Recently I found an interesting algorithm book entitled 'Explaining Algorithms Using Metaphors' (Google books) by Michal Forišek and Monika Steinová. "Good" metaphors help people understand and even visualize the abstract concepts and ideas behind algorithms.

For example,

One well-known exposition of the shortest path using the balls-and-strings model looks as follows: To find the shortest path between $s$ and $t$, one just grabs the corresponding two balls and tries to pull them apart.

My question:

I would like to see as many metaphors as possible for computer science algorithms/concepts/ideas.
Do you know any? Do you have your own ones?

by hengxin at March 05, 2015 01:23 PM

/r/netsec

QuantOverflow

Where can I get equivalent of 3 months libor or swap historical data?

Please note: I have already checked your standard "Historical data sources" link, but it does not have the data I need:

I am looking for 5 years of libor/swap data for major currencies. Daily, or even better hourly.

Is this available anywhere?

An example of what I would like is: Bloomberg ADSW2 CMPL Curncy.

Is there a free equivalent?

by ManInMoon at March 05, 2015 01:23 PM

StackOverflow

How do I dynamically register variables with Ansible?

I want to define an Ansible role and register dynamic variables:

---
- name: Check for {{ package }}
  stat: path=/opt/packages/{{ package }}
  register: "{{ package | regex_replace('-', '_') }}"
- name: Install {{ package }} {{ package_version }}
  command: "custom-package-installer {{ package }} {{ package_version }}"
  when: "not {{ package | regex_replace('-', '_') }}.stat.exists"

Usage looks like this:

- include: install_package.yml package=foo package_version=1.2.3

However, Ansible doesn't recognise the conditional:

TASK: [example | Install foo 1.2.3] *********************************** 
fatal: [my-server] => error while evaluating conditional: not foo.stat.exists

FATAL: all hosts have already failed -- aborting

How can I define variables dynamically, expanding the {{ }}?

by Wilfred Hughes at March 05, 2015 12:51 PM

Gatling WS check does not find a match

I'm trying to save information from a websocket answer in an attribute. I need to wait for this, because i can't proceed the test without it. But my check always times out.

This is the answer from the websocket:

4{"cid":1337,"data":{"id":"54f81d216bae58670c070b57","isActive":true,"unreadCount":0,"sharedImages":[],"lastUpdateDate":{},"chatPartner":{"id":"5422667125d54ee17c8b4567","username":"demoUser","gender":"m","isOnline":false,"common":0,"age":25}}}

This is my regex pattern:

"\"cid\":1337,\"data\":\\{\"id\":\"(.+?)\".*"

This is my WScheck:

.check(wsAwait.within(10 seconds).until(1).regex(pattern).saveAs("conversationId"))

Did I miss something?

by Philipp Sander at March 05, 2015 12:45 PM

/r/netsec

QuantOverflow

How to assess stock price movement from implied volatility?

Assume that: - The underlying is at 100 - The implied volatility of ATM call/put is 30%.

Then, is it correct that expected 1-standard-deviation move over the next month is calculated as:

$$100 * 30\% \cdot \sqrt\frac{30}{252} = 10.35 ~ \text{points}$$

I am confused as to whether I should be taking the square root or not.

by Victor123 at March 05, 2015 12:29 PM

/r/netsec

TheoryOverflow

Does it help for clique if the vertices are partitioned into 3 cliques?

A graph is $(p,q)$-colorable if its vertices can be partitioned into $p$ cliques and $q$ independent sets.

For $(2,0)$-colorable graphs clique is polynomial.

I am interested how easier (if any) is clique in $(3,0)$-colorable, when the partitions are given.

Given graph $G$ and 3 partitions of its vertices $A,B,C : A \cup B \cup C=V(G)$ such that $A,B,C$ induce cliques in G.

Q1 Is clique faster in this case?

Q2 If it is faster what is the complexity?

Q3 How good can we approximate clique in this case?

We have a clique $\frac{V(G)}{3}$ for free.

by joro at March 05, 2015 12:27 PM

StackOverflow

scala By-name parameter on a anonymous function

I'm struggling to write an anonymous function with by-name parameter. Here is what i tired.

val fun = (x: Boolean, y: =>Int) => if(x) y else 0

This fail with following error.

Error:(106, 31) identifier expected but '=>' found.
    val fun = (x: Boolean, y: =>Int) => if(x) y else 0
                              ^
Error:(109, 3) ')' expected but '}' found.
  }
  ^

How ever same code as a standard function works.

  def fun1(x: Boolean, y: =>Int) = if(x) y else 0

Any pointers ?

---------------Edit-----------------

I had a two part problem. senia answer solved the initial case. Suppose I have a function takes a function.

  def xxx[A,B](f:(A,=>B)=>B)={}

As per senia solution it works.

val fun: (Int, =>Boolean) => Boolean = (x, y) => y
xxx[Int,Boolean](fun)

However I wanna get rid of the intermediate fun and call xxx with anonymous function. Doing

xxx((Int, =>Boolean) => Boolean = (x, y) => y) 

Will not work. Any ideas how to do this ?

by Sajith Silva at March 05, 2015 12:09 PM

CompsciOverflow

Formula for rays in ray tracing

Each time the camera generates a ray, the first task of the renderer is to determine which object, if any, that ray intersects first and where the intersection occurs. This intersection point is the visible point along the ray, and we will want to simulate the interaction of light with the object at this point. To find the intersection, we must test the ray for intersection against all objects in the scene and select the one that the ray intersects first. Given a ray $r$, we first start by writing it in parametric form: $$r(t)=o+td\,,$$ where $o$ is the ray's origin, $d$ is its direction vector, and $t$ is a parameter whose legal range is $[0, ∞)$.

My Question is - How is the equation $r(t)=o+td$ formed from theory? For example, why is there $+$ instead of $-$ or $*$ or $/$, etc?

by Inder Gill at March 05, 2015 11:56 AM

StackOverflow

What defines a "persistent" data structure in Clojure?

The http://clojure.org/data_structures page explains all Clojure collections as being "immutable and persistent". I have been looking for a clear definition of exactly what "persistent" means in this instance and whether anybody has a clear explanation of this?

by Geem7n at March 05, 2015 11:50 AM

CompsciOverflow

Shortest path in a mutable graph

I have an acyclic edge-weighted graph and have used Dijkstra's Algorithm with topological sort to find any shortest path to every other node from a root $s$. This is performed in time proportional to $V + E$, where $V$ is the number of vertices and $E$ the number of edges. I have $V= N^2$ vertices in my graph. Now suppose I remove $N$ vertices (for now lets assume at random, in reality there is a pattern). If I want to find shortest paths for my new graph, is there any information from the first computation that I can cache to speed things up?

by James Gallagher at March 05, 2015 11:20 AM

StackOverflow

unable to locate tools.jar. Proper solution?

I have find so many question related to this in SO.

When i type ant -version in the command prompt, the following is printed:

Unable to locate tools.jar. Expected to find it in C:\Program Files\Java\jre1.8\lib tools.jar

Apache Ant version 1.9.4 compiled on April 29 2014

Even though it is saying "Unable to locate tools.jar......" it is also printing the version number.

All the other solutions didn't work EXCEPT copying the tools.jar from:

C:\Program Files\Java\jdk1.8.0_31\lib and paste it in

C:\Program Files\Java\jre1.8.0_31\lib

After this, when i typed ant -version, only Apache Ant version 1.9.4 compiled on April 29 2014 is diplayed.

Is this solution recommeded?

FYI:

Before installing jdk 1.8, I had jdk 1.7 and jre 1.8 already installed separately. Now I have all the three inside the same folder C:\Program Files\Java.

In Environment variable->System variables , I have defined:

JAVA_HOME: C:\Program Files\Java\jdk1.8.0_31;

ANT_HOME: ant path

And in the PATH included C:\Program Files\Java\jdk1.8.0_31\bin; and ant bin path also.

by S_M at March 05, 2015 11:14 AM

Lobsters

StackOverflow

Spark streaming merge data

My understanding is that Spark Streaming serialises the closure (e.g. map, filter, etc) and executes it on worker nodes (as explained here). Is there some way of sending the results back to the driver program and perform further operations on the local machine?

In our specific use case, we are trying to turn the results produced by Spark into an observable stream (using RxScala).

by Martijn at March 05, 2015 10:32 AM

Scala: map with two or more Options

Basically I'm looking for the most scala-like way to do the following:

def sum(value1: Option[Int], value2: Option[Int]): Option[Int] = 
  if(value1.isDefined && value2.isDefined) Some(value1.get + value2.get)
  else if(value1.isDefined && value2.isEmpty) value1
  else if(value1.isEmpty && value2.isDefined) value2
  else None

This gives correct output:

sum(Some(5), Some(3))  // result = Some(8)
sum(Some(5), None)     // result = Some(5)
sum(None, Some(3))     // result = Some(3)
sum(None, None)        // result = None

Yet to sum more than two options I'd have to use way too many ifs or use some sort of loop.

EDIT-1:

While writing the question I came up with sort of an answer:

def sum2(value1: Option[Int], value2: Option[Int]): Option[Int] = 
  value1.toList ::: value2.toList reduceLeftOption { _ + _ }

This one looks very idiomatic to my inexperienced eye. This would even work with more than two values. Yet is possible to do the same without converting to lists?

EDIT-2:

I ended up with this solution (thanks to ziggystar):

def sum(values: Option[Int]*): Option[Int] = 
  values.flatten reduceLeftOption { _ + _ }

EDIT-3:

Another alternative thanks to Landei:

def sum(values: Option[Int]*): Option[Int] = 
  values collect { case Some(n) => n } reduceLeftOption { _ + _ }

by Vilius Normantas at March 05, 2015 10:30 AM

Why does Scala see more lines in a file?

Running this from the terminal prompt:

$ wc data.csv
195727 15924341 201584826 data.csv

So, 195727 lines. What about Scala?

val raw_rows: Iterator[String] = scala.io.Source.fromFile("data.csv").getLines()
println(raw_rows.length)

Result: 200945

What am I facing here? I wish for it to be the same. In fact, if I use mighty csv (opencsv wrapper lib) it also reads 195727 lines.

by Wrench at March 05, 2015 10:23 AM

Concurrency, how to create an efficient actor setup?

Alright so I have never done intense concurrent operations like this before, theres three main parts to this algorithm.

This all starts with a Vector of around 1 Million items. Each item gets processed in 3 main stages.

Task 1: Make an HTTP Request, Convert received data into a map of around 50 entries. Task 2: Receive the map and do some computations to generate a class instance based off the info found in the map. Task 3: Receive the class and generate/add to multiple output files.

I initially started out by concurrently running task 1 with 64K entries across 64 threads (1024 entries per thread.). Generating threads in a for loop.

This worked well and was relatively fast, but I keep hearing about actors and how they are heaps better than basic Java threads/Thread pools. I've created a few actors etc. But don't know where to go from here.

Basically: 1. Are actors the right way to achieve fast concurrency for this specific set of tasks. Or is there another way I should go about it. 2. How do you know how many threads/actors are too many, specifically in task one, how do you know what the limit is on number of simultaneous connections is (Im on mac). Is there a golden rue to follow? How many threads vs how large per thread pool? And the actor equivalents? 3. Is there any code I can look at that implements actors for a similar fashion? All the code Im seeing is either getting an actor to print hello world, or super complex stuff.

by David Dudson at March 05, 2015 10:17 AM

/r/netsec

StackOverflow

scala flatten List(List(List(String, List(String, String)

after pattern matching BasicDBobject from mongo casbas i get something Like that:

val arr = List(Some(None), 
               List(List(Some(None),
                         Some(None),
                         Some("54c22f3369702d7fdb8c0100"),
                         Some(None),
                         Some(None),
                         Some(None),
                         Some(None)),
                    List(Some(None), 
                         Some(None),
                         Some("54c22f3369702d7fdb8c0100"),
                         Some(None),
                         Some(None),
                         Some(None), 
                         Some(None)),
                    List(Some(None),
                         Some(None),
                         Some("54c22f3369702d7fdb8c0100"),
                         Some(None),
                         Some(None),
                         Some(None),
                         Some(None))),
                    Some(None))

I need flatten this in List(Some(none), Some(string) ..) in one list. How i can did this?

example what i need from arr:

    List( Some("54c22f3369702d7fdb8c0100"), 
Some("54c22f3369702d7fdb8c0100"),  
Some("54c22f3369702d7fdb8c0100") )

I get arr by this code:

val subjectUsers = x.map {
        case ("entries", y: BasicDBList) => y(0) match {
          case entries: BasicDBList => entries.toList map {
            case z: BasicDBObject => z.toList map {
              case ("type", "subscribe") => Some(z("subject_id"))
              case ("info", v: BasicDBObject) => Some(v("user"))
              case _ => Some(None)
            }
            case _ => Some(None)
          }
        }
        case _ => Some(None)
      }.toList

I need only List(String) like

List( Some("54c22f3369702d7fdb8c0100"), Some("54c22f3369702d7fdb8c0100"))

by Legendary at March 05, 2015 10:12 AM

Lobsters

StackOverflow

Configure Scala Script in IntelliJ IDE to run a spark standalone script through spark-submit

I want to run a standalone Spark script that I've already compiled with sbt package command. How could I set the right configuration of Scala Script to run my script in IntelliJ IDE? Currently I'm using the command line with the following command to run it(but I want to run in IntelliJ to further debugging, for example):

~/spark-1.2.0/bin/spark-submit --class "CoinPipe" target/scala-2.10/coinpipe_2.10-1.0.jar /training/data/dir 7 12

Bellow is a snapshot of what I'm trying to do: The figure shows how I'm trying to configure my script to run in IntelliJ

by Saulo Ricci at March 05, 2015 09:44 AM

CompsciOverflow

Is Morse Code binary, ternary or quinary?

I am reading the book: "Code: The Hidden Language of Computer Hardware and Software" and in Chapter 2 author (exactly) says:

Morse code is said to be a binary (literally meaning two by two) code because the components of the code consists of only two things - a dot and a dash.

Wikipedia on the other hand says (here):

Strictly speaking it is not binary, as there are five fundamental elements (see quinary). However, this does not mean Morse code cannot be represented as a binary code. In an abstract sense, this is the function that telegraph operators perform when transmitting messages (see quinary).

But then again, another Wikipedia page includes Morse Code in 'List of binary codes.'

I am very confused because I would think Morse Code actually is ternary. You have 3 different types of 'possibilities': a silence, a short beep or a long beep.

It is impossible to represent Morse Code in 'stirct binary' isn't it?

By 'strict binary' I mean, think of stream of binary: 1010111101010.. How am I supposed to represent a silence, a short beep and / or a long beep?

Only way I can think of is 'word size' a computer implements. If I (and the CPU / the interpreter of the code) know that it will be reading 8 bits every time, then I can represent Morse Code. I can simply represent a short beep with a 1 or a long beep with a 0 and the silences will be implicitly represented by the word length.(Let's say 8 bits..) So again, I have this 3rd variable/the 3rd asset in my hand: the word size.

My thinking is like this: I can reserve the first 3 bits for how many bits to be read, and last 5 bits for the Morse code in a 8bit word. Like 00110000 will mean 'A'. And I am still in 'binary' BUT I need the word size which makes it ternary isn't it? The first 3 bits say: Read only 1 bit from the following 5 bits.

Instead of binary, if we use trinary, we can show morse code like: 101021110102110222 etc.. where 1 is: dit 0 is: dah and 2 is silence. By using 222 we can code the long silence, so if you have a signal like *- *--- *- you can show it like: 102100022210, but it is not directly possible using only with 1's and 0's UNLESS you come up with something like a 'fixed' word size as I mentioned, but well this is interpreting, not saving the Morse Code as it is in binary. Imagine something like a piano, you have only the piano buttons. You want to leave a message in Morse Code for someone and you can paint buttons to black. There is no way you can leave a clear message, isn't it? You need at least one more color so you can put the silences (the ones between characters and words. This is what I mean by trenary.

I am not asking if you can represent Morse Code in 57-ary or anything else.

I have e-mailed the author (Charles Petzold) about this; he says that he demonstrates in Chapter 9 of "Code" that Morse Code can be interpreted as a binary code.

Where am I wrong with my thinking? Is what I am reading in the book, that the Morse Code being a Binary a fact or not? Is it somehow debatable? Why is Morse Code is told be quinary in one Wikipedia page, and it is also listed in List of Binary Codes page?

Edit: I have e-mailed the author and got a reply:

-----Original Message-----

From: Koray Tugay [mailto:koray@tugay.biz]

Sent: Tuesday, March 3, 2015 3:16 PM

To: cp@charlespetzold.com

Subject: Is Morse Code really binary?

Sir, could you take a look at my question here: Is Morse Code binary, ternary or quinary? quinary ?

Regards, Koray Tugay

From: "Charles Petzold"

To: "'Koray Tugay'"

Subject: RE: Is Morse Code really binary? Date: 3

Mar 2015 23:04:35 EET

Towards the end of Chapter 9 in "Code" I demonstrate that Morse Code can be interpreted as a binary code.

-----Original Message-----

From: Koray Tugay [mailto:koray@tugay.biz]

Sent: Tuesday, March 3, 2015 3:16 PM

To: cp@charlespetzold.com

Subject: Is Morse Code really binary?

Sir, could you take a look at my question here: Is Morse Code binary, ternary or quinary? quinary ?

Regards, Koray Tugay

I am not hiding his e-mail as it is really easy to find on the web anyway.

by Koray Tugay at March 05, 2015 09:42 AM

Runtime and space usage of a snippet of code [duplicate]

This question already has an answer here:

I've been trying to understand time complexity and space complexity by writing my own snippets of code and solving them. Can you see if I'm correct?

for(int i=1; i<=n; i*=2) c = i+8;
for(int j=n; j>0 ; j/=2) a[j] = 8;

I think the time complexity is $O(log_2n)$ and the space complexity is $O(n)$.

for(int i = 1; i<=n;i*=2)
    for(int j=n ; j>0 ; j/=2)
        a[j] = 8;

In this case, the time complexity is $O((log_2n)^2)$ and the space complexity is $O(n)$.

What do you think?

by failexam at March 05, 2015 09:17 AM

StackOverflow

Relation between map function and mathematical concept of map

I started reading "conceptual mathematics: an introduction in Category Theory". There, a map is defined as having a domain and codomain, with exactly one arrow leaving a given element of the domain and mapping it to an element in the codomain.

However, my concurrent endeavours in Haskell show the map function (without filtering) to map everything in domain tot everything in codomain.

This leads me to state that the map function in and by itself does not generate correct maps in the mathematical sense. Am i correct in stating this?

by Gerald V. at March 05, 2015 09:11 AM

CompsciOverflow

The maximum number of statements that could be true at the same time

I've come across a programming question. I can't solve it but I can write the question in mathematical form as follow:

Receive k equations,and for each equation receive 3 variables a, b, and c in the following form:

ax + by + cz >= 0

This is an example input of k = 4:

1) 2x + 1y + 0z >= 0

2) 1x - 2y + 0z >= 0

3) 3x + 3y - 50z >= 0

4) -1x + 2y - 50z >= 0

You should find the maximum number of statements that could be true at the same time given that x, y, and z are positive real numbers and x >= y >= z. As for the following example, the answer is 3.

Please help me with the idea or the design paradigm I should use to solve this problem. The algorithm should be relatively fast. It should be able to compute k = 1,000 in 1 seconds.

by Poomrokc The 3years at March 05, 2015 09:08 AM

StackOverflow

Why is Scala hashmap slow?

And what can be done about it?

I have run some tests and it seems that Scala Hashmap is much slower than a Java HashMap. Please prove me wrong!

For me the whole point of Hashmap is to get quick access to a value from a given key. So I find myself resorting to using a Java HashMap when speed matters, which is a bit sad. I'm not experienced enough to say for sure but it seems that the more you mix Java and Scala the more problems you are likely to face.

test("that scala hashmap is slower than java") {
    val javaMap = new util.HashMap[Int,Int](){
      for (i <- 1 to 20)
      put(i,i+1)
    }

    import collection.JavaConverters._
    val scalaMap = javaMap.asScala.toMap

    // check is a scala hashmap
    assert(scalaMap.getClass.getSuperclass === classOf[scala.collection.immutable.HashMap[Int,Int]])

    def slow = {
      val start = System.nanoTime()
      for (i <- 1 to 1000) {
        for (i <- 1 to 20) {
          scalaMap(i)
        }
      }
      System.nanoTime() - start
    }

    def fast = {
      val start = System.nanoTime()
      for (i <- 1 to 1000) {
        for (i <- 1 to 20) {
          javaMap.get(i)
        }
      }
      System.nanoTime() - start
    }

    val elapses: IndexedSeq[(Long, Long)] = {
      (1 to 1000).map({_ => (slow,fast)})
    }

    var elapsedSlow = 0L
    var elapsedFast = 0L
    for ((eSlow,eFast) <- elapses) {
      elapsedSlow += eSlow
      elapsedFast += eFast
    }

    assert(elapsedSlow > elapsedFast)

    val fraction : Double = elapsedFast.toDouble/elapsedSlow
    println(s"slower by factor of: $fraction")
}

Am I missing something?

Answer Summary

As of now, when comparing Java 8 to Scala 2.11, it appears that Java HashMap is notably speedier at lookups (for a low number of keys) than the Scala offerings - with the exception of LongMap (if your keys are Ints/Longs).

The performance difference is not so great that it should matter in most use cases. Hopefully Scala will improve the speed of their Maps. In the mean time, if you need performance (with non-integer keys) use Java.

Int keys, n=20
Long(60), Java(93), Open(170), MutableSc(243), ImmutableSc(317)

case object keys, n=20
Java(195), AnyRef(230)

by MS-H at March 05, 2015 09:01 AM

/r/netsec

Undeadly

s2k15 Hackathon Report: Jonathan Gray on X Graphic Acceleration Improvements, afl fuzzer

Our third report from the s2k15 hackathon comes from Jonathan Gray (jsg@):

During the recent s2k15 hackathon in Brisbane I made another attempt to get acceleration working on newer Southern Islands/Graphics Core Next Radeon parts. As there is no traditional EXA acceleration provided by the xf86-video-ati driver for these the only option is glamor. Glamor used to be an external library but is now distributed as part of the Xorg X server. It works by creating an EGL context and provides OpenGL based 2D acceleration.

Read more...

March 05, 2015 08:38 AM

/r/netsec

OpenToAll CTF 2 starts in 21 Hours!

Reddit's OpenToAll CTF team has put together their second CTF! The CTF lasts 72 hours and has challenges for those of all skill levels! If you're a beginner, this is a good introductory to CTF Competitions!

http://ctf2.opentoall.net

submitted by Eriner_
[link] [4 comments]

March 05, 2015 08:19 AM

StackOverflow

Scala Set[_] vs Set[Any]

I have the following line of code:

case set: Set[Any] => setFormat[Any].write(set)

However, the compiler issues a warning:

non-variable type argument Any in type pattern scala.collection.Set[Any] is unchecked since it is eliminated by erasure [warn]

Fair enough.

So I change my line to this:

case set: Set[_] => setFormat[Any].write(set)

Now I get an error:

[error] found : scala.collection.Set[_]

[error] required: scala.collection.Set[Any]

Q1. What is the difference between these two?

Then I change my code to the following:

case set: Set[_] => setFormat[Any].write(set.map(s => s))

Now it is happy with no errors or warnings.

Q2. Why does this work??

by user1724882 at March 05, 2015 08:17 AM

Scala tree recursive fold method

Given the following definition for a (not binary) tree:

sealed trait Tree[+A]
case class Node[A](value: A, children: List[Node[A]]) extends Tree[A]
object Tree {...}

I have written the following fold method:

def fold[A, B](t: Node[A])(f: A ⇒ B)(g: (B, List[B]) ⇒ B): B =
  g(f(t.value), t.children map (fold(_)(f)(g)))

that can be nicely used for (among other things) this map method:

def map[A, B](t: Node[A])(f: A ⇒ B): Node[B] =
  fold(t)(x ⇒ Node(f(x), List()))((x, y) ⇒ Node(x.value, y))

Question: can someone help me on how to write a tail recursive version of the above fold?

by user2364174 at March 05, 2015 08:16 AM

Copy local file if exists, using ansible

I'm working in a project, and we use ansible to create a deploy a cluster of servers. One of the tasks that I've to implement, is to copy a local file to the remote host, only if that file exists locally. Now I'm trying to solve this problem using this

- hosts: 127.0.0.1 
  connection: local
  tasks:
    - name: copy local filetocopy.zip to remote if exists
    - shell: if [[ -f "../filetocopy.zip" ]]; then /bin/true; else /bin/false; fi;
      register: result    
    - copy: src=../filetocopy.zip dest=/tmp/filetocopy.zip
      when: result|success

Bu this is failing with the following message: ERROR: 'action' or 'local_action' attribute missing in task "copy local filetocopy.zip to remote if exists"

I've tried to create this if with command task. I've already tried to create this task with a local_action, but I couldn't make it work. All samples that I've found, doesn't consider a shell into local_action, there are only samples of command, and neither of them have anything else then a command. Is there a way to do this task using ansible?

by dirceusemighini at March 05, 2015 08:02 AM

CompsciOverflow

How to scan an entire network, including every subnet [on hold]

Hi so i'm trying to scan a network of mine, however i know there a several different subnets. I was wondering if there is a way to scan a network for each subnet, and then scan each subnet, creating a a master list of network clients?

by Brayden at March 05, 2015 08:01 AM

DataTau

StackOverflow

Surprising scala Iterator "out of memory" error

I am surprised that this throws an out of memory error considering that the operations are on top of an scala.collection.Iterator. The size of the individual lines are small (< 1KB)

Source.fromFile("largefile.txt").getLines.map(_.size).max

It appears it is trying to load the entire file in memory. Not sure which step triggers this. This is disappointing behavior for such a basic operation. Is there a simple way around it. And any reason for this design by the library implementors ?

Tried the same in Java8.

Files.lines(Paths.get("largefile.txt")).map( it -> it.length() ).max(Integer::max).get
//result: 3131

And this works predictably. Files.lines returns java.util.stream.Stream and the heap does not explode.

update: Looks like it boils down to new line interpretation. Both files are being interpreted as UTF-8, and down the line they both call java.io.BufferedReader.readLine(). So, still need to figure out where the discrepancy is. And I compiled both snippets Main classes in to the same project jar.

by smartnut007 at March 05, 2015 07:58 AM

Spark is throwing UnsatisfiedLinkError error

I'm running a clustered vagrant setup where I have ubuntu 14.04 and java 8 installed on my master and slave machines. My cluster successfully starts up with the slaves able to connect, however I'm not running hadoop. Instead I'm running the standalone version of spark 1.2.1.

I then copied the basic SparkPi example and compiled it with this pom:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>dev.quant</groupId>
  <artifactId>neural-spark</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>${project.artifactId}</name>
  <description>My wonderfull scala app</description>
  <inceptionYear>2010</inceptionYear>
  <licenses>
    <license>
      <name>My License</name>
      <url>http://....</url>
      <distribution>repo</distribution>
    </license>
  </licenses>

  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <encoding>UTF-8</encoding>
    <scala.tools.version>2.10</scala.tools.version>
    <scala.version>2.10.4</scala.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>

    <!-- Test -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.specs2</groupId>
      <artifactId>specs2_2.10</artifactId>
      <version>3.0-M1</version>
    </dependency>
    <dependency>
      <groupId>org.scalatest</groupId>
      <artifactId>scalatest_2.10</artifactId>
      <version>3.0.0-SNAP4</version>
    </dependency>
      <dependency>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-core_2.11</artifactId>
          <version>1.2.0</version>
      </dependency>
  </dependencies>

  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>scala-compile-first</id>
                    <phase>process-resources</phase>
                    <goals>
                        <goal>add-source</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
                <execution>
                    <id>scala-test-compile</id>
                    <phase>process-test-resources</phase>
                    <goals>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.13</version>
        <configuration>
          <useFile>false</useFile>
          <disableXmlReport>true</disableXmlReport>
          <!-- If you have classpath issue like NoDefClassError,... -->
          <!-- useManifestOnlyJar>false</useManifestOnlyJar -->
          <includes>
            <include>**/*Test.*</include>
            <include>**/*Suite.*</include>
          </includes>
        </configuration>
      </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <archive>
                    <manifest>
                        <mainClass>dev.quant.App</mainClass>
                    </manifest>
                </archive>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
        </plugin>
    </plugins>
  </build>
</project>

Which works -> mvn -U clean scala:compile assembly:single

The program I run is:

package dev.quant

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._

import scala.math._

/**
 * @author ${user.name}
 */

object App {

  def foo(x : Array[String]) = x.foldLeft("")((a,b) => a + b)

  def main(args : Array[String]) {
    val conf = new SparkConf().setAppName("Spark Pi").setMaster("spark://10.0.0.2:7077").set("spark.executor.memory",".5g")
    val spark = new SparkContext(conf)
    val slices = 20
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.parallelize(1 until n, slices).map { i =>
        val x = random * 2 - 1
        val y = random * 2 - 1
        if (x*x + y*y < 1) 1 else 0
      }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }

}

So basically after running -> mvn scala:run -DmainClass=dev.quant.App I get the following error bellow...

15/03/04 22:46:15 INFO SecurityManager: Changing view acls to: dev
15/03/04 22:46:15 INFO SecurityManager: Changing modify acls to: dev
15/03/04 22:46:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(dev); users with modify permissions: Set(dev)
15/03/04 22:46:16 INFO Slf4jLogger: Slf4jLogger started
15/03/04 22:46:16 INFO Remoting: Starting remoting
15/03/04 22:46:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.0.2:50662]
15/03/04 22:46:16 INFO Utils: Successfully started service 'sparkDriver' on port 50662.
15/03/04 22:46:16 INFO SparkEnv: Registering MapOutputTracker
15/03/04 22:46:16 INFO SparkEnv: Registering BlockManagerMaster
15/03/04 22:46:16 INFO DiskBlockManager: Created local directory at /tmp/spark-b8515bff-2915-4bc2-a917-fdb7c11849b5/spark-3527d111-3aac-4378-b493-17c92b394018
15/03/04 22:46:16 INFO MemoryStore: MemoryStore started with capacity 265.1 MB
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:1873)
    at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:105)
    at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:180)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:308)
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:159)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:240)
    at dev.quant.App$.main(App.scala:18)
    at dev.quant.App.main(App.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
    at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
    at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
    at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283)
    at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:44)
    at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:214)
    at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
    ... 15 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
    ... 22 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
    at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:38)
    ... 27 more

I've also tried just submitting the jar via ./spark-submit /path/to/my-jar but no avail. I have never seen this error before, but my first impression is that JniBasedUnixGroupsMappingWithFallback is some java library that my binary spark distribution depends on, but java 8 doesn't have it. Anyway, let me know if you guys have any idea what it might be.

by Mr.Student at March 05, 2015 07:40 AM

Spark SQL Unsupported datatype TimestampType

I am just new to spark and scala.Trying to read a text file and save its a parquet file. For me one of the field I am using is the TimeStamp and its the docs say the spark1.1.0 supports java.util.TimeStamp.

The run time error I am getting while saving to parquet files is

Exception in thread "main" java.lang.RuntimeException: Unsupported datatype TimestampType at scala.sys.package$.error(package.scala:27) at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$fromDataType$2.apply(ParquetTypes.scala:301)

Any recommendation is really appreciable.

Thanks

by Vinoj Mathew at March 05, 2015 07:23 AM

scala compile error : object LongRef does not have a member create

I try to extract spark als code and modify the code then compile by myself.

I use the idea-intellj ide for the project, I 'm confused why the following error occurs...

all the error files are scala's library, there is no error say my codes' error...

so I don't know if there is any error in my own code that cause this error.

Error:scalac: Error: object LongRef does not have a member create
scala.reflect.internal.FatalError: object LongRef does not have a member create
    at scala.reflect.internal.Definitions$DefinitionsClass.scala$reflect$internal$Definitions$DefinitionsClass$$fatalMissingSymbol(Definitions.scala:1183)
    at scala.reflect.internal.Definitions$DefinitionsClass.getMember(Definitions.scala:1200)
    at scala.reflect.internal.Definitions$DefinitionsClass.getMemberMethod(Definitions.scala:1235)
    at scala.tools.nsc.transform.LambdaLift$$anonfun$scala$tools$nsc$transform$LambdaLift$$refCreateMethod$1.apply(LambdaLift.scala:41)
    at scala.tools.nsc.transform.LambdaLift$$anonfun$scala$tools$nsc$transform$LambdaLift$$refCreateMethod$1.apply(LambdaLift.scala:41)
    at scala.reflect.internal.util.Collections$$anonfun$mapFrom$1.apply(Collections.scala:182)
    at scala.reflect.internal.util.Collections$$anonfun$mapFrom$1.apply(Collections.scala:182)
    at scala.collection.immutable.List.map(List.scala:273)
    at scala.reflect.internal.util.Collections$class.mapFrom(Collections.scala:182)
    at scala.reflect.internal.SymbolTable.mapFrom(SymbolTable.scala:16)
    at scala.tools.nsc.transform.LambdaLift.scala$tools$nsc$transform$LambdaLift$$refCreateMethod$lzycompute(LambdaLift.scala:41)
    at scala.tools.nsc.transform.LambdaLift.scala$tools$nsc$transform$LambdaLift$$refCreateMethod(LambdaLift.scala:40)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.postTransform(LambdaLift.scala:480)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:535)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:56)
    at scala.reflect.api.Trees$Transformer$$anonfun$transformStats$1.apply(Trees.scala:2589)
    at scala.reflect.api.Trees$Transformer$$anonfun$transformStats$1.apply(Trees.scala:2587)
    at scala.collection.immutable.List.loop$1(List.scala:173)
    at scala.collection.immutable.List.mapConserve(List.scala:189)
    at scala.reflect.api.Trees$Transformer.transformStats(Trees.scala:2587)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transformStats(LambdaLift.scala:554)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transformStats(LambdaLift.scala:56)
    at scala.reflect.internal.Trees$class.itransform(Trees.scala:1366)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Transformer.transform(Trees.scala:2555)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.transform(TypingTransformers.scala:44)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.scala$reflect$internal$Trees$UnderConstructionTransformer$$super$transform(ExplicitOuter.scala:219)
    at scala.reflect.internal.Trees$UnderConstructionTransformer$class.transform(Trees.scala:1687)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.transform(ExplicitOuter.scala:291)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.preTransform(LambdaLift.scala:527)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:535)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:56)
    at scala.reflect.internal.Trees$$anonfun$itransform$2.apply(Trees.scala:1363)
    at scala.reflect.internal.Trees$$anonfun$itransform$2.apply(Trees.scala:1361)
    at scala.reflect.api.Trees$Transformer.atOwner(Trees.scala:2600)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:30)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:25)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:17)
    at scala.reflect.internal.Trees$class.itransform(Trees.scala:1360)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Transformer.transform(Trees.scala:2555)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.transform(TypingTransformers.scala:44)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.scala$reflect$internal$Trees$UnderConstructionTransformer$$super$transform(ExplicitOuter.scala:219)
    at scala.reflect.internal.Trees$UnderConstructionTransformer$class.transform(Trees.scala:1687)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.transform(ExplicitOuter.scala:291)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.preTransform(LambdaLift.scala:527)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:535)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:56)
    at scala.reflect.api.Trees$Transformer$$anonfun$transformStats$1.apply(Trees.scala:2589)
    at scala.reflect.api.Trees$Transformer$$anonfun$transformStats$1.apply(Trees.scala:2587)
    at scala.collection.immutable.List.loop$1(List.scala:173)
    at scala.collection.immutable.List.mapConserve(List.scala:189)
    at scala.reflect.api.Trees$Transformer.transformStats(Trees.scala:2587)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transformStats(LambdaLift.scala:554)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transformStats(LambdaLift.scala:56)
    at scala.reflect.internal.Trees$class.itransform(Trees.scala:1404)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Transformer.transform(Trees.scala:2555)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.scala$tools$nsc$transform$TypingTransformers$TypingTransformer$$super$transform(TypingTransformers.scala:40)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer$$anonfun$transform$1.apply(TypingTransformers.scala:40)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer$$anonfun$transform$1.apply(TypingTransformers.scala:40)
    at scala.reflect.api.Trees$Transformer.atOwner(Trees.scala:2600)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:30)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.transform(TypingTransformers.scala:40)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.scala$reflect$internal$Trees$UnderConstructionTransformer$$super$transform(ExplicitOuter.scala:219)
    at scala.reflect.internal.Trees$UnderConstructionTransformer$class.transform(Trees.scala:1687)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.transform(ExplicitOuter.scala:291)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.preTransform(LambdaLift.scala:527)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:535)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:56)
    at scala.reflect.api.Trees$Transformer.transformTemplate(Trees.scala:2563)
    at scala.reflect.internal.Trees$$anonfun$itransform$4.apply(Trees.scala:1408)
    at scala.reflect.internal.Trees$$anonfun$itransform$4.apply(Trees.scala:1407)
    at scala.reflect.api.Trees$Transformer.atOwner(Trees.scala:2600)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:30)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:25)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:17)
    at scala.reflect.internal.Trees$class.itransform(Trees.scala:1406)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Transformer.transform(Trees.scala:2555)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.transform(TypingTransformers.scala:44)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.scala$reflect$internal$Trees$UnderConstructionTransformer$$super$transform(ExplicitOuter.scala:219)
    at scala.reflect.internal.Trees$UnderConstructionTransformer$class.transform(Trees.scala:1687)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.transform(ExplicitOuter.scala:291)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.preTransform(LambdaLift.scala:527)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:535)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:56)
    at scala.reflect.api.Trees$Transformer$$anonfun$transformStats$1.apply(Trees.scala:2589)
    at scala.reflect.api.Trees$Transformer$$anonfun$transformStats$1.apply(Trees.scala:2587)
    at scala.collection.immutable.List.loop$1(List.scala:173)
    at scala.collection.immutable.List.mapConserve(List.scala:189)
    at scala.reflect.api.Trees$Transformer.transformStats(Trees.scala:2587)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transformStats(LambdaLift.scala:554)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transformStats(LambdaLift.scala:56)
    at scala.reflect.internal.Trees$$anonfun$itransform$7.apply(Trees.scala:1426)
    at scala.reflect.internal.Trees$$anonfun$itransform$7.apply(Trees.scala:1426)
    at scala.reflect.api.Trees$Transformer.atOwner(Trees.scala:2600)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:30)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:25)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:17)
    at scala.reflect.internal.Trees$class.itransform(Trees.scala:1425)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itransform(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Transformer.transform(Trees.scala:2555)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.scala$tools$nsc$transform$TypingTransformers$TypingTransformer$$super$transform(TypingTransformers.scala:40)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer$$anonfun$transform$2.apply(TypingTransformers.scala:42)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer$$anonfun$transform$2.apply(TypingTransformers.scala:42)
    at scala.reflect.api.Trees$Transformer.atOwner(Trees.scala:2600)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.atOwner(TypingTransformers.scala:30)
    at scala.tools.nsc.transform.TypingTransformers$TypingTransformer.transform(TypingTransformers.scala:42)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.scala$reflect$internal$Trees$UnderConstructionTransformer$$super$transform(ExplicitOuter.scala:219)
    at scala.reflect.internal.Trees$UnderConstructionTransformer$class.transform(Trees.scala:1687)
    at scala.tools.nsc.transform.ExplicitOuter$OuterPathTransformer.transform(ExplicitOuter.scala:291)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.preTransform(LambdaLift.scala:527)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:535)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transform(LambdaLift.scala:56)
    at scala.tools.nsc.ast.Trees$Transformer.transformUnit(Trees.scala:147)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.scala$tools$nsc$transform$LambdaLift$LambdaLifter$$super$transformUnit(LambdaLift.scala:560)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter$$anonfun$transformUnit$1.apply(LambdaLift.scala:560)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter$$anonfun$transformUnit$1.apply(LambdaLift.scala:560)
    at scala.reflect.internal.SymbolTable.enteringPhase(SymbolTable.scala:235)
    at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256)
    at scala.tools.nsc.transform.LambdaLift$LambdaLifter.transformUnit(LambdaLift.scala:559)
    at scala.tools.nsc.transform.Transform$Phase.apply(Transform.scala:30)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$applyPhase$1.apply$mcV$sp(Global.scala:441)
    at scala.tools.nsc.Global$GlobalPhase.withCurrentUnit(Global.scala:432)
    at scala.tools.nsc.Global$GlobalPhase.applyPhase(Global.scala:441)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$run$1.apply(Global.scala:399)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$run$1.apply(Global.scala:399)
    at scala.collection.Iterator$class.foreach(Iterator.scala:750)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
    at scala.tools.nsc.Global$GlobalPhase.run(Global.scala:399)
    at scala.tools.nsc.Global$Run.compileUnitsInternal(Global.scala:1500)
    at scala.tools.nsc.Global$Run.compileUnits(Global.scala:1487)
    at scala.tools.nsc.Global$Run.compileSources(Global.scala:1482)
    at scala.tools.nsc.Global$Run.compile(Global.scala:1580)
    at xsbt.CachedCompiler0.run(CompilerInterface.scala:126)
    at xsbt.CachedCompiler0.run(CompilerInterface.scala:102)
    at xsbt.CompilerInterface.run(CompilerInterface.scala:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:102)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:48)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:41)
    at org.jetbrains.jps.incremental.scala.local.IdeaIncrementalCompiler.compile(IdeaIncrementalCompiler.scala:29)
    at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:26)
    at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:62)
    at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:20)
    at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)

by user2848932 at March 05, 2015 07:21 AM

Integrate jclouds with aws-sns [on hold]

I want to integrate jclouds with AWS SNS, but I found that jclouds doesn't provide any provider for aws-sns.

I followed this tutorial (https://jclouds.apache.org/guides/aws/) and didn't find an aws-sns provider anywhere. Can you please suggest how we can integrate jclouds with aws-sns?

by lakshmanaRao at March 05, 2015 07:19 AM

My filter loop in scala doesn't work?

for(i <- data){
        if(i != 'a' || i != 'e' || i != 'i' || i != 'o' || i != 'u'){
            myArray(i) = i;
            println(myArray(i));
        }
    }

data is a passed-in string, and myArray is a char array. Why is it that a char that gets put into myArray can still be a vowel? Please help, thanks.
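
For comparison, a minimal sketch of what the check could look like (hypothetical, not the assignment's required structure). The condition above is always true, because any character differs from at least one of the five vowels, so the comparisons need to be combined with && or replaced by a set-membership test:

// Minimal sketch: keep only the non-vowel characters of a string.
val vowels = Set('a', 'e', 'i', 'o', 'u')
val data = "education"
val consonants = data.filter(c => !vowels.contains(c.toLower))
consonants.foreach(println)   // prints d, c, t, n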

by Callum Sangray at March 05, 2015 07:13 AM

/r/compsci

/r/netsec

StackOverflow

Using lists and Arrays in scala

def filter(data : Array[Int]) : Array[Int] = {


    var list: List[Int] = List();
    var index = 0;

    for(index <- data){
        if(index % 2 == 0){
        list.add(index);
        }
    }
    var myArray = new Array[Int](list.length);
    index = 0;
    for(index <- list){
        myArray(index) = index;
    }

    return myArray;


}

The function is meant to take the data, keep only the even numbers, and return them, but when I return the array it has lots of zeros in it. Please help? Thank you
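
For comparison, a minimal sketch of the same filtering done with the built-in filter (hypothetical, not the assignment's required structure). The zeros come from writing each kept value into myArray at the index equal to the value itself, which leaves every other slot at its default of 0:

// Minimal sketch: keep only the even numbers, preserving their order.
def filterEven(data: Array[Int]): Array[Int] =
  data.filter(_ % 2 == 0)

filterEven(Array(1, 2, 3, 4, 6))   // Array(2, 4, 6)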

by Callum Sangray at March 05, 2015 07:01 AM

TheoryOverflow

Search for all nearest neighbors within a certain radius of a point in 3D?

I have about 80 million spatial points (3D), and I want to find all the neighbors of a query point that lie within a sphere of a certain radius (given as input) centered at the query point.

I have read about some data structures that are used for this kind of search, such as Kd-trees, octrees, or range trees. For my application, I only need to populate the data structure once and then search for multiple query points.

My question is:

  • Is there any better way or a better data structure than Kd-trees in this case?
    • With kd trees, I'll have to find the median of such a large dataset multiple times, which may take a lot of time to populate the tree.

I don't know much about any of these data structures, so could you please point me to some tutorials on whatever solution you recommend. I know this question may seem like a repeat, but in all the questions I found and read, no one was using such a large set of points.
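
For what it's worth, one common alternative for fixed-radius queries is a uniform grid with cell size equal to the query radius; a minimal sketch in Scala (names hypothetical, assumes the query radius is at most the cell size):

// Minimal sketch: bucket points into a uniform grid so a radius query only has
// to examine the 3x3x3 block of cells around the query point.
case class Point(x: Double, y: Double, z: Double)

class UniformGrid(points: Seq[Point], cellSize: Double) {
  private def key(p: Point): (Int, Int, Int) =
    (math.floor(p.x / cellSize).toInt,
     math.floor(p.y / cellSize).toInt,
     math.floor(p.z / cellSize).toInt)

  private val cells: Map[(Int, Int, Int), Seq[Point]] = points.groupBy(key)

  // Assumes r <= cellSize, so the 27 surrounding cells cover the whole sphere.
  def withinRadius(q: Point, r: Double): Seq[Point] = {
    val (cx, cy, cz) = key(q)
    val r2 = r * r
    for {
      dx <- -1 to 1
      dy <- -1 to 1
      dz <- -1 to 1
      p  <- cells.getOrElse((cx + dx, cy + dy, cz + dz), Nil)
      d2 = (p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y) + (p.z - q.z) * (p.z - q.z)
      if d2 <= r2
    } yield p
  }
}

// Usage: val grid = new UniformGrid(allPoints, radius); grid.withinRadius(q, radius)

Building the grid is a single group-by pass (no repeated median finding); query cost depends on how many points fall into the 27 inspected cells.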

by user16368 at March 05, 2015 06:58 AM

StackOverflow

SOLR full import won't run script transformer; complains I'm not running Java 6

For the past few days I have been getting the following error on full import:

org.apache.solr.common.SolrException log
SEVERE: Full Import failed:org.apache.solr.handler.dataimport.DataImportHandlerException:  can be used only in java 6 or above Processing Document # 1
        at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:72)
        at org.apache.solr.handler.dataimport.ScriptTransformer.initEngine(ScriptTransformer.java:94)
        at org.apache.solr.handler.dataimport.ScriptTransformer.transformRow(ScriptTransformer.java:54)
        at org.apache.solr.handler.dataimport.EntityProcessorWrapper.applyTransformer(EntityProcessorWrapper.java:193)
        at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:251)
        at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:596)
        at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:622)
        at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:268)
        at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:187)
        at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:359)
        at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:427)
        at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:408)
Caused by: java.lang.NullPointerException
        at org.apache.solr.handler.dataimport.ScriptTransformer.initEngine(ScriptTransformer.java:89)

Can anyone help me in this regard?

I am running the following JDK. After searching, I found that some OpenJDK builds have no Rhino support and the ScriptTransformer breaks because of that. Can anyone tell me how to check for Rhino support and how to install it on the machine?

# java -version
java version "1.6.0_34"
OpenJDK Runtime Environment (IcedTea6 1.13.6) (rhel-1.13.6.1.el6_6-x86_64)
OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)

by user822391 at March 05, 2015 06:53 AM

Generalizing Clojure solution to Euler # 1

Q = Find the sum of all the multiples of 3 or 5 below 1000.

Simplest answer

(reduce + (filter #(or (== (mod % 3) 0) (== (mod % 5) 0)) (range 1000))) 

Trying for generic answer like following

(reduce + (list-nums-divisible-by-all-divisors N div1 div2 ...))
(defn list-nums-divisible-by-all-divisors
    [num & divisors]
    (let [myfn (create-fn divisors)]
        (filter myfn (range num)))) 

Here is create-fn for 2 divisors

(defn create-fn
  [div1 div2]
  #(or (== (mod % div1) 0) (== (mod % div2) 0)))

How would I write create-fn for variable number of divisors?

Is this the right approach to tackle this? I have a feeling that I should probably be using the -> or ->> operator, instead of this way.

Also, I think this becomes a generic question. Can one create and return a function using a variable number of arguments, which can then be used as an anonymous function (with another level of arguments)?

Thanks in advance :-)

by Samir S at March 05, 2015 06:41 AM

CompsciOverflow

String distance metric for possibly truncated words

I am looking for the optimal string distance metric to indicate similarity/difference between possibly truncated words. For example, I would like to find the distance metric between the name "Richard" and possible variations of it (and other names as well)

Variations: Richard | Rihcard | Richard Smith | Rich? | R?ard | ?rd | Joe

, where "?" stands for truncation. For example, the metric should indicate smaller distance between "Richard" and "?rd" or "Richard Smith" than "Richard" and "Joe".

I would really appreciate any help.

by Adam at March 05, 2015 06:31 AM

StackOverflow

Why is the actor "ask" pattern considered an anti-pattern or "code smell?"

From what I've gathered, the "ask" pattern is considered a bad practice and should be avoided. Instead, the recommended pattern is the "actor per request" model. However, this doesn't make sense to me, as the "ask" pattern does exactly this - it creates a lightweight actor per request. So why is this then considered bad, especially when futures are far more composable and are able to more elegantly handle the collation of multiple send/receives?

by Jeff at March 05, 2015 06:08 AM

UnixOverflow

Can BTRFS stripe RAID arrays?

I've been using an Ext3/4+lvm2+mdadm file system setup for about 5 years on a file server. This has grown (and moved processors a couple of times) and I've moved disks between systems (exporting/importing volume groups) and moved physical volumes onto new disks, so I'm fairly comfortable with it, but recently started looking at alternate file systems as the check and resync times for mdadm are becoming excessive and newer systems seem to be using checksums and dynamic healing to avoid this.

My setup uses RAID arrays built from partitions rather than entire disks, to allow mixing of disk sizes; for example, I have previously mixed 4 disks (2 x 3TB, 1 x 2TB and 1 x 1TB) to create a 9TB array giving 6TB of usable space. Assuming the 4 disks are formatted with 1TB partitions, the mdadm/lvm2 commands are:

# mdadm /dev/md0 --create -lraid5 -n3 /dev/sda1 /dev/sdb1 /dev/sdc1
# mdadm /dev/md1 --create -lraid5 -n3 /dev/sda2 /dev/sdb2 /dev/sdc2
# mdadm /dev/md2 --create -lraid5 -n3 /dev/sda3 /dev/sdb3 /dev/sdd1
# pvcreate /dev/md0 /dev/md1 /dev/md2
# vgcreate grp /dev/md0 /dev/md1 /dev/md2
# lvcreate -l100%FREE --name vol grp

For ZFS, ignoring the log and any caches, the command would be:

# zpool create puddle raidz sda1 sdb2 sdc1 raidz sda2 sdb2 sdc2 raidz sda3 sdb3 sdd1

However, with BTRFS it seems the best that can be done is to create 3 raid arrays:

# mkfs.btrfs -draid5 /dev/sda1 /dev/sdb1 /dev/sdc1
# mkfs.btrfs -draid5 /dev/sda2 /dev/sdb2 /dev/sdc2
# mkfs.btrfs -draid5 /dev/sda3 /dev/sdb3 /dev/sdd1

Unless I've missed something, there appears to be no way within BTRFS to merge these into a single file system (also, many discussions imply the raid5 implementation is fairly new and may not be up to production standard!). Is there a way to configure BTRFS to get a single file system across 4 disparately sized disks, with redundancy against the loss of any single disk and with 2/3 of the space available for storage?

Or is my choice restricted to ZFS or my existing ext/lvm/mdadm stack?

by StarNamer at March 05, 2015 06:02 AM

Fefe

A brief announcement from the Research Service of the ...

A brief announcement from the Research Service of the Bundestag:
City and municipal councils are not allowed to concern themselves with the planned European-American free trade agreement TTIP. If they do so anyway, they are acting unlawfully.
Oh, I see. Well then.

March 05, 2015 06:00 AM

Planet Clojure

Free Application

Picking back up

It's been a while since I've posted anything, as I've been busy working on the internals of Toccata. But it's time to pick back up.

In the last couple of posts, we've seen how to apply functions to values that are wrapped in various kinds of contexts. We've introduced several different kinds of contexts; Maybe, Error, Thunk, Reader. And they all implement the Applicative protocol.

(defprotocol Applicative
  (wrap [x v])
  (apply* [fv args]))

These different contexts are called 'effects', short for 'side effects', because they cause something to happen in addition to returning a value.

Live Free or Die

Take a look at this effect type:

(deftype free [v]
  FreeEval
  (evaluate [free-val eval-free]
    (eval-free v))

  Applicative
  (wrap [_ v]
    (free v))
  (apply* [fv args]
    (free-app fv args)))

As you can see, it does almost nothing. When it's called, it just creates a context that holds a value. And the only thing you can do with it is pass it as an argument to 'evaluate' along with a function, eval-free. The contained value v is then passed to eval-free and the result is returned. Hardly worth mentioning.

But look down at the implementation of apply*. That returns the result of a call to free-app as the result of applying a function to free arguments. So let's look at free-app.

(deftype free-app [fv args]
  FreeEval
  (evaluate [free-val eval-free]
    (let [args (map args (fn [arg]
                           (evaluate arg eval-free)))
          f (evaluate fv eval-free)]
      (apply* f args)))

  Applicative
  (wrap [_ v]
    (free v))
  (apply* [fv args]
    (free-app fv args)))

free-app's implementation of Applicative is identical to free's, so nothing new. The interesting part must be the implementation of evaluate, so let's turn our attention there.

Nesting evaluations

This implementation of evaluate is a little more involved. It recursively calls evaluate on all the args of free-app. The results are bound to the args symbol in the let. Then the fv value is also evaluated and its result is bound to f. Nothing too surprising.

But then, apply* is called to apply f to the list of args. If f is a plain old function, this is just function application. But as we've seen f and each of the args could be wrapped in a context. So the result of apply* could be some kind of side-effecting computation! In fact, depending on the type of values returned from eval-free, it could be any of the effects we've seen so far, or any other types that implement the Applicative interface.

And one thing you might have missed. Since evaluate is called recursively on each of the args, you can nest values of type free or free-app inside values of free-app to any depth.

What this gives you is a tree data structure where each of the internal nodes is a free-app value and the leaves are all free values. Then, depending on the eval-free function you pass to evaluate, that generic tree data structure can be transformed into side effecting computations in any kind of effect you choose.

The meaning of 'free'

And that gives us a clue as to why it's called free. It just means that it's not bound to any specific effect. It can be any effect it needs to be depending on how you interpret it.

A quick example

To close out, here's a quick example of how to build a small tree using apply-to which is implemented using apply*.

(apply-to list
          (free 9)
          (free 3)
          (apply-to list
                    (free 5)
                    (free 1)
                    (apply-to list
                              (free 7)
                              (free 6)
                              (free 4)))
          (free 2)
          (free 0))

We'll see how to evaluate that a couple of different ways next time.

by Toccata at March 05, 2015 06:00 AM

StackOverflow

ZeroMQ PUB/SUB not working on same machine on different JVM

I am using the ZeroMQ PUB/SUB model, where PUB and SUB are two different applications deployed on the same machine on WebSphere 6.1. This model works fine on my local machine, but when I deploy it on a remote Unix box it isn't working: my SUB never receives a message from the PUB. I tried all the options I could find suggested on the web (localhost, 127.0.0.1) but no luck. I would appreciate any help on this. I am using JeroMQ 3.2.2.
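
For reference, a minimal sketch of the kind of setup I mean (host and port are made up). The publisher binds on all interfaces rather than on localhost, and the subscriber is given time to join before the first publish:

// Minimal sketch with JeroMQ (org.zeromq.ZMQ); both sides shown together here,
// although in my case they run in different applications.
import org.zeromq.ZMQ

// Publisher side: bind on all interfaces, not only on localhost.
val pubCtx = ZMQ.context(1)
val pub = pubCtx.socket(ZMQ.PUB)
pub.bind("tcp://*:5556")

// Subscriber side: connect to the publisher's host and subscribe to everything.
val subCtx = ZMQ.context(1)
val sub = subCtx.socket(ZMQ.SUB)
sub.connect("tcp://my-remote-host:5556")
sub.subscribe("".getBytes)

// Give the subscription time to propagate before publishing (the "slow joiner" issue).
Thread.sleep(1000)
pub.send("hello".getBytes, 0)
println(new String(sub.recv(0)))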

Thanks Akash

by Akash at March 05, 2015 05:56 AM

Lobsters

/r/emacs

TheoryOverflow

An upper bound for chi-square divergence in terms of KL divergence for general alphabets

In my research I need an upper bound for the chi-square divergence in terms of the KL divergence which works for general alphabets. To make this precise, note that for two probability measures $P$ and $Q$ defined over a general alphabet $\mathcal{X}$, if $P\ll Q$, then $$\chi ^2(P||Q):=\int_{\mathcal{X}}\Big(\frac{dP}{dQ}\Big)^2dQ$$ and $$D(P||Q):=\int_{\mathcal{X}}dP\log\frac{dP}{dQ}.$$

I am looking for an upper bound on $\chi^2(P||Q)$ in terms of $D(P||Q)$ which works even if $\mathcal{X}$ is uncountable. What I need is the special case where $P=P_{XY}$ and $Q=P_X\times P_Y$ for two random variables whose joint and product distributions are $P_{XY}$ and $P_X\times P_Y$, respectively. Noticing that in this case the KL divergence is equal to the mutual information, I need an upper bound on the chi-square divergence in terms of the mutual information.

by SAmath at March 05, 2015 05:08 AM

StackOverflow

How to keep track of the entity id for the stuff that was just added to the db?

On a html-post page a user can input various fields and hit submit,

My router.clj code looks like

 (POST "/postGO" [ post-title post-input post-tags :as request ]
    (def email (get-in request [:session :ze-auth-email]))
      ;; connect to datomic and write in the request
    (dbm/add-ze-post post-title post-input post-tags email) ;; db insert

    {:status 200, 
     :body "successfully added the post to the database", 
     :headers {"Content-Type" "text/plain"}}) ;;generic return page

It works well, but I want to redirect the user afterwards to a page that can show them their uploaded post. To do this, it would be very helpful to have the eid of the entity just transacted.

;; code from my dbm file for playing directly with the database
;; id est: the db transact code
(defn add-ze-blurb [title, content, tags, useremail]
  (d/transact conn [{:db/id (d/tempid :db.part/user),
                     :post/title title,
                     :post/content content,
                     :post/tag tags,
                     :author/email useremail}]))

Is there any way to have datomic return the eid as soon as something is added to the DB successfully, or should I use another query right afterward to make sure it's there?

by sova at March 05, 2015 05:05 AM

Write a program in Java to solve the following sequence [on hold]

Google Code Jam problem (solved)

Input any numbers from the keyboard and print them in words, e.g. 150 1223 3444 = one five zero one double two three three triple four.

150 122 33444 reads as one five zero one double two double three triple four.

Rules:

Single numbers just read them separately.

2 successive numbers use double.

3 successive numbers use triple.

4 successive numbers use quadruple.

5 successive numbers use quintuple.

6 successive numbers use sextuple.

7 successive numbers use septuple.

8 successive numbers use octuple.

9 successive numbers use nonuple.

10 successive numbers use decuple.

More than 10 successive numbers read them all separately.

 import java.util.Scanner;

 public class Test {
private static Scanner sc;
private static long NUMBERS;
static long rem;
static int count = 1;
static long max;

static int i = 11;
static int[] a = new int[i + 1];

public static void main(String[] args) {
    int len = a.length;
    // storing very large value at last position
    a[len - 1] = 898979787;
    long temp;

    sc = new Scanner(System.in);
    System.out.println("enter eleven digits");
    NUMBERS = sc.nextLong();
    temp = NUMBERS;

    while (temp > 0) {
        // calculating remainder
        rem = temp % 10;

        i--;

        // storing each digits in array
        a[i] = (int) rem;
        temp /= 10;
    }

    System.out.println();

    for (int i = 1; i < 12; i++) {
        if (a[i - 1] == a[i]) {

            count++;

        }

        else if (a[i - 1] != a[i]) {
            tuples(count);
            call((long) a[i - 1]);

            count = 1;
            continue;

        }

    }

  }

   private static void tuples(int count2) {
    if (count2 == 0) {
        System.out.println(" zeotuple");
    }

    if (count2 == 9) {
        System.out.println(" nanotuples");
    }

    else if (count2 == 2) {
        System.out.print(" double");

    } else if (count2 == 3) {
        System.out.print(" triple");

    } else if (count2 == 4) {
        System.out.print(" quadruple");

    } else if (count2 == 5) {
        System.out.print(" quintuple");

    } else if (count2 == 6) {
        System.out.print(" sextuple");

    } else if (count2 == 7) {
        System.out.print(" septeble");

    } else if (count2 == 8) {
        System.out.print(" octuple");
    }

 }

  private static void call(Long long1) {
    if (long1 == 0) {
        System.out.println(" zero");
    } else if (long1 == 1) {
        System.out.print(" one");
    } else if (long1 == 2) {
        System.out.print(" two");

    } else if (long1 == 3) {
        System.out.print(" three");

    } else if (long1 == 4) {
        System.out.print(" four");

    } else if (long1 == 5) {
        System.out.print(" five");

    } else if (long1 == 6) {
        System.out.print(" six");

    } else if (long1 == 7) {
        System.out.print(" seven");

    } else if (long1 == 8) {
        System.out.print(" eight");
    } else {
        System.out.print(" nine");

    }

   }

  }

by Syed Shibli at March 05, 2015 05:04 AM

Planet Clojure

Ambly App Bootstrapping

A couple of weeks ago Ambly gained the ability to work with ClojureScript apps that bootstrap themselves.

Previously, the focus had been on bootstrapping a ClojureScript REPL into an empty JavaScriptCore environment. This involved compiling cljs.core, transmitting the JavaScript to the iOS device using WebDAV, etc. This is the pattern followed with the demo iOS app included in the Ambly source tree.

The Ambly Demo app is nice in that it lets you quickly fire up the REPL and try things out. But, most hybrid ClojureScript / native iOS apps won’t be this simple. In particular, non-trivial apps will need application-specific JavaScript on the device and available during the app initialization sequence.

A simple app illustrating this need is Shrimp, where the first screen shown upon launch is a list of items taken from its database. In Shrimp’s case, the code used to populate the list is derived from ClojureScript.



We also expect that ClojureScript-based React Native apps will have the same characteristic: Needing the JavaScript runtime fully up and running in order to launch the app.

Also, when developing an app, you should be able to launch it without requiring a REPL to first connect to it as a launch dependency.

The approach being taken with Ambly involves having the developer first invoke lein cljsbuild once to compile a copy of the app’s JavaScript (and other associated compiler output metadata) into a local "out" directory. The Xcode project refers to this "out" directory in order to include it in the app bundle. Then the challenge becomes: When the iOS app launches, the ClojureScript environment needs to be bootstrapped within the app without the REPL.

Ambly's solution effectively duplicates the bootstrapping logic which was previously in the Clojure REPL implementation in Objective C in order to handle this case. This way, during the app initialization sequence, things can be properly set up and the app can begin running.

As part of the bootstrapping process, the bundled JavaScript and other compiler output files are copied into the on-device WebDAV target directory. Then later, when the user connects with the REPL, Ambly detects that bootstrapping has already been done, and it skips that bit and simply establishes the other aspects that are needed for proper REPL operation (so that WebDAV can be used for remote compilation, etc.)

This approach has a side benefit in that it allows the developer to disconnect the REPL and reconnect it again later, with subsequent reconnects avoiding unnecessary bootstrapping.

The Shrimp app has been updated to use Ambly in this mode. I've also updated my production App Store app, which is based on Goby, to use Ambly in the same way.

This gets Ambly in good shape to help support targeting ClojureScript to React Native.

by Mike Fikes at March 05, 2015 05:00 AM

StackOverflow

Unable to Read/Write Avro RDD on cluster. (YARN cluster)

I am trying to read an Avro RDD, transform it, and write it back. I am able to run this fine locally, but when I run it on the cluster I see issues with Avro.

export SPARK_HOME=/home/dvasthimal/spark/spark-1.0.2-bin-2.4.1
export SPARK_YARN_USER_ENV="CLASSPATH=/apache/hadoop/conf"
export HADOOP_CONF_DIR=/apache/hadoop/conf
export YARN_CONF_DIR=/apache/hadoop/conf
export SPARK_JAR=$SPARK_HOME/lib/spark-assembly-1.0.2-hadoop2.4.1.jar
export SPARK_LIBRARY_PATH=/apache/hadoop/lib/native
export SPARK_YARN_USER_ENV="CLASSPATH=/apache/hadoop/conf"
export SPARK_YARN_USER_ENV="CLASSPATH=/apache/hadoop/conf"
export SPARK_CLASSPATH=/apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-company-2.jar:/apache/hadoop/lib/hadoop-lzo-0.6.0.jar:/home/dvasthimal/spark/avro-mapred-1.7.7-hadoop2.jar:/home/dvasthimal/spark/avro-1.7.7.jar
export SPARK_LIBRARY_PATH="/apache/hadoop/lib/native"
export YARN_CONF_DIR=/apache/hadoop/conf/

cd $SPARK_HOME

./bin/spark-submit --master yarn-cluster --jars /home/dvasthimal/spark/avro-mapred-1.7.7-hadoop2.jar,/home/dvasthimal/spark/avro-1.7.7.jar --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1  --queue hdmi-spark --class com.company.ep.poc.spark.reporting.SparkApp /home/dvasthimal/spark/spark_reporting-1.0-SNAPSHOT.jar startDate=2015-02-16 endDate=2015-02-16 epoutputdirectory=/user/dvasthimal/epdatasets_small/exptsession subcommand=successevents outputdir=/user/dvasthimal/epdatasets/successdetail 

Spark assembly has been built with Hive, including Datanucleus jars on classpath
15/03/04 03:20:29 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
15/03/04 03:20:30 INFO yarn.Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 2221
15/03/04 03:20:30 INFO yarn.Client: Queue info ... queueName: hdmi-spark, queueCurrentCapacity: 0.7162806, queueMaxCapacity: 0.08,
      queueApplicationCount = 7, queueChildQueueCount = 0
15/03/04 03:20:30 INFO yarn.Client: Max mem capabililty of a single resource in this cluster 16384
15/03/04 03:20:30 INFO yarn.Client: Preparing Local resources
15/03/04 03:20:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/03/04 03:20:30 WARN hdfs.BlockReaderLocal: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.


15/03/04 03:20:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 7780745 for dvasthimal on 10.115.206.112:8020
15/03/04 03:20:46 INFO yarn.Client: Uploading file:/home/dvasthimal/spark/spark_reporting-1.0-SNAPSHOT.jar to hdfs://apollo-phx-nn.company.com:8020/user/dvasthimal/.sparkStaging/application_1425075571333_61948/spark_reporting-1.0-SNAPSHOT.jar
15/03/04 03:20:47 INFO yarn.Client: Uploading file:/home/dvasthimal/spark/spark-1.0.2-bin-2.4.1/lib/spark-assembly-1.0.2-hadoop2.4.1.jar to hdfs://apollo-phx-nn.company.com:8020/user/dvasthimal/.sparkStaging/application_1425075571333_61948/spark-assembly-1.0.2-hadoop2.4.1.jar
15/03/04 03:20:52 INFO yarn.Client: Uploading file:/home/dvasthimal/spark/avro-mapred-1.7.7-hadoop2.jar to hdfs://apollo-phx-nn.company.com:8020/user/dvasthimal/.sparkStaging/application_1425075571333_61948/avro-mapred-1.7.7-hadoop2.jar
15/03/04 03:20:52 INFO yarn.Client: Uploading file:/home/dvasthimal/spark/avro-1.7.7.jar to hdfs://apollo-phx-nn.company.com:8020/user/dvasthimal/.sparkStaging/application_1425075571333_61948/avro-1.7.7.jar
15/03/04 03:20:54 INFO yarn.Client: Setting up the launch environment
15/03/04 03:20:54 INFO yarn.Client: Setting up container launch context
15/03/04 03:20:54 INFO yarn.Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx4096m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.app.name=\"com.company.ep.poc.spark.reporting.SparkApp\",  -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ApplicationMaster, --class, com.company.ep.poc.spark.reporting.SparkApp, --jar , file:/home/dvasthimal/spark/spark_reporting-1.0-SNAPSHOT.jar,  --args  'startDate=2015-02-16'  --args  'endDate=2015-02-16'  --args  'epoutputdirectory=/user/dvasthimal/epdatasets_small/exptsession'  --args  'subcommand=successevents'  --args  'outputdir=/user/dvasthimal/epdatasets/successdetail' , --executor-memory, 2048, --executor-cores, 1, --num-executors , 3, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
15/03/04 03:20:54 INFO yarn.Client: Submitting application to ASM
15/03/04 03:20:54 INFO impl.YarnClientImpl: Submitted application application_1425075571333_61948
15/03/04 03:20:56 INFO yarn.Client: Application report from ASM: 
     application identifier: application_1425075571333_61948
     appId: 61948
     clientToAMToken: null
     appDiagnostics: 
     appMasterHost: N/A
     appQueue: hdmi-spark
     appMasterRpcPort: -1
     appStartTime: 1425464454263
     yarnAppState: ACCEPTED
     distributedFinalState: UNDEFINED
     appTrackingUrl: https://apollo-phx-rm-2.company.com:50030/proxy/application_1425075571333_61948/
     appUser: dvasthimal
15/03/04 03:21:18 INFO yarn.Client: Application report from ASM: 
     application identifier: application_1425075571333_61948
     appId: 61948
     clientToAMToken: Token { kind: YARN_CLIENT_TOKEN, service:  }
     appDiagnostics: 
     appMasterHost: phxaishdc9dn0169.phx.company.com
     appQueue: hdmi-spark
     appMasterRpcPort: 0
     appStartTime: 1425464454263
     yarnAppState: RUNNING
     distributedFinalState: UNDEFINED
     appTrackingUrl: https://apollo-phx-rm-2.company.com:50030/proxy/application_1425075571333_61948/
     appUser: dvasthimal
….
….
15/03/04 03:21:22 INFO yarn.Client: Application report from ASM: 
     application identifier: application_1425075571333_61948
     appId: 61948
     clientToAMToken: Token { kind: YARN_CLIENT_TOKEN, service:  }
     appDiagnostics: 
     appMasterHost: phxaishdc9dn0169.phx.company.com
     appQueue: hdmi-spark
     appMasterRpcPort: 0
     appStartTime: 1425464454263
     yarnAppState: FINISHED
     distributedFinalState: FAILED
     appTrackingUrl: https://apollo-phx-rm-2.company.com:50030/proxy/application_1425075571333_61948/A
     appUser: dvasthimal

AM failed with following exception.

/apache/hadoop/bin/yarn logs -applicationId application_1425075571333_61948
15/03/04 03:21:22 INFO NewHadoopRDD: Input split: hdfs://apollo-phx-nn.company.com:8020/user/dvasthimal/epdatasets_small/exptsession/2015/02/16/part-r-00000.avro:0+13890
15/03/04 03:21:22 ERROR Executor: Exception in task ID 3
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
    at org.apache.avro.mapreduce.AvroKeyInputFormat.createRecordReader(AvroKeyInputFormat.java:47)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:111)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:99)
    at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:61)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
    at org.apache.spark.scheduler.Task.run(Task.scala:51)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

1) Having figured out the error, the fix should be to put the right version of the Avro libs on the AM JVM classpath. Hence I included --jars /home/dvasthimal/spark/avro-mapred-1.7.7-hadoop2.jar,/home/dvasthimal/spark/avro-1.7.7.jar in the spark-submit command. However, I still see the same exception. 2) I also tried to include these libs in SPARK_CLASSPATH, but I still see the same exception.

by deepujain at March 05, 2015 04:54 AM

Play framework 2 - i18n in javascript files

I am using Scala Play Framework 2. I want multilanguage JavaScript files, and it would be perfect to be able to put Messages("title.items") inside JavaScript files.

To do so, I think we should create a new Asset controller that injects the Lang object. Is there a better way? Where could I find some resources about this?
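
One possible approach, sketched minimally below (action name and message keys are made up): a small controller action that emits the needed strings as JavaScript, resolved against the request's language, which the page then loads with a script tag.

// Minimal sketch: serve a tiny JS snippet whose strings come from Play's Messages.
import play.api.mvc._
import play.api.i18n.{Lang, Messages}

object I18nJs extends Controller {
  def messagesJs = Action { request =>
    // Pick the first acceptable language, falling back to English.
    implicit val lang: Lang = request.acceptLanguages.headOption.getOrElse(Lang("en"))
    val js = s"""var i18n = { itemsTitle: "${Messages("title.items")}" };"""
    Ok(js).as("text/javascript")
  }
}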

by Matroska at March 05, 2015 04:37 AM

Can I write some of the code in Scala (using AndroidStudio)?

I'm using AndroidStudio, and I want to stick to using it because it's the official IDE.

All I want to do is be able to write some classes in Scala, which sounds reasonable to me.

However, all I could find online is a way to create a new project using SBT (plus Android plugins and the idea plugin) and then load it in AndroidStudio. Of course, I had to deal with all the strange errors and the like until I finally made it compile and run on the emulator. But then I tried to add a fragment drawer and again I ran into problems, because I need to add some extra Android libraries and I have no clue how to do that.

The sane approach would be to use AndroidStudio as it is (because it's the official IDE) and be able to add Scala files somehow, which then get compiled into Java bytecode and treated like normal Java code by the Android build. Is there a way to do that?

by Space monkey at March 05, 2015 04:35 AM

Lobsters

StackOverflow

Play WebSocketActor createHandler with custom name

I am using (learning to) handle websockets in play application. My controller is using WebSocket.acceptWithActor

def clientWS = WebSocket.acceptWithActor[JsValue, JsValue] { _ =>
   upstream => ClientSesssionActor.props(upstream)
}

and all is well except some other "supervisor" actor needs to be able to use context.actorSelection(...) to communicate with all/some of those ClientSessionActors.

But all my ClientSessionActors are created with a path like this one :

[akka://application/system/websockets/ REQ_ID /handler]

Here is the line where WebsocketActorSupervisor creates them :

val webSocketActor = context.watch(context.actorOf(createHandler(self), "handler"))

That is where the "handler" part of the path comes from.

I would like to pass in a specific name for my ClientSessionActor instead of getting "handler".

Overloading the whole call stack with one more parameter seems inelegant: there is WebSocketActor.scala with Connect, WebSocketActorSupervisor(props and constructor), WebSocketsActor receive and then everything inside the WebSocket.scala.

I know I can pass the supervisor reference to the props, but what about the case when the "supervisor" has been restarted and needs to reconnect with his minions.

One more thing, I realize that I might be able to get all the "handler" actors, but there are more than 2 kinds of handlers. Yes I could have them ignore msgs directed at the other groups of handlers but this just feels so redundant sending out 3 times more msgs than I should have to.
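
As a point of comparison, a minimal sketch of the registration-based workaround I am considering (names are hypothetical; this is not Play's API): each session actor registers itself with a registry actor living at a fixed path, so a restarted supervisor can still reach all sessions without knowing the generated ".../websockets/REQ_ID/handler" names.

// Minimal sketch. The registry is created once, e.g.:
//   system.actorOf(Props[SessionRegistry], "session-registry")
import akka.actor.{Actor, ActorRef, Props, Terminated}

case class Register(session: ActorRef)

class SessionRegistry extends Actor {
  var sessions = Set.empty[ActorRef]
  def receive = {
    case Register(s)   => sessions += s; context.watch(s)
    case Terminated(s) => sessions -= s
    case msg           => sessions.foreach(_ forward msg)   // fan out to all sessions
  }
}

class ClientSessionActor(upstream: ActorRef) extends Actor {
  override def preStart(): Unit =
    context.actorSelection("/user/session-registry") ! Register(self)
  def receive = { case js => upstream ! js }
}

object ClientSessionActor {
  def props(upstream: ActorRef) = Props(new ClientSessionActor(upstream))
}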

Any suggestions ?

James ? :)

Thank you

by Tjunkie at March 05, 2015 04:26 AM

CompsciOverflow

For loop and sign operator, reading Java [migrated]

I'm tutoring a high school student and my CS language skills need help.

How do you interpret this java code?

int k;
for (k = 0; k < nums.length; k++) {
       nums[k] -= sign(nums[k]);
       nums[k] += sign(nums[k]);
}

In the instructions it says that int[] nums = {-2,-1,0,1,2}. It also says that int sign(int x) returns 1 if x is positive, -1 if x is negative, and 0 if x is 0.

I suppose I need to understand what happens in the code where "-" and "=" are next to each other. And also what happens when "+" and "=" are next to each other.

by MerryMC at March 05, 2015 04:16 AM

Help with Java code for sorting GPA [migrated]

I'm tutoring a high school student and she has a CS assignment that I don't know how to help her with.

She is supposed to "write a method that will take an array of students and return the student with the highest GPA."

The code given states

public class Student {
   private String myName;
   private double myGPA;

   public Student (String n, double gpa) {
      myName = n;
      myGPA = gpa;
   }

   public String getName() {
      return myName;
   }

   public double getGPA() {
      return myGPA;
   }

   /* other methods not shown */
}

We know what they are asking, but we don't know exactly how to do it. We say they are asking us to sort people and their GPA, and then return JUST the student who has the highest GPA. Should we sort first, and then extract? Is there anything else going on?

by MerryMC at March 05, 2015 04:16 AM

/r/netsec

StackOverflow

Remove file trailer record using scalding or scala

I am trying to use a Pipe (cascading.pipe.Pipe) to read a file. Every record in the file follows a schema except the trailer record; hence, whenever the pipe-reading code executes, it throws an exception because the trailer record doesn't match the schema. The pipeline looks like:

fieldlst:List(col1, col2, col3)

val filteredInput = Csv(inputFilePath, separator = "|", fields = fieldlst, skipHeader = true) .read

Can anybody tell me a solution for this? Removing the trailer record by rewriting the file would be a simple solution, but for that I would have to read and write the entire file, and the file can be very large.
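
For context, a minimal sketch of one workaround (names are made up; header handling is omitted): read raw lines with TextLine, drop any line that does not have exactly three pipe-separated fields, and split the remaining lines manually instead of letting Csv parse the trailer.

// Minimal sketch of a Scalding job that skips the trailer record.
import com.twitter.scalding._

class FilterTrailerJob(args: Args) extends Job(args) {
  TextLine(args("input"))
    .read
    .filter('line) { line: String => line.split("\\|", -1).length == 3 }
    .mapTo('line -> ('col1, 'col2, 'col3)) { line: String =>
      val parts = line.split("\\|", -1)
      (parts(0), parts(1), parts(2))
    }
    .write(Tsv(args("output")))
}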

by Anup at March 05, 2015 04:11 AM

Returning a Function - Nothing logging to the console

I am having trouble seeing where I have gone wrong in writing this 'transformArray' function. I am trying to make it take a function as an argument and apply it to each element of some array. Why does it not return anything?

var array1 = [1,2,3,4,5];
function transformArray (aFunction) {
    return function (array) {
        return array.forEach(aFunction);
    };
}
var halve = transformArray(function (num) {return num/2;});
console.log(halve(array1));

by Jessica O'Brien at March 05, 2015 04:03 AM

Akka: how to make number of actors in balancing pool relative to thread pool size

For an Actor class encompassing a key computation in my application, I am spawning a bunch of actors behind a router:

val concurrency = 4 // to be replaced by something dynamic
val ahoCorasick = AppActorSystem.system.actorOf(MyActorClass.props(ldb)
                                         .withRouter(BalancingPool(nrOfInstances = concurrency)), 
                                          name = "foo") 

How can I get the number of Actor instances relative to the number of cores, or to the size of the thread pool that applies to the actor system? e.g. one Actor per core, or a number of actors equal to the supplied thread pool size? (might also define a thread pool specific to these actors).
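
For what it's worth, a minimal sketch of the core-count case using Runtime.availableProcessors (this does not cover reading the dispatcher's configured pool size from the Akka config):

// Minimal sketch: one routee per available core instead of a hard-coded count.
val concurrency = Runtime.getRuntime.availableProcessors
val ahoCorasick = AppActorSystem.system.actorOf(
  MyActorClass.props(ldb).withRouter(BalancingPool(nrOfInstances = concurrency)),
  name = "foo")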

by matt at March 05, 2015 04:02 AM

Testing scala play evolution in 1.sql (Down)

I am trying to test Scala Play evolutions in Dev mode. The Up part in 1.sql works fine and all the necessary tables are created in the DB. But I want to make sure that the Downs in 1.sql also work. What is the best way to force the Downs in 1.sql to be executed when the application starts (or processes a request in Dev mode)? I already have the following in the config.

# Enable evolutions for default database
evolutionplugin=enabled
applyEvolutions.default=true
applyDownEvolutions.default=true

thanks.

by Prasad at March 05, 2015 03:59 AM

Planet Clojure

Try Three Times

Distributed systems fail in indistinguishable ways. Often, retrying is a good solution to intermittent errors. We create a retry macro to handle the retries in a generic way.

Read full post

by LispCast at March 05, 2015 03:44 AM

Wes Felter

GigaOM: Google gets chatty about live migration while AWS stays mum

GigaOM: Google gets chatty about live migration while AWS stays mum:

March 05, 2015 03:27 AM

StackOverflow

Standard name for a function that modifies a function to ignore an argument

I'm using Python because it's generally easy to read, but this is not a Python-specific question.

Take the following Python function strip_argument:

def strip_argument(func_with_no_args):
  return lambda unused: func_with_no_args()

In use, I can pass a no-argument function to strip_argument, and it will return a function that accepts one argument that is never used. For example:

# some API I want to use
def set_click_event_listener(listener):
  """Args:
      listener: function which will be passed the view that was clicked.
  """
  # ...implementation...

# my code
def my_click_listener():
  # I don't care about the view, so I don't want to make that an arg.
  print "some view was clicked"

set_click_event_listener(strip_argument(my_click_listener))

Is there a standard name for the function strip_argument? I'm interested in any languages that have a function like this in the standard library.

by Cory Petosky at March 05, 2015 03:14 AM

How to profile methods in Scala?

What is a standard way of profiling Scala method calls?

What I need are hooks around a method that I can use to start and stop timers.

In Java I use aspect-oriented programming, AspectJ, to define the methods to be profiled and inject bytecode to achieve the same.

Is there a more natural way in Scala, where I can define a bunch of functions to be called before and after a function without losing any static typing in the process?
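
For illustration, a minimal hand-rolled sketch of the kind of hook I mean (not a library API): a generic wrapper that runs timing code around a block while preserving the block's static type.

// Minimal sketch: time any block of code without losing its static type.
def timed[A](label: String)(block: => A): A = {
  val start = System.nanoTime()
  try block
  finally {
    val elapsedMs = (System.nanoTime() - start) / 1e6
    println(f"$label took $elapsedMs%.3f ms")
  }
}

// Usage: the result keeps its original type (here Int).
val total: Int = timed("sum") { (1 to 1000000).sum }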

by sheki at March 05, 2015 03:10 AM

QuantOverflow

Why theta multipled by days to expiry exceeds the total time premium of the option

Sometimes, I find an option where the total time value of the option may be 5 cents(rest is intrinsic value) and there are about 15 days to expiry and theta is .08 (8 cents).

How is this possible? If it is decaying 8 cents a day, then in 15 days it will lose 120 cents of time premium (and that is assuming linear time decay, which is not true), but the time premium is only 5 cents to start with. So the total time decay can only be 5 cents. How can it keep decaying at 8 cents/day for 15 days?

Please look at the AAPL 102 call with the underlying at ~130; the call is asking 28.05. So the time value of this call is ~ 28.05 - (130 - 102) = 0.05.

Theta is 0.0882. There are ~15 days to expiry. Today is Feb 27.

by Victor123 at March 05, 2015 03:10 AM

CompsciOverflow

Programming languages: syntax and semantics, with design and implementation and with paradigms [on hold]

  1. Since a semantics of a programming language maps its syntax to its model of computation, is an implementation of a programming language mostly (if not all) about its semantics?

  2. Is the design of a programming language mostly (if not all) about the syntax of a programming language?

    Is the design of a programming language mostly (if not all) about how to write programs in the language?

  3. Are imperative, functional and logical programming paradigms mostly (if not all) about semantics or syntax of programming languages?

Thanks.

by Tim at March 05, 2015 02:18 AM

/r/compsci

Is the halting problem for TMs decidable relative to the acceptance problem?

The canonical proof says that the acceptance problem is decidable relative to the halting problem.

What about the other direction? I've searched and found no solution, only that it has been posed as an exercise. But I'm not sure how to prove this.

submitted by coaster367
[link] [comment]

March 05, 2015 02:04 AM

StackOverflow

Why some programming languages support both functional and imperative paradigms

Languages such as Rust and Go support functional and imperative styles at the same time. Isn't it easy to go wrong when the = operator is allowed while you are practicing functional programming? And what kind of advantages can programmers who write imperative code get from the functional features? Why would they design such a language, or is it simply to pile features up to satisfy the needs of different programmers?

I also want to know what the technical difficulties would be in making a language support both of these programming styles.

Thanks in advance.

by Jan Fan at March 05, 2015 02:04 AM

DataTau

Planet Theory

A Differential Geometric Approach to Classification

Authors: Qinxun Bai, Steven Rosenberg, Stan Sclaroff
Download: PDF
Abstract: We use differential geometry techniques to study the classification problem to estimate the conditional label probability $P(y=1|\mathbf x)$ for learning a plug-in classifier. In particular, we propose a geometric regularization technique to find the optimal hypersurface corresponding to the estimator of $P(y=1|\mathbf x)$. The regularization term measures the total Riemannian curvature of the hypersurface corresponding to the estimator of $P(y=1|\mathbf x)$, based on the intuition that overfitting corresponds to fast oscillations and hence large curvature of the estimator. We use gradient flow type methods to move from an initial estimator towards a minimizer of a penalty function that penalizes both the deviation of the hypersurface from the training data and the total curvature of the hypersurface. We establish Bayes consistency for our algorithm under mild initialization assumptions and implement a discrete version of this algorithm. In experiments for binary classification, our implementation compares favorably to several widely used classification methods.

March 05, 2015 01:47 AM

Saturated simple and 2-simple topological graphs with few edges

Authors: Péter Hajnal, Alexander Igamberdiev, Günter Rote, André Schulz
Download: PDF
Abstract: A simple topological graph is a topological graph in which any two edges have at most one common point, which is either their common endpoint or a proper crossing. More generally, in a $k$-simple topological graph, every pair of edges has at most $k$ common points of this kind. We construct saturated simple and 2-simple graphs with few edges. These are $k$-simple graphs in which no further edge can be added. We improve the previous upper bounds of Kynčl, Pach, Radoičić, and Tóth and show that there are saturated simple graphs on $n$ vertices with only $7n$ edges and saturated 2-simple graphs on $n$ vertices with $14.5n$ edges. As a consequence, $14.5n$ edges is also a new upper bound for $k$-simple graphs (considering all values of $k$). We also construct saturated simple and 2-simple graphs that have some vertices with low degree.

March 05, 2015 01:46 AM

Counting Inversions Adaptively

Authors: Amr Elmasry
Download: PDF
Abstract: We give a simple and efficient algorithm for adaptively counting inversions in a sequence of $n$ integers. Our algorithm runs in $O(n + n \sqrt{\lg{(Inv/n)}})$ time in the word-RAM model of computation, where $Inv$ is the number of inversions.

March 05, 2015 01:46 AM

Maximizing Submodular Functions with the Diminishing Return Property over the Integer Lattice

Authors: Tasuku Soma, Yuichi Yoshida
Download: PDF
Abstract: The problem of maximizing non-negative monotone submodular functions under a certain constraint has been intensively studied in the last decade. In this paper, we address the problem for functions defined over the integer lattice. Suppose that a non-negative monotone submodular function $f:\mathbb{Z}_+^n \to \mathbb{R}_+$ is given via an evaluation oracle. Furthermore, we assume that $f$ satisfies the diminishing return property, which is not an immediate consequence of the submodularity when the domain is the integer lattice. Then, we show (i) a $(1-1/e-\epsilon)$-approximation algorithm for a cardinality constraint with $\widetilde{O}(\frac{n}{\epsilon}\log \frac{r}{\epsilon})$ queries, where $r$ is the maximum cardinality of feasible solutions, (ii) a $(1-1/e-\epsilon)$-approximation algorithm for a polymatroid constraint with $\widetilde{O}(\frac{nr}{\epsilon^4}+n^6)$ queries, where $r$ is the rank of the polymatroid, and (iii) a $(1-1/e-\epsilon)$-approximation algorithm for a knapsack constraint with $\widetilde{O}(\frac{n^2}{\epsilon^{18}}\log \frac{1}{w})(\frac{1}{\epsilon})^{O(1/\epsilon^8)}$ queries, where $w$ is the minimum weight of elements.

Our algorithms for polymatroid constraints and knapsack constraints first extend the domain of the objective function to the Euclidean space and then run the continuous greedy algorithm. We give two different kinds of continuous extensions, one is for knapsack constraints and the other is for polymatroid constraints, which might be of independent interest.

March 05, 2015 01:46 AM

Faster unfolding of communities: speeding up the Louvain algorithm

Authors: V. A. Traag
Download: PDF
Abstract: Many complex networks exhibit a modular structure of densely connected groups of nodes. Usually, such a modular structure is uncovered by the optimisation of some quality function. Although flawed, Modularity remains one of the most popular quality functions. The Louvain algorithm was originally developed for optimising Modularity, but has been applied to a variety of methods. As such, speeding up the Louvain algorithm enables the analysis of larger graphs in a shorter time for various methods. We here suggest moving nodes to the community of a random neighbour, instead of the best neighbouring community. Although incredibly simple, it reduces the theoretical runtime complexity from $O(m)$ to $O(n \log \langle k \rangle)$ in networks with a clear community structure. In benchmark networks, resembling real networks more closely, it speeds up the algorithm roughly 2-3 times. This is due to two factors: (1) a random neighbour is likely to be in a "good" community; and (2) random neighbours are likely to be hubs, helping the convergence. Finally, the performance gain only slightly diminishes the quality, thus providing an excellent quality-performance ratio. However, these gains do not seem to hold up when detecting small communities in large graphs.

March 05, 2015 01:46 AM

Lobsters

Planet Theory

A randomized online quantile summary in $O(\frac{1}{\varepsilon} \log \frac{1}{\varepsilon})$ words

Authors: David Felber, Rafail Ostrovsky
Download: PDF
Abstract: A quantile summary is a data structure that approximates to $\varepsilon$-relative error the order statistics of a much larger underlying dataset.

In this paper we develop a randomized online quantile summary for the cash register data input model and comparison data domain model that uses $O(\frac{1}{\varepsilon} \log \frac{1}{\varepsilon})$ words of memory. This improves upon the previous best upper bound of $O(\frac{1}{\varepsilon} \log^{3/2} \frac{1}{\varepsilon})$ by Agarwal et al. (PODS 2012). Further, by a lower bound of Hung and Ting (FAW 2010) no deterministic summary for the comparison model can outperform our randomized summary in terms of space complexity. Lastly, our summary has the nice property that $O(\frac{1}{\varepsilon} \log \frac{1}{\varepsilon})$ words suffice to ensure that the success probability is $1 - e^{-\text{poly}(1/\varepsilon)}$.

March 05, 2015 01:43 AM

On the Number of Minimal Separators in Graphs

Authors: Serge Gaspers, Simon Mackenzie
Download: PDF
Abstract: We consider the maximum number of minimal separators that a graph on $n$ vertices can have.

We give a new proof that this number is in $O( ((1+\sqrt{5})/2)^n n )$.

We prove that this number is in $\omega( 1.4521^n )$, improving on the previous best lower bound of $\Omega(3^{n/3}) \subseteq \omega( 1.4422^n )$.

This gives also an improved lower bound on the number of potential maximal cliques in a graph. We would like to emphasize that our proofs are short, simple, and elementary.

March 05, 2015 01:43 AM

Sequential quantum mixing for slowly evolving sequences of Markov chains

Authors: Vedran Dunjko, Hans J. Briegel
Download: PDF
Abstract: In this work we consider the problem of preparation of the stationary distribution of irreducible, time-reversible Markov chains, which is a fundamental task in algorithmic Markov chain theory. For the classical setting, this task has a complexity lower bound of $\Omega(1/\delta)$, where $\delta$ is the spectral gap of the Markov chain, and other dependencies contribute only logarithmically. In the quantum case, the conjectured complexity is $O(\sqrt{\delta^{-1}})$ (with other dependencies contributing only logarithmically). However, this bound has only been achieved for a few special classes of Markov chains.

In this work, we provide a method for the sequential preparation of stationary distributions for sequences of general time-reversible $N$-state Markov chains, akin to the setting of simulated annealing methods.

The complexity of preparation we achieve is $O(\sqrt{\delta^{-1}} N^{1/4})$, neglecting logarithmic factors. While this result falls short of the conjectured optimal time, it still provides at least a quadratic improvement over other straightforward approaches for quantum mixing applied in this setting.

March 05, 2015 01:43 AM

Constant-Time Testing and Learning of Image Properties

Authors: Piotr Berman, Meiram Murzabulatov, Sofya Raskhodnikova
Download: PDF
Abstract: We initiate a systematic study of sublinear-time algorithms for image analysis that have access only to labeled random samples from the input. Most previous sublinear-time algorithms for image analysis were {\em query-based}, that is, they could query pixels of their choice. We consider algorithms with two types of input access: {\em sample-based} algorithms that draw independent uniformly random pixels, and {\em block-sample-based} algorithms that draw pixels from independently random square blocks of the image. We investigate three basic properties: being a half-plane, convexity, and connectedness. For the first two, our algorithms are sample-based; for connectedness, they are block-sample-based. All our algorithms have low sample complexity that depends polynomially on the inverse of the error parameter and is independent of the input size.

We design algorithms that approximate the distance to the three properties within a small additive error or, equivalently, tolerant testers for being a half-plane, convexity and connectedness. Tolerant testers for these properties, even with query access to the image, were not investigated previously. Tolerance is important in image processing applications because it allows algorithms to be robust to noise in the image. We also give (non-tolerant) testers for convexity and connectedness with better complexity than our distance approximation algorithms and previously known query-based testers.

To obtain our algorithms for convexity, we design two fast proper PAC learners of convex sets in two dimensions that work under the uniform distribution: non-agnostic and agnostic.

March 05, 2015 01:42 AM

Hierarchies of Relaxations for Online Prediction Problems with Evolving Constraints

Authors: Alexander Rakhlin, Karthik Sridharan
Download: PDF
Abstract: We study online prediction where regret of the algorithm is measured against a benchmark defined via evolving constraints. This framework captures online prediction on graphs, as well as other prediction problems with combinatorial structure. A key aspect here is that finding the optimal benchmark predictor (even in hindsight, given all the data) might be computationally hard due to the combinatorial nature of the constraints. Despite this, we provide polynomial-time \emph{prediction} algorithms that achieve low regret against combinatorial benchmark sets. We do so by building improper learning algorithms based on two ideas that work together. The first is to alleviate part of the computational burden through random playout, and the second is to employ Lasserre semidefinite hierarchies to approximate the resulting integer program. Interestingly, for our prediction algorithms, we only need to compute the values of the semidefinite programs and not the rounded solutions. However, the integrality gap for Lasserre hierarchy \emph{does} enter the generic regret bound in terms of Rademacher complexity of the benchmark set. This establishes a trade-off between the computation time and the regret bound of the algorithm.

March 05, 2015 01:42 AM

Optimal Constructions for Chain-based Cryptographic Enforcement of Information Flow Policies

Authors: Jason Crampton, Naomi Farley, Gregory Gutin, Mark Jones
Download: PDF
Abstract: The simple security property in an information flow policy can be enforced by encrypting data objects and distributing an appropriate secret to each user. A user derives a suitable decryption key from the secret and publicly available information. A chain-based enforcement scheme provides an alternative method of cryptographic enforcement that does not require any public information, the trade-off being that a user may require more than one secret. For a given information flow policy, there will be many different possible chain-based enforcement schemes. In this paper, we provide a polynomial-time algorithm for selecting a chain-based scheme which uses the minimum possible number of keys. We also compute the number of secrets that will be required and establish an upper bound on the number of secrets required by any user.

March 05, 2015 01:40 AM

Integer Addition and Hamming Weight

Authors: John Y. Kim
Download: PDF
Abstract: We study the effect of addition on the Hamming weight of a positive integer. Consider the first $2^n$ positive integers, and fix an $\alpha$ among them. We show that if the binary representation of $\alpha$ consists of $\Theta(n)$ blocks of zeros and ones, then addition by $\alpha$ causes a constant fraction of low Hamming weight integers to become high Hamming weight integers. This result has applications in complexity theory to the hardness of computing powering maps using arithmetic circuits over $F_2$. Our result implies that powering by an $\alpha$ composed of many blocks requires exponential-size arithmetic circuits over $F_2$.

March 05, 2015 01:40 AM

/r/netsec

CompsciOverflow

Understand the time complexity for this LCS (longest common subsequence) solution

I would appreciate an intuitive way to find the time complexity of dynamic programming problems. Can anyone explain the rule “#subproblems * time/subproblem” to me? I am not able to grok it.

Code for LCS -

public static String findLCS(String str1, String str2 ) {
    // If either string is empty, return the empty string
    if(null == str1 || null == str2)
        return "";
    if("".equals(str1) || "".equals(str2)) {
        return "";
    }
    // are the last characters identical?
    if(str1.charAt(str1.length()-1) == str2.charAt(str2.length()-1)) {
        // yes, so strip off the last character and recurse
        return findLCS(str1.substring(0, str1.length() -1), str2.substring(0, str2.length()-1)) + str1.substring(str1.length()-1, str1.length());
    } else {
       // no, so recurse independently on (str1_without_last_character, str2)
       // and (str1, str2_without_last_character)
       String opt1 = findLCS(str1.substring(0, str1.length() -1), str2); 
       String opt2 = findLCS(str1, str2.substring(0, str2.length()-1));
       // return the longest LCS found
       if(opt1.length() >= opt2.length())
           return opt1;
       else
           return opt2;
    }
}

I am providing the actual code instead of pseudocode (I hope the algorithm is self-explanatory from the above).
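For what it's worth, here is how far I got with the counting (it may well be wrong, which is why I am asking): if the recursion were memoized on the pair of prefix lengths (i, j), there would be at most (m+1)*(n+1) distinct subproblems for strings of lengths m and n, and each subproblem does a constant number of comparisons besides its recursive calls, so “#subproblems * time/subproblem” would give O(m*n), ignoring the cost of building the result strings. As written, without memoization, the two recursive calls in the else branch can re-solve the same prefix pair over and over, so the number of calls can grow exponentially, roughly up to 2^(m+n) in the worst case.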

by abipc at March 05, 2015 01:37 AM

Lobsters

/r/emacs

find-and-ctags v0.0.1

Create and update TAGS by combining Find and Ctags for any language on Windows/Linux/OSX.

https://github.com/redguardtoo/find-and-ctags

  • All the tools you need are provided. You can set up any project in 1 minute

  • The TAGS file created is portable. You can use it anywhere

  • Easy to manage. All your project settings are in your "~/.emacs"

  • Powerful and versatile. The power of Find/Ctags/Lisp is at your fingertips

submitted by redguardtoo

March 05, 2015 01:34 AM

arXiv Networking and Internet Architecture

A Blind Zone Alert System based on Intra-vehicular Wireless Sensor Networks. (arXiv:1503.01440v1 [cs.NI])

Due to the increasing number of sensors deployed in modern vehicles, Intra-Vehicular Wireless Sensor Networks (IVWSNs) have recently received a lot of attention in the automotive industry as they can reduce the amount of wiring harness inside a vehicle. By removing the wires, car manufacturers can reduce the weight of a vehicle and improve engine performance, fuel economy, and reliability. In addition to these direct benefits, an IVWSN is a versatile platform that can support other vehicular applications as well. An example application, known as a Side Blind Zone Alert (SBZA) system, which monitors the blind zone of the vehicle and alerts the driver in a timely manner to prevent collisions, is discussed in this paper. The performance of the IVWSN-based SBZA system is evaluated via real experiments conducted on two test vehicles. Our results show that the proposed system can achieve approximately 95% to 99% detection rate with less than 15% false alarm rate. Compared to commercial systems using radars or cameras, the main benefit of the IVWSN-based SBZA is substantially lower cost.

by Jiun-Ren Lin, Timothy Talty, Ozan K. Tonguz at March 05, 2015 01:30 AM

Combinatorial Auction-Based Pricing for Multi-tenant Autonomous Vehicle Public Transportation System. (arXiv:1503.01425v1 [cs.GT])

A smart city provides its people with high standard of living through advanced technologies and transport is one of the major foci. With the advent of autonomous vehicles (AVs), an AV-based public transportation system has been proposed recently, which is capable of providing new forms of transportation services with high efficiency, high flexibility, and low cost. For the benefit of passengers, multitenancy can increase market competition leading to lower service charge and higher quality of service. In this paper, we study the pricing issue of the multi-tenant AV public transportation system and three types of services are defined. The pricing process for each service type is modeled as a combinatorial auction, in which the service providers, as bidders, compete for offering transportation services. The winners of the auction are determined through an integer linear program. To prevent the bidders from raising their bids for higher returns, we propose a strategy-proof Vickrey-Clarke-Groves-based charging mechanism, which can maximize the social welfare, to settle the final charges for the customers. We perform extensive simulations to verify the analytical results and evaluate the performance of the charging mechanism.

by Albert Y.S. Lam at March 05, 2015 01:30 AM

Disaggregated and optically interconnected memory: when will it be cost effective?. (arXiv:1503.01416v1 [cs.DC])

The "Disaggregated Server" concept has been proposed for datacenters where the same type server resources are aggregated in their respective pools, for example a compute pool, memory pool, network pool, and a storage pool. Each server is constructed dynamically by allocating the right amount of resources from these pools according to the workload's requirements. Modularity, higher packaging and cooling efficiencies, and higher resource utilization are among the suggested benefits. With the emergence of very large datacenters, "clouds" containing tens of thousands of servers, datacenter efficiency has become an important topic. Few computer chip and systems vendors are working on and making frequent announcements on silicon photonics and disaggregated memory systems.

In this paper we study the trade-off between cost and performance of building a disaggregated memory system where DRAM modules in the datacenter are pooled, for example in memory-only chassis and racks. The compute pool and the memory pool are interconnected by an optical interconnect to overcome the distance and bandwidth issues of electrical fabrics. We construct a simple cost model that includes the cost of latency, cost of bandwidth and the savings expected from a disaggregated memory system. We then identify the level at which a disaggregated memory system becomes cost competitive with a traditional direct attached memory system.

Our analysis shows that a rack-scale disaggregated memory system will have a non-trivial performance penalty, and at the datacenter scale the penalty is impractically high, and the optical interconnect costs are at least a factor of 10 more expensive than where they should be when compared to the traditional direct attached memory systems.

by Bulent Abali, Richard J. Eickemeyer, Hubertus Franke, Chung-Sheng Li, Marc A. Taubenblatt at March 05, 2015 01:30 AM

Hardware Fingerprinting Using HTML5. (arXiv:1503.01408v2 [cs.CR] UPDATED)

Device fingerprinting over the web has received much attention from the research community and the commercial market alike. Almost all the fingerprinting features proposed to date depend on software run on the device. All of these features can be changed by the user, thereby thwarting the device's fingerprint. In this position paper we argue that the recent emergence of the HTML5 standard gives rise to a new class of fingerprinting features that are based on the \emph{hardware} of the device. Such features are much harder to mask or change and thus provide a higher degree of confidence in the fingerprint. We propose several possible fingerprint methods that allow an HTML5 web application to identify a device's hardware. We also present an initial experiment to fingerprint a device's GPU.

by Gabi Nakibly, Gilad Shelef, Shiran Yudilevich at March 05, 2015 01:30 AM

Node.DPWS: High performance and scalable Web Services for the IoT. (arXiv:1503.01398v1 [cs.NI])

Interconnected computing systems, in various forms, are expected to permeate our lives, realizing the vision of the Internet of Things (IoT) and allowing us to enjoy novel, enhanced services that promise to improve our everyday lives. Nevertheless, this new reality also introduces significant challenges in terms of performance, scaling, usability and interoperability. Leveraging the benefits of Service Oriented Architectures (SOAs) can help alleviate many of the issues that developers, implementers and end-users have to face in the context of the IoT. This work presents Node.DPWS, a novel implementation of the Devices Profile for Web Services (DPWS) based on the Node.js platform. Node.DPWS can be used to deploy lightweight, efficient and scalable Web Services over heterogeneous nodes, including devices with limited resources. The performance of the presented work is evaluated on typical embedded devices, including comparisons with implementations created using alternative DPWS toolkits.

by Konstantinos Fysarakis (1), Damianos Mylonakis (2), Charalampos Manifavas (3), Ioannis Papaefstathiou (1) ((1) Dept. of Electronic & Computer Engineering, Technical University of Crete, Greece, (2) Dept. of Computer Science, University of Crete, Greece, (3) Dept. of Informatics Engineering, Technological Educational Institute of Crete, Greece) at March 05, 2015 01:30 AM

BVNS for the k-labelled spanning forest problem. (arXiv:1503.01376v1 [cs.DM])

In this paper we propose an efficient VNS solution for the k-labelled spanning forest problem. This problem is an extension of the Minimum Labelling Spanning Tree Problem, with important applications in telecommunications networks and multimodal transport. Given an undirected graph whose edges are labelled and a positive integer k, the goal is to find a spanning forest with the lowest number of connected components that uses at most k different labels. To address the problem, a Basic Variable Neighbourhood Search (BVNS) is proposed in which the maximum amplitude of the neighbourhood space, n, is a key parameter. Different strategies for setting the value of n are studied. BVNS with the best strategy is experimentally compared with other metaheuristics from the literature that have been applied to this type of problem.

by Sergio Consoli, Nenad Mladenović, José A. Moreno-Pérez at March 05, 2015 01:30 AM

Tensors, !-graphs, and non-commutative quantum structures (extended version). (arXiv:1503.01348v1 [cs.LO])

!-graphs provide a means of reasoning about infinite families of string diagrams and have proven useful in manipulation of (co)algebraic structures like Hopf algebras, Frobenius algebras, and compositions thereof. However, they have previously been limited by an inability to express families of diagrams involving non-commutative structures which play a central role in algebraic quantum information and the theory of quantum groups. In this paper, we fix this shortcoming by offering a new semantics for non-commutative !-graphs using an enriched version of Penrose's abstract tensor notation.

by Aleks Kissinger, David Quick at March 05, 2015 01:30 AM

An Incentivized Approach for Fair Participation in Wireless Ad hoc Networks. (arXiv:1503.01314v1 [cs.NI])

In Wireless Ad hoc networks (WANETs), nodes separated by considerable distance communicate with each other by relaying their messages through other nodes. However, it might not be in the best interests of a node to forward the message of another node due to power constraints. In addition, all nodes being rational, some nodes may be selfish, i.e. they might not relay data from other nodes so as to increase their lifetime. In this paper, we present a fair and incentivized approach for participation in Ad hoc networks. Given the power required for each transmission, we are able to determine the power saving contributed by each intermediate hop. We propose the FAir Share incenTivizEd Ad hoc paRticipation protocol (FASTER), which takes a selected route from a routing protocol as input, to calculate the worth of each node using the cooperative game theory concept of 'Shapley Value' applied on the power saved by each node. This value can be used for allocation of Virtual Currency to the nodes, which can be spent on subsequent message transmissions.

by Arka Rai Choudhuri, Kalyanasundaram S, Shriyak Sridhar, Annappa B at March 05, 2015 01:30 AM

Game-theoretic Approach for Non-Cooperative Planning. (arXiv:1503.01288v1 [cs.AI])

When two or more self-interested agents put their plans to execution in the same environment, conflicts may arise as a consequence, for instance, of a common utilization of resources. In this case, an agent can postpone the execution of a particular action, if this punctually solves the conflict, or it can resort to execute a different plan if the agent's payoff significantly diminishes due to the action deferral. In this paper, we present a game-theoretic approach to non-cooperative planning that helps predict before execution what plan schedules agents will adopt so that the set of strategies of all agents constitute a Nash equilibrium. We perform some experiments and discuss the solutions obtained with our game-theoretical approach, analyzing how the conflicts between the plans determine the strategic behavior of the agents.

by Jaume Jordán, Eva Onaindia at March 05, 2015 01:30 AM

Electric Vehicles Charging Control based on Future Internet Generic Enablers. (arXiv:1503.01267v1 [cs.NI])

In this paper a rationale for the deployment of Future Internet based applications in the field of Electric Vehicles (EVs) smart charging is presented. The focus is on the Connected Device Interface (CDI) Generic Enabler (GE) and the Network Information and Controller (NetIC) GE, which are recognized to have a potential impact on the charging control problem and the configuration of communications networks within reconfigurable clusters of charging points. The CDI GE can be used for capturing the driver feedback in terms of Quality of Experience (QoE) in those situations where the charging power is abruptly limited as a consequence of short term grid needs, like the shedding action asked by the Transmission System Operator to the Distribution System Operator aimed at clearing networks contingencies due to the loss of a transmission line or large wind power fluctuations. The NetIC GE can be used when a master Electric Vehicle Supply Equipment (EVSE) hosts the Load Area Controller, responsible for managing simultaneous charging sessions within a given Load Area (LA); the reconfiguration of distribution grid topology results in shift of EVSEs among LAs, then reallocation of slave EVSEs is needed. Involved actors, equipment, communications and processes are identified through the standardized framework provided by the Smart Grid Architecture Model (SGAM).

by Andrea Lanna, Francesco Liberati, Letterio Zuccaro, Alessandro Di Giorgio at March 05, 2015 01:30 AM

Competitive Diffusion in Social Networks: Quality or Seeding?. (arXiv:1503.01220v1 [cs.GT])

In this paper, we study a strategic model of marketing and product consumption in social networks. We consider two firms in a market competing to maximize the consumption of their products. Firms have a limited budget which can be either invested on the quality of the product or spent on initial seeding in the network in order to better facilitate spread of the product. After the decision of firms, agents choose their consumptions following a myopic best response dynamics which results in a local, linear update for their consumption decision. We characterize the unique Nash equilibrium of the game between firms and study the effect of the budgets as well as the network structure on the optimal allocation. We show that at the equilibrium, firms invest more budget on quality when their budgets are close to each other. However, as the gap between budgets widens, competition in qualities becomes less effective and firms spend more of their budget on seeding. We also show that given equal budget of firms, if seeding budget is nonzero for a balanced graph, it will also be nonzero for any other graph, and if seeding budget is zero for a star graph it will be zero for any other graph as well. As a practical extension, we then consider a case where products have some preset qualities that can be only improved marginally. At some point in time, firms learn about the network structure and decide to utilize a limited budget to mount their market share by either improving the quality or new seeding some agents to incline consumers towards their products. We show that the optimal budget allocation in this case simplifies to a threshold strategy. Interestingly, we derive similar results to that of the original problem, in which preset qualities simulate the role that budgets had in the original setup.

by Arastoo Fazeli, Amir Ajorlou, Ali Jadbabaie at March 05, 2015 01:30 AM

Building a RAPPOR with the Unknown: Privacy-Preserving Learning of Associations and Data Dictionaries. (arXiv:1503.01214v1 [cs.CR])

Techniques based on randomized response enable the collection of potentially sensitive data from clients in a privacy-preserving manner with strong local differential privacy guarantees. One of the latest such technologies, RAPPOR, allows the marginal frequencies of an arbitrary set of strings to be estimated via privacy-preserving crowdsourcing. However, this original estimation process requires a known set of possible strings; in practice, this dictionary can often be extremely large and sometimes completely unknown.

In this paper, we propose a novel decoding algorithm for the RAPPOR mechanism that enables the estimation of "unknown unknowns," i.e., strings we do not even know we should be estimating. To enable learning without explicit knowledge of the dictionary, we develop methodology for estimating the joint distribution of two or more variables collected with RAPPOR. This is a critical step towards understanding relationships between multiple variables collected in a privacy-preserving manner.

by Giulia Fanti, Vasyl Pihur, Úlfar Erlingsson at March 05, 2015 01:30 AM

Automated detection and classification of cryptographic algorithms in binary programs through machine learning. (arXiv:1503.01186v1 [cs.CR])

Threats from the internet, particularly malicious software (i.e., malware) often use cryptographic algorithms to disguise their actions and even to take control of a victim's system (as in the case of ransomware). Malware and other threats proliferate too quickly for the time-consuming traditional methods of binary analysis to be effective. By automating detection and classification of cryptographic algorithms, we can speed program analysis and more efficiently combat malware.

This thesis will present several methods of leveraging machine learning to automatically discover and classify cryptographic algorithms in compiled binary programs.

While further work is necessary to fully evaluate these methods on real-world binary programs, the results in this paper suggest that machine learning can be used successfully to detect and identify cryptographic primitives in compiled code. Currently, these techniques successfully detect and classify cryptographic algorithms in small single-purpose programs, and further work is proposed to apply them to real-world examples.

by Diane Duros Hosfelt at March 05, 2015 01:30 AM

Tractability Frontier of Data Complexity in Team Semantics. (arXiv:1503.01144v1 [cs.LO])

We study the data complexity of model-checking for logics with team semantics. For dependence and independence logic, we completely characterize the tractability/intractability frontier of data complexity of both quantifier-free and quantified formulas. For inclusion logic formulas, we reduce the model-checking problem to the satisfiability problem of so-called Dual-Horn propositional formulas. While interesting in its own right, this also provides an alternative proof for the recent result of P. Galliani and L. Hella in 2013 showing that the data complexity of inclusion logic is in PTIME. In the last section we consider the data complexity of inclusion logic under so-called strict semantics.

by Arnaud Durand, Juha Kontinen, Nicolas de Rugy-Altherre, Jouko Väänänen at March 05, 2015 01:30 AM

S-Store: Streaming Meets Transaction Processing. (arXiv:1503.01143v1 [cs.DB])

Stream processing addresses the needs of real-time applications. Transaction processing addresses the coordination and safety of short atomic computations. Heretofore, these two modes of operation existed in separate, stove-piped systems. In this work, we attempt to fuse the two computational paradigms in a single system called S-Store. In this way, S-Store can simultaneously accommodate OLTP and streaming applications. We present a simple transaction model for streams that integrates seamlessly with a traditional OLTP system. We chose to build S-Store as an extension of H-Store, an open-source, in-memory, distributed OLTP database system. By implementing S-Store in this way, we can make use of the transaction processing facilities that H-Store already supports, and we can concentrate on the additional implementation features that are needed to support streaming. Similar implementations could be done using other main-memory OLTP platforms. We show that we can actually achieve higher throughput for streaming workloads in S-Store than an equivalent deployment in H-Store alone. We also show how this can be achieved within H-Store with the addition of a modest amount of new functionality. Furthermore, we compare S-Store to two state-of-the-art streaming systems, Spark Streaming and Storm, and show how S-Store matches and sometimes exceeds their performance while providing stronger transactional guarantees.

by John Meehan, Nesime Tatbul, Stan Zdonik, Cansu Aslantas, Ugur Cetintemel, Jiang Du, Tim Kraska, Samuel Madden, Andrew Pavlo, Michael Stonebraker, Hao Wang at March 05, 2015 01:30 AM

/r/clojure