Planet Primates

August 03, 2015

CompsciOverflow

Find language (and/or regular expression for automaton)

How do I go about finding a language recognized by the following automaton? And is it possible to find a corresponding regular expression? I've read some posts about the same issue here on CS.SE but none of the approaches seem to give me a solution.

[automaton diagram omitted]

by Robin Haveneers at August 03, 2015 06:55 PM

StackOverflow

Do threaded binary tree structures offer any advantages in Haskell?

I was reading this mailing list post, where someone had a question about a threaded RB tree in Haskell, and the end of the response said:

I suggest you (Lex) either go imperative (with STRef or IORef) or do without threading, unless you're sure that you'll be doing many more lookups and traversals than inserts and deletes.

This implies that, although creating threaded trees in Haskell is generally not a good idea, threading still makes lookups and traversals more efficient without resorting to imperative algorithms.

However, I can't think of a way that threading could make Haskell trees more efficient without using imperative constructs. Is it even possible?

by Sintrastes at August 03, 2015 06:51 PM

Convert java.util.Set to scala.collection.Set

How can I convert a java.util.Set[String] to a scala.collection.Set with a generic type in Scala 2.8.1?

import scala.collection.JavaConversions._

var in : java.util.Set[String] = new java.util.HashSet[String]()

in.add("Oscar")
in.add("Hugo")

val out : scala.collection.immutable.Set[String] = Set(in.toArray : _*)

And this is the error message:

<console>:9: error: type mismatch;  
found   : Array[java.lang.Object]
required: Array[_ <: String]   
val out : scala.collection.immutable.Set[String] = Set(javaset.toArray : _*)

What am I doing wrong?
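For what it's worth, a minimal sketch of one possible workaround, relying on the JavaConversions import already shown above (a suggestion rather than the canonical fix): the implicit conversion yields a Scala collection directly, so there is no need to go through toArray at all.

import scala.collection.JavaConversions._

val in: java.util.Set[String] = new java.util.HashSet[String]()
in.add("Oscar")
in.add("Hugo")

// The implicit conversion views `in` as a scala.collection.mutable.Set[String],
// and toSet then copies it into an immutable Set[String], keeping the element type.
val out: scala.collection.immutable.Set[String] = in.toSet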

by Twistleton at August 03, 2015 06:50 PM

Where is the declaration of `scala.Any`?

I tried to find the declaration of scala.Any, but failed. Where can I find it?

by Freewind at August 03, 2015 06:50 PM

StackOverflow

fluentd, config agent-td send request to webservice

I'm new to fluentd. I wonder whether fluentd/td-agent supports sending an HTTP request.

Example: when fluentd receives a new log entry (from Apache/nginx/Rails), I want to send an HTTP request whose body is the log text to a specific URL (http://my_service.com/api/get_log).

Is it possible?

by Peter_175 at August 03, 2015 06:44 PM

Injecting services into Scala Akka Actors with Google Guice

I have a couple of services that I want to inject into Akka actors. There are three different types of actors I am working with, and each type will use different services. Currently I just have a module, instantiate an injector inside of each actor, and do the binding inside of each crow. The issue is that each actor then receives a new instance of the service.

I did a little bit of reading and found http://www.typesafe.com/activator/template/activator-akka-scala-guice but the documentation for Akka recommends not using IndirectActorProducer. What is the best way for me to inject these services into my actors? The @Inject keyword looks promising but I'm not exactly sure how to use it.

Workflow:

Main creates the commander and sends it a command; the commander creates the three different types of crows and sends them messages to execute (it is these crows that require the services).
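A hedged sketch of one common approach (MyService, MyModule and CrowActor below are made-up names for illustration): build the services once with Guice outside the actors and hand them to each actor through Props, so every crow of a given type shares the same service instance instead of creating its own injector.

import akka.actor.{Actor, ActorSystem, Props}
import com.google.inject.{AbstractModule, Guice}

class MyService { def handle(msg: Any): Unit = println(s"handled $msg") } // hypothetical service
class MyModule extends AbstractModule { override def configure(): Unit = bind(classOf[MyService]) }

// The actor receives its dependencies as plain constructor arguments.
class CrowActor(service: MyService) extends Actor {
  def receive = { case msg => service.handle(msg) }
}

object Main extends App {
  val injector = Guice.createInjector(new MyModule)
  val service  = injector.getInstance(classOf[MyService])

  val system = ActorSystem("crows")
  // Props(classOf[...], args...) lets Akka construct the actor with the injected service.
  val crow = system.actorOf(Props(classOf[CrowActor], service), "crow")
  crow ! "caw"
}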

by jstnchng at August 03, 2015 06:41 PM

Scala Testkit Unit Testing with Akka

I am using TestKit to test some of my classes for a Scala project I am working on involving Akka Actors, and I'm running into this issue:

One or more requested classes are not Suites: poc.PocConstituentsWatcher

The class in question looks like this:

class PocConstituentsWatcher(_system: ActorSystem) extends TestKit(_system) with ImplicitSender with WordSpecLike with Matchers with BeforeAndAfter with BeforeAndAfterAll {

I didn't use to have this issue, because I had

def this() = this(ActorSystem)

but now I define my own ActorSystem via injection, so I have val actorSystem = injector.instance[ActorSystem] instead, and when I do

def this() = this(actorSystem)

I get an error saying it can't find actorSystem. I think it's because the constructor signature is incorrect? Thanks for your help.
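If it helps, a sketch of why the compiler complains and one way around it (the TestInjector object below is hypothetical; anything reachable without an instance of the class would do): an auxiliary constructor runs before the instance exists, so it cannot refer to a val defined on the class itself.

import akka.actor.ActorSystem
import akka.testkit.{ImplicitSender, TestKit}
import org.scalatest.{BeforeAndAfterAll, Matchers, WordSpecLike}

// Hypothetical holder for the injected system; the injector call could live here instead.
object TestInjector {
  lazy val actorSystem: ActorSystem = ActorSystem("poc-test")
}

class PocConstituentsWatcher(_system: ActorSystem) extends TestKit(_system)
    with ImplicitSender with WordSpecLike with Matchers with BeforeAndAfterAll {

  // Legal: refers to a statically reachable value, not to a member of this class.
  def this() = this(TestInjector.actorSystem)

  override def afterAll(): Unit = TestKit.shutdownActorSystem(system)
}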

by jstnchng at August 03, 2015 06:38 PM

QuantOverflow

What market making strategies are often used nowadays?

I am doing a survey of market making strategies. What are the popular market making strategies nowadays?

by ZHI at August 03, 2015 06:36 PM

StackOverflow

Convert list of sortedset to sortedset in scala

(Still new to Scala.) I have a List[SortedSet[A]], and I'd like a single SortedSet[A] containing all the (unique, sorted) elements. How should I do that?

My goal is: I have a class, say Container, that contains a list of Element and a list of (sub)Container. This class should implement a recursive getSortedElements(): SortedSet[Element] method.

So I easily have this invalid code:

case class Container(myElements: List[Element], myContainers: List[Container]){
    def getSortedElements(): SortedSet[Element] =
        SortedSet(myElements) ++ SortedSet(myContainers.map(_.getSortedElements))
}
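A hedged sketch of one way to make this compile, assuming Element has an implicit Ordering in scope (SortedSet requires one): build a sorted set from this container's own elements and fold in the recursive results.

import scala.collection.immutable.SortedSet

case class Element(name: String)
object Element {
  // SortedSet needs an Ordering for its element type.
  implicit val ordering: Ordering[Element] = Ordering.by((e: Element) => e.name)
}

case class Container(myElements: List[Element], myContainers: List[Container]) {
  def getSortedElements(): SortedSet[Element] =
    myContainers.foldLeft(SortedSet(myElements: _*))(_ ++ _.getSortedElements())
}

SortedSet's ++ keeps the result sorted and drops duplicates, so the fold yields the single unique, sorted set described above.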

by Juh_ at August 03, 2015 06:27 PM

Play 2.3.8 - Unable to start Kamon 0.4.0

I am currently trying to integrate Kamon 0.4.0 with my Play (Scala) application (great framework, by the way!).

Here is what I did:

  • Added the following dependencies in my build.sbt:

    libraryDependencies ++= Seq(
      jdbc,
      anorm,
      "com.typesafe.play" %% "play-mailer" % "2.4.0",
      "org.bouncycastle" % "bcpkix-jdk15on" % "1.51",
      "org.bouncycastle" % "bcprov-jdk15on" % "1.51",
      "com.github.nscala-time" %% "nscala-time" % "1.8.0",
      "io.kamon" %% "kamon-core" % "0.4.0",
      "io.kamon" %% "kamon-play" % "0.4.0",
      "org.aspectj" % "aspectjweaver" % "1.8.6"
    )

  • Starting and shutting down Kamon in Global.scala

    object Global extends GlobalSettings {

      override def onStart(app: Application) {
        val hsmProxyName = Play.current.configuration.getString("ngocspd.hsm.proxy.name").get
        val supervisorName = Play.current.configuration.getString("ngocspd.ocspd.supervisor.name").get
        val notifierName = Play.current.configuration.getString("ngocspd.notification.name").get
        java.security.Security.addProvider(new BouncyCastleProvider)
        // Starting Kamon
        Kamon.start()
        Akka.system.actorOf(Props[NotificationActor], name = notifierName)
        Akka.system.actorOf(Props[HardwareSecurityModuleProxyActor], name = hsmProxyName)
        Akka.system.actorOf(Props[OCSPdActor], name = supervisorName)
      }

      override def onStop(app: Application) {
        Kamon.shutdown()
      }
    }

  • Starting activator with the path of the AspectJ agent:

    activator -J-javaagent:/Users/pantin/.ivy2/cache/org.aspectj/aspectjweaver/jars/aspectjweaver-1.8.6.jar

I am encountering two problems.

First, when running the app, the following exceptions are thrown repeatedly:

[error] o.a.w.b.BcelWorld - Unable to find class 'scala.concurrent.impl.Future.PromiseCompletingRunnable' in repository
java.lang.ClassNotFoundException: scala.concurrent.impl.Future.PromiseCompletingRunnable not found - unable to determine URL
    at org.aspectj.apache.bcel.util.ClassLoaderRepository.loadClass(ClassLoaderRepository.java:292) ~[aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.bcel.BcelWorld.lookupJavaClass(BcelWorld.java:418) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.bcel.BcelWorld.resolveDelegate(BcelWorld.java:392) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.ltw.LTWWorld.resolveDelegate(LTWWorld.java:107) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.World.resolveToReferenceType(World.java:477) [aspectjweaver-1.8.6.jar:1.8.6]
[error] o.a.w.b.BcelWorld - Unable to find class 'akka.event.Logging.LogEvent' in repository
java.lang.ClassNotFoundException: akka.event.Logging.LogEvent not found - unable to determine URL
    at org.aspectj.apache.bcel.util.ClassLoaderRepository.loadClass(ClassLoaderRepository.java:292) ~[aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.bcel.BcelWorld.lookupJavaClass(BcelWorld.java:418) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.bcel.BcelWorld.resolveDelegate(BcelWorld.java:392) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.ltw.LTWWorld.resolveDelegate(LTWWorld.java:107) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.World.resolveToReferenceType(World.java:477) [aspectjweaver-1.8.6.jar:1.8.6]
[error] o.a.w.b.BcelWorld - Unable to find class 'akka.dispatch.Dispatcher.LazyExecutorServiceDelegate' in repository
java.lang.ClassNotFoundException: akka.dispatch.Dispatcher.LazyExecutorServiceDelegate not found - unable to determine URL
    at org.aspectj.apache.bcel.util.ClassLoaderRepository.loadClass(ClassLoaderRepository.java:292) ~[aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.bcel.BcelWorld.lookupJavaClass(BcelWorld.java:418) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.bcel.BcelWorld.resolveDelegate(BcelWorld.java:392) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.ltw.LTWWorld.resolveDelegate(LTWWorld.java:107) [aspectjweaver-1.8.6.jar:1.8.6]
    at org.aspectj.weaver.World.resolveToReferenceType(World.java:477) [aspectjweaver-1.8.6.jar:1.8.6]

Is there a way to get rid of these exceptions?

Second, the application then crashes with the following stack trace:

play.api.UnexpectedException: Unexpected exception[ConfigurationException: Could not start logger due to [akka.ConfigurationException: Logger specified in config can't be loaded [akka.event.Logging$DefaultLogger] due to [java.lang.RuntimeException: Cannot retrieve extensions while Kamon is being initialized.]]]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(ApplicationProvider.scala:166) ~[play_2.11-2.3.8.jar:2.3.8]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(ApplicationProvider.scala:130) ~[play_2.11-2.3.8.jar:2.3.8]
    at scala.Option.map(Option.scala:146) ~[scala-library-2.11.6.jar:na]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1.apply(ApplicationProvider.scala:130) ~[play_2.11-2.3.8.jar:2.3.8]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1.apply(ApplicationProvider.scala:128) ~[play_2.11-2.3.8.jar:2.3.8]
Caused by: akka.ConfigurationException: Could not start logger due to [akka.ConfigurationException: Logger specified in config can't be loaded [akka.event.Logging$DefaultLogger] due to [java.lang.RuntimeException: Cannot retrieve extensions while Kamon is being initialized.]]
    at akka.event.LoggingBus$class.startDefaultLoggers(Logging.scala:144) ~[akka-actor_2.11-2.3.9.jar:na]
    at akka.event.EventStream.startDefaultLoggers(EventStream.scala:26) ~[akka-actor_2.11-2.3.9.jar:na]
    at akka.actor.LocalActorRefProvider.init(ActorRefProvider.scala:622) ~[akka-actor_2.11-2.3.9.jar:na]
    at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:619) ~[akka-actor_2.11-2.3.9.jar:na]
    at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:616) ~[akka-actor_2.11-2.3.9.jar:na]

I looked around on Stack Overflow and Google, but did not find any solution to my problem.

If someone would be kind enough to help, I would greatly appreciate it!

by user3849748 at August 03, 2015 06:25 PM

return nested fields using elastic4s

I have data stored with a nested location object and can't figure out how to get elastic4s to return the location as part of the result of a search. I have data that when queried (from the REST endpoint) looks like this:

{
    "_index": "us_large_cities",
    "_type": "city",
    "_id": "AU7ke-xU_N_KRYZ5Iii_",
    "_score": 1,
    "_source": {
        "city": "Oakland",
        "state": "CA",
        "location": {
            "lat": "37.8043722",
            "lon": "-122.2708026"
        }
    }
}

When I try querying it using elastic4s like so:

search in "us_large_cities"->"city" fields("location", "city", ) query {
filteredQuery filter {
  geoPolygon("location") point(37.9, -122.31) point(37.8, -122.31) point(37.8, -122.25) point(37.9, -122.25)
}

I get back results like this:

{
  "_index" : "us_large_cities",
  "_type" : "city",
  "_id" : "AU7keH9l_N_KRYZ5Iig0",
  "_score" : 1.0,
  "fields" : {
    "city" : [ "Berkeley" ]
  }
}

I would expect to see "location" there but don't. Does anyone know how to specify the fields so that I can actually get the location back?

by Danny Hatcher at August 03, 2015 06:24 PM

CompsciOverflow

Graph partitioning algorithm that doesn't minimize edge cuts

I have a set of planar graphs, and I wish to partition each one $k$ ways such that the sum of the weights within each partition is the same and every partition is connected.

Are there algorithms that do this, such that they can easily be injected with some degree of randomness (this is the real point)?

I have tried some open-source software (like METIS's gpmetis), which is based on KL refinement, but it has the drawback that it must minimize either edge cuts or communication volume. Running it on a graph where all edges have weight 0 did not help.

If it matters, the algorithm only needs to deal with unweighted, undirected, planar graphs.

Are there such things?

by soandos at August 03, 2015 06:16 PM

StackOverflow

spray.io debugging directives - converting rejection to StatusCodes

I am using logRequestResponse debugging directive in order to log every request/response failing through whole path tree. Log entries looks as follows:

2015-07-29 14:03:13,643 [INFO ] [DataImportServices-akka.actor.default-dispatcher-6] [akka.actor.ActorSystemImpl]ActorSystem(DataImportServices) - get-userr: Response for
  Request : HttpRequest(POST,https://localhost:8080/city/v1/transaction/1234,List(Accept-Language: cs, Accept-Encoding: gzip,...
  Response: Rejected(List(MalformedRequestContentRejection(Protocol message tag had invalid wire type.,...

My root route trait, which assembles all partial routes into one, looks as follows:

trait RestAPI extends Directives {
  this: ServiceActors with Core =>

  private implicit val _ = system.dispatcher

  val route: Route =
      logRequestResponse("log-activity", Logging.InfoLevel) {
        new CountryImportServiceApi().route ~
        new CityImportServiceApi().route
  }
}

And the partial routes are defined as follows:

class CinemaImportServiceApi()(implicit executionContext: ExecutionContext) extends Directives {

  implicit val timeout = Timeout(15 seconds)    

  val route: Route = {
    pathPrefix("city") {
      pathPrefix("v1") {
        path("transaction" / Segment ) {
          (siteId: String, transactionId: String) =>
            post {
              authenticate(BasicAuth(cityUserPasswordAuthenticator _, realm = "bd city import api")) {
                user =>
                  entity(as[CityTrans]) { e =>
                    complete {
                      StatusCodes.OK

                    }
                  }
              }
            }
        }
      }
    }
  }
}

Assembled routes are run via HttpServiceActor runRoute.

I would like to convert the rejection to a StatusCode and log that via logRequestResponse. Even though I wrote a custom function for logging, I still get the rejection. What seems fishy to me is that, even though the whole route tree is wrapped, the rejection is still not converted to an HttpResponse. In tests we seal the route in order to convert a Rejection to an HttpResponse. Is there a way to mark a route as a complete route and thus actually seal it? Or am I missing some important concept here?

Thx
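For reference, a sketch of one way to seal at this level (not verified against this exact spray version, though handleRejections and RejectionHandler.Default are standard spray-routing pieces): wrapping the inner routes in handleRejections turns rejections into HttpResponses before logRequestResponse sees them, which is roughly what runRoute's sealing does one level higher.

val route: Route =
  logRequestResponse("log-activity", Logging.InfoLevel) {
    handleRejections(RejectionHandler.Default) {
      new CountryImportServiceApi().route ~
      new CityImportServiceApi().route
    }
  }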

by jaksky at August 03, 2015 06:14 PM

TheoryOverflow

A question on GCT

In the paper 'On vanishing of Kronecker coefficients' (http://arxiv.org/pdf/1507.02955v1.pdf), it is shown that deciding positivity of Kronecker coefficients is in general NP-hard. However, there is a caveat: only positivity of 'rectangular Kronecker coefficients' is needed in GCT. What is the implication for GCT if this also turns out to be NP-hard?

A related question: what is the consequence for GCT if there is no general positive formula akin to the one for the special case of LR coefficients?

by Turbo at August 03, 2015 06:11 PM

StackOverflow

Make =:= commutative?

It just makes sense that =:= should be commutative: A =:= B implies B =:= A. I was wondering if there is a way to make Scala understand this. To elaborate, if I provide Scala with

implicit def typeEqCommutes[A, B](ev: A =:= B): B =:= A = ev.asInstanceOf[B =:= A]

then is there a way to make the following compile:

class Pair[T, S](var first: T, var second: S) {
  def swap(implicit ev: T =:= S) {
    val temp = first
    first = second // error: type mismatch; found: S required: T
    second = temp
  }
}
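A sketch of the usual workaround: instead of relying on the compiler to chain the commutativity implicit, ask for evidence in both directions and apply it explicitly (an A =:= B value is also a function A => B).

class Pair[T, S](var first: T, var second: S) {
  def swap(implicit ev: T =:= S, ev2: S =:= T): Unit = {
    val temp = first
    first = ev2(second) // S => T via the evidence
    second = ev(temp)   // T => S via the evidence
  }
}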

by ashy_32bit at August 03, 2015 06:10 PM

Exception in thread "main" org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't find leaders for Set()-Spark Steaming-kafka

I am working on a data pipeline which takes tweets from Twitter4j -> publishes those tweets to a topic in Kafka -> Spark Streaming subscribes to that topic for processing. But when I run the code I get this exception:

Exception in thread "main" org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't find leaders for Set([LiveTweets,0])

The code is -

import java.util.HashMap
import java.util.Properties
import twitter4j._
import twitter4j.FilterQuery;
import twitter4j.StallWarning;
import twitter4j.Status;
import twitter4j.StatusDeletionNotice;
import twitter4j.StatusListener;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;
import twitter4j.conf.ConfigurationBuilder;
import twitter4j.json.DataObjectFactory;
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka._
import kafka.javaapi.producer.Producer
import kafka.producer.{KeyedMessage, ProducerConfig}
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._ 

object TwitterPopularTags {
    def main(args: Array[String]) {

            /** Information necessary for accessing the Twitter API */
        val consumerKey= ""
        val consumerSecret= ""
        val accessToken= ""
        val accessTokenSecret = ""
        val cb = new ConfigurationBuilder()
        cb.setOAuthConsumerKey(consumerKey)
        cb.setOAuthConsumerSecret(consumerSecret)
        cb.setOAuthAccessToken(accessToken)
        cb.setOAuthAccessTokenSecret(accessTokenSecret)
        cb.setJSONStoreEnabled(true)
        cb.setIncludeEntitiesEnabled(true)
        def twitterStream = new TwitterStreamFactory(cb.build()).getInstance()      

        val KafkaTopic = "LiveTweets"
        /* kafka producer properties */
        val kafkaProducer = {
                        val props = new Properties()
                        props.put("metadata.broker.list", "localhost:9092")
                        props.put("serializer.class", "kafka.serializer.StringEncoder")
                        props.put("request.required.acks", "1")
                        val config = new ProducerConfig(props)
                        new Producer[String, String](config)
                     }


        /* Invoked when a new tweet comes */
        val listener = new StatusListener() { 

                           override def onStatus(status: Status): Unit = {
                               val msg = new KeyedMessage[String, String](KafkaTopic,DataObjectFactory.getRawJSON(status))
                               kafkaProducer.send(msg)
              }
                                  override def onException(ex: Exception): Unit = throw ex

                  // no-op for the following events
                  override def onStallWarning(warning: StallWarning): Unit = {}
                  override def onDeletionNotice(statusDeletionNotice: StatusDeletionNotice): Unit = {}
                  override def onScrubGeo(userId: Long, upToStatusId: Long): Unit = {}
                  override def onTrackLimitationNotice(numberOfLimitedStatuses: Int): Unit = {}
        }

        twitterStream.addListener(listener)
        // Create Spark Streaming context
        val sparkConf = new SparkConf().setAppName("Twitter-Kafka-Spark Streaming")
        val sc = new SparkContext(sparkConf)
        val ssc = new StreamingContext(sc, Seconds(2))

        // Define the Kafka parameters, broker list must be specified
        val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
        val topics = Set(KafkaTopic)

        // Create the direct stream with the Kafka parameters and topics
        val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc,kafkaParams,topics)
        val tweets = kafkaStream.map(_._2)
        tweets.print()
        ssc.start();
        ssc.awaitTermination();

  }

}

and the stack trace is -

Exception in thread "main" org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't find leaders for Set([LiveTweets,0])
    at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:413)
    at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:413)
    at scala.util.Either.fold(Either.scala:97)
    at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:412)
    at TwitterPopularTags$.main(TwitterPopularTags.scala:98)
    at TwitterPopularTags.main(TwitterPopularTags.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15/08/03 11:34:54 WARN DFSClient: Unable to persist blocks in hflush for /user/spark/applicationHistory/local-1438619692937.inprogress
java.io.IOException: The client is stopped
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1500)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy19.fsync(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.fsync(ClientNamenodeProtocolTranslatorPB.java:814)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy20.fsync(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2067)
    at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1959)
    at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener.onBlockManagerAdded(EventLoggingListener.scala:171)
    at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:46)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:53)
    at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:36)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:76)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:60)
15/08/03 11:34:54 WARN DFSClient: Error while syncing
java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1635)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2074)
    at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1959)
    at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener.onBlockManagerAdded(EventLoggingListener.scala:171)
    at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:46)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:53)
    at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:36)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:76)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:60)
15/08/03 11:34:54 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener.onBlockManagerAdded(EventLoggingListener.scala:171)
    at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:46)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:53)
    at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:36)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:76)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:60)
Caused by: java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1635)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2074)
    at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1959)
    at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
    ... 19 more
15/08/03 11:34:54 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:144)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:144)
    at org.apache.spark.scheduler.EventLoggingListener.onApplicationStart(EventLoggingListener.scala:177)
    at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:52)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
    at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:53)
    at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:36)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:76)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply(AsynchronousListenerBus.scala:61)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
    at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:60)
Caused by: java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:794)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1998)
    at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1959)
    at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
    ... 19 more

Thank you.

by Naren at August 03, 2015 06:08 PM

TheoryOverflow

Is there a method to create a trackable image file?

Is there a method for creating an image file so that it is trackable remotely?

I am considering a method whereby I can create an image to share on the net such that, when it is used, I can easily generate stats on that particular image's use. Think of it almost as an RSS feed for a particular image file, where each use of the file can be followed.

A sample use case would be to create one image per player of a challenge. Each image file is technically that player's 'game piece' and the playing field is the internet. Wherever that user gets the image posted, a dashboard would show utilization of that image at all times (e.g., image used 4000 times, date and time charts showing placement time points, a list of the domains the image is used on, etc.).

by randomblink at August 03, 2015 06:01 PM

StackOverflow

Why do we need "Algebraic Data Types"?

I've read some explanations of Algebraic Data Types:

These articles give very detailed descriptions and code samples.

At first I thought Algebraic Data Types were just for defining some types easily so that we can match them with pattern matching. But after reading these articles, I found that "pattern matching" is not even mentioned there, and the content looks interesting but is far more complex than I expected.

So I have some questions (which are not answered in these articles):

  • Why do we need them, say, in Haskell or Scala?
  • What can we do if we have them, and what can't we do if we don't?
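For concreteness, a minimal sketch of an algebraic data type in Scala: Shape is a sum type of two product types, and pattern matching over it is checked for exhaustiveness by the compiler.

sealed trait Shape                                                 // sum type: a Shape is one of the cases below
case class Circle(radius: Double) extends Shape                    // product type: one field
case class Rectangle(width: Double, height: Double) extends Shape  // product type: two fields

def area(s: Shape): Double = s match {
  case Circle(r)       => math.Pi * r * r
  case Rectangle(w, h) => w * h
  // no default case needed: the compiler knows these are the only Shapes
}

The "algebra" is just this: types are built from sums (a sealed trait with alternatives) and products (case class fields), which is what lets the compiler and the programmer reason about all possible values exhaustively.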

by Freewind at August 03, 2015 05:56 PM

Pass-by-reference hinders gcc from tail call elimination

See BlendingTable::create and BlendingTable::print. Both have the same form of tail recursion, but while create gets optimized into a loop, print does not and causes a stack overflow.

Scroll down to see the fix, which I got from a hint by one of the gcc devs on my bug report of this problem.

#include <cstdlib>
#include <iostream>
#include <memory>
#include <array>
#include <limits>

class System {
public:
    template<typename T, typename... Ts>
    static void print(const T& t, const Ts&... ts) {
        std::cout << t << std::flush;
        print(ts...);
    }

    static void print() {}

    template<typename... Ts>
    static void printLine(const Ts&... ts) {
        print(ts..., '\n');
    }
};

template<typename T, int dimension = 1>
class Array {
private:
    std::unique_ptr<T[]> pointer;
    std::array<int, dimension> sizes;
    int realSize;

public:
    Array() {}

    template<typename... Ns>
    Array(Ns... ns):
    realSize(1) {
        checkArguments(ns...);
        create(1, ns...);
    }

private:
    template<typename... Ns>
    static void checkArguments(Ns...) {
        static_assert(sizeof...(Ns) == dimension, "dimension mismatch");
    }

    template<typename... Ns>
    void create(int d, int n, Ns... ns) {
        realSize *= n;
        sizes[d - 1] = n;
        create(d + 1, ns...);
    }

    void create(int) {
        pointer = std::unique_ptr<T[]>(new T[realSize]);
    }

    int computeSubSize(int d) const {
        if (d == dimension) {
            return 1;
        }
        return sizes[d] * computeSubSize(d + 1);
    }

    template<typename... Ns>
    int getIndex(int d, int n, Ns... ns) const {
        return n * computeSubSize(d) + getIndex(d + 1, ns...);
    }

    int getIndex(int) const {
        return 0;
    }

public:
    template<typename... Ns>
    T& operator()(Ns... ns) const {
        checkArguments(ns...);
        return pointer[getIndex(1, ns...)];
    }

    int getSize(int d = 1) const {
        return sizes[d - 1];
    }
};

class BlendingTable : public Array<unsigned char, 3> {
private:
    enum {
        SIZE = 0x100,
        FF = SIZE - 1,
    };

public:
    BlendingTable():
    Array<unsigned char, 3>(SIZE, SIZE, SIZE) {
        static_assert(std::numeric_limits<unsigned char>::max() == FF, "unsupported byte format");
        create(FF, FF, FF);
    }

private:
    void create(int dst, int src, int a) {
        (*this)(dst, src, a) = (src * a + dst * (FF - a)) / FF;
        if (a > 0) {
            create(dst, src, a - 1);
        } else if (src > 0) {
            create(dst, src - 1, FF);
        } else if (dst > 0) {
            create(dst - 1, FF, FF);
        } else {
            return;
        }
    }

    void print(int dst, int src, int a) const {
        System::print(static_cast<int>((*this)(FF - dst, FF - src, FF - a)), ' ');
        if (a > 0) {
            print(dst, src, a - 1);
        } else if (src > 0) {
            print(dst, src - 1, FF);
        } else if (dst > 0) {
            print(dst - 1, FF, FF);
        } else {
            System::printLine();
            return;
        }
    }

public:
    void print() const {
        print(FF, FF, FF);
    }
};

int main() {
    BlendingTable().print();
    return EXIT_SUCCESS;
}

Changing the class definition of System from

class System {
public:
    template<typename T, typename... Ts>
    static void print(const T& t, const Ts&... ts) {
        std::cout << t << std::flush;
        print(ts...);
    }

    static void print() {}

    template<typename... Ts>
    static void printLine(const Ts&... ts) {
        print(ts..., '\n');
    }
};

to

class System {
public:
    template<typename T, typename... Ts>
    static void print(T t, Ts... ts) {
        std::cout << t << std::flush;
        print(ts...);
    }

    static void print() {}

    template<typename... Ts>
    static void printLine(Ts... ts) {
        print(ts..., '\n');
    }
};

magically allows gcc to eliminate the tail calls.

Why does passing the function arguments by reference rather than by value make such a big difference to gcc's behaviour? Semantically both versions look the same to me in this case.

by xiver77 at August 03, 2015 05:54 PM

Is Fortify-code scan possible with Scala

Can I use Fortify to scan Scala code or the generated Java (jar) files? I know that the jar option is technically possible, but are there any known challenges with respect to the generated Java code?

by Hawk66 at August 03, 2015 05:43 PM

AWS

AWS Week in Review – July 27, 2015

Let’s take a quick look at what happened in AWS-land last week:

Monday, July 27
Tuesday, July 28
Wednesday, July 29
Thursday, July 30
Friday, July 31

New & Notable Open Source

New SlideShare Content

New Marketplace Applications

New YouTube Videos

Upcoming Events

Upcoming Events at the AWS Loft (San Francisco)

Upcoming Events at the AWS Loft (New York)

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

by Jeff Barr at August 03, 2015 05:39 PM

CompsciOverflow

Reference for Elliptical Basis Function Neural Networks

I found the following assertion in a neural networks FAQ:

Radial networks typically have only one hidden layer, but it can be useful to include a linear layer for dimensionality reduction or oblique rotation before the RBF layer

But I could not find any "formal" reference (published work) supporting it. I found some papers describing EBFNNs, but they implement full covariance matrices on the RBF units; I could not find anything about this approach with an extra linear layer before the RBF layer. The theory is fine by me: it makes sense and works. What I need is a published work with this information. Is there one?

by rcpinto at August 03, 2015 05:31 PM

StackOverflow

clojure precision counter

How do I determine the number of digits after the decimal point? I want to get the precision of each value in a vector.

[ 1.6712 2.053 3.52 ]
;;1.6712 => 4
;;2.053 => 3
;;3.52 => 2

by Ezekiel at August 03, 2015 05:27 PM

Spark joinWithCassandraTable() on map multiple partition key ERROR

I'm trying to filter on a small part of a huge Cassandra table by using:

val snapshotsFiltered = sc.parallelize(startDate to endDate).map(TableKey(_2)).joinWithCassandraTable("listener","snapshots_test_b")

I want to map the rows of the Cassandra table on the 'created' column, which is part of the partition key.

My table key (the partition key of the table) defined as:

case class TableKey(imei: String, created: Long, when: Long)

The result is an error:

[error] /home/ubuntu/scala/test/test.scala:61: not enough arguments for method apply: (imei: String, created: Long)test.TableKey in object TableKey.
[error] Unspecified value parameter created.
[error] val snapshotsFiltered = sc.parallelize(startDate to endDate).map(TableKey(_2)).joinWithCassandraTable("listener","snapshots_test_b")
[error]                                                                           ^
[error] one error found
[error] (compile:compile) Compilation failed

It worked when there was only one column in the partition key, as in the documentation.

Why is there a problem with a multi-column partition key?
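A hedged sketch of what the compile error suggests (the imeis collection below is made up for illustration): the key objects passed to joinWithCassandraTable apparently need a value for every partition-key column, so the RDD has to carry the imei values as well as the created timestamps.

import com.datastax.spark.connector._

case class TableKey(imei: String, created: Long)

// `imeis` is hypothetical: some known collection of imei values to look up.
val snapshotsFiltered =
  sc.parallelize(startDate to endDate)
    .flatMap(created => imeis.map(imei => TableKey(imei, created)))
    .joinWithCassandraTable("listener", "snapshots_test_b")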

by Rada at August 03, 2015 05:27 PM

Using sbt plugin from Bintray

I'm experiencing a kind of impedance mismatch between sbt and the bintray-sbt plugin. The plugin is published via bintray-sbt at https://bintray.com/artifact/download/synapse/sbt-plugins/me/synapse/my-sbt-plugin/0.0.1/my-sbt-plugin-0.0.1.pom (publishMavenStyle is set to true; if set to false, a different directory structure is created, but still not the one sbt expects). The test project has

resolvers += Resolver.bintrayRepo("synapse", "sbt-plugins")

addSbtPlugin("me.synapse" % "my-sbt-plugin" % "0.0.1")

in project/plugins.sbt and sbt tries to download https://dl.bintray.com/synapse/sbt-plugins/me/synapse/my-sbt-plugin_2.10_0.13/0.0.1/my-sbt-plugin-0.0.1.pom

What settings should be used in the plugin's build definition to a) be able to test it from the current repository and b) be able to link it to the sbt-plugin-releases repo when the time comes?
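A sketch of the plugin-side settings that seem relevant here, not verified against a particular bintray-sbt version: sbt resolves plugins through ivy-style paths that include the Scala and sbt cross-versions (the my-sbt-plugin_2.10_0.13 segment above), so the artifact has to be published in that layout rather than the plain Maven one.

// build.sbt of the plugin project
sbtPlugin := true

// Publish in the ivy-style, cross-versioned layout that sbt's plugin resolver expects.
publishMavenStyle := false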

by synapse at August 03, 2015 05:26 PM

Non blocking tail recursive

I have a tail recursive implementation as below

@tailrec
def generate() : String = {
  val token = UUID.randomUUID().toString
  val isTokenExist = Await.result(get(token), 5.seconds).isDefined
  if(isTokenExist) generate()
  else token
}

get(token) will return a Future[Option[Token]].

I know that blocking inside a Future is not good. I tried to return a Future[String] instead of a String, but it seems that's not possible unless I wait for isTokenExist to complete.

Any other way/suggestion to implement this?
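One non-blocking sketch, assuming get(token): Future[Option[Token]] as above: chain the lookup with flatMap so the retry happens when the future completes, without Await. It can no longer be @tailrec, but each retry runs in a fresh future, so the stack does not grow.

import java.util.UUID
import scala.concurrent.{ExecutionContext, Future}

def generate()(implicit ec: ExecutionContext): Future[String] = {
  val token = UUID.randomUUID().toString
  get(token).flatMap {
    case Some(_) => generate()               // token already exists, try another one
    case None    => Future.successful(token) // free token, we're done
  }
}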

by Allan at August 03, 2015 05:24 PM

Declaring multiple variables in Scala

I'd like to use val to declare multiple variables like this:

val a = 1, b = 2, c = 3

But for whatever reason, it's a syntax error, so I ended up using either:

val a = 1
val b = 2
val c = 3

or

val a = 1; val b = 2; val c = 3;

I personally find both options overly verbose and kind of ugly.

Is there a better option?

Also, I know Scala is a very well thought-out language, so why isn't the val a = 1, b = 2, c = 3 syntax allowed?
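One form that does compile is a tuple pattern definition, which binds several names in a single val (a sketch of the usual workaround rather than true multi-declaration syntax):

// binds a = 1, b = 2, c = 3 through pattern matching on the tuple
val (a, b, c) = (1, 2, 3)

// also legal, but gives every name the same value
val x, y, z = 0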

by Xavi at August 03, 2015 05:18 PM

Clojure - Exception handling

I want my web page to behave as expected when a function throws an exception. Right now (openid/validate r) doesn't return a value unless a user is redirected from /login. But since this is the home page, I want it to also be visible without a problem. So I wonder why the "if" there doesn't work as I expect?

java.lang.NullPointerException at /
NullPointerException [no message]

tfs.routes.home/[fn]
home.clj, line 48

--> line 48
(if (openid/validate r)

So instead of NullPointerException, I expected (home-page r) to execute.

(openid/validate r) itself fails and causes an exception because it's not fed the parameters it needs. Is that the cause? If so, how can I fix it? If not, why doesn't my (home-page r) function get executed?

(if (openid/validate r)
     (show-response (:params r))
     (home-page r))

by Fiat Pax at August 03, 2015 05:17 PM

TheoryOverflow

Approximation for distance problems

Some problems in genome rearrangements ask for a number $c$ which is a similarity measure between two given genomes. The distance between these two genomes is given by another number, typically $n - c$, where $n$ is a number related to the size of the input and $1 \leq c \leq n$. There are many variations of the problem. Suppose, for instance, that the problem of finding the similarity $c$ is NP-hard. Then finding the distance $n - c$ is also NP-hard. My question is about approximating the distance: since computing $n - c$ is NP-hard, even when the distance is zero, how could one obtain an approximation to the distance problem? Notice that when the optimum distance is very close to zero, the approximation factor of any heuristic explodes...

by Fabio at August 03, 2015 05:16 PM

StackOverflow

Specs2: type mismatch compilation error on collection matchers

Using specs2 2.3.12, Scala 2.11.6. I'm seeing a type mismatch error on an example I feel I otherwise followed from the documentation. Code below:

val newUsers: Seq[(String, User)] = response.newUsers.toSeq
newUsers must contain((email: String, user: User) => (user.email.toString must be_==(email))).forall

I'm getting the following error:

[error] <redacted>/UserSpec.scala:561: type mismatch;
[error]  found   : org.specs2.matcher.ContainWithResult[(String, com.nitro.models.User) => org.specs2.matcher.MatchResult[String]]
[error]  required: org.specs2.matcher.Matcher[Seq[(String, com.nitro.models.User)]]
[error]       newUsers must contain((email: String, user: User) => (user.email.toString must be_==(email))).forall
[error]                                                                                                     ^
[error] one error found
[error] (api/test:compileIncremental) Compilation failed
[error] Total time: 2 s, completed Aug 3, 2015 10:07:04 AM

These are the examples I was following:

// contain matcher accepting a function
Seq(1, 2, 3) must contain((i: Int) => i must be_>=(2))
Seq(1, 2, 3) must contain(be_>(0)).forall  // this will stop after the first failure

I can definitely rewrite the test to get around this error, but I wanted to understand where I went wrong. Thanks for any pointers!
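
A hedged rewrite that usually resolves this kind of mismatch (the element type of newUsers is the tuple (String, User), so the check function passed to contain should take that tuple as its single argument rather than two separate arguments):

newUsers must contain((pair: (String, User)) =>
  pair._2.email.toString must be_==(pair._1)).forall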

by gregsilin at August 03, 2015 05:12 PM

Lobsters

Shrimper - Material Design Android client for Lobste.rs

I finally published Shrimper on Google Play - material design Android client for Lobste.rs. Feel free to give it a try and spread the word!

And thanks to everyone who took part in the beta! @lethargicpanda

Comments

by Lethargicpanda at August 03, 2015 05:09 PM

StackOverflow

Subscribe is not working while publishing from Python

Original question is here: https://github.com/JustinTulloss/zeromq.node/issues/444

Hi,

If I subscribe from Node.js to a publisher in Python, the subscriber cannot receive messages. On the other hand, a Node publisher can send to both a Python subscriber and a Node subscriber, and a Python publisher can send to a Python subscriber.

Node subscriber:

// Generated by LiveScript 1.4.0
(function(){
  var zmq, sock;
  zmq = require('zmq');
  sock = zmq.socket('sub');
  sock.connect('tcp://127.0.0.1:3000');
  sock.subscribe('');
  console.log('Subscriber connected to port 3000');
  sock.on('message', function(message){
    return console.log('Received a message related to: ', 'containing message: ', message.toString());
  });
}).call(this);

Node publisher:

// Generated by LiveScript 1.4.0
(function(){
  var zmq, sock;
  zmq = require('zmq');
  sock = zmq.socket('pub');
  sock.bindSync('tcp://127.0.0.1:3000');
  console.log('Publisher bound to port 3000');
  setInterval(function(){
    console.log('Sending a multipart message envelope');
    return sock.send('TestMessage(node)!');
  }, 1500);
}).call(this);

Python publisher

import zmq
import time

context = zmq.Context()
publisher = context.socket(zmq.PUB)
publisher.bind("tcp://127.0.0.1:3000")

while True:
    time.sleep(1)
    publisher.send("TestMessage")
    print "Sended"

Python subscriber:

import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)

socket.setsockopt(zmq.SUBSCRIBE, "")
socket.connect("tcp://127.0.0.1:3000")

while True:
    string = socket.recv()
    print string

by ceremcem at August 03, 2015 05:07 PM

How to use string directive extractor in a nested route in Spray

Answering my own question here because this took me over a day to figure out and it was a really simple gotcha that I think others might run into.

While working on a RESTful-esque service I'm creating using spray, I wanted to match routes that had an alphanumeric id as part of the path. This is what I originally started out with:

case class APIPagination(val page: Option[Int], val perPage: Option[Int])
get {
  pathPrefix("v0" / "things") {
    pathEndOrSingleSlash {
      parameters('page ? 0, 'perPage ? 10).as(APIPagination) { pagination =>
        respondWithMediaType(`application/json`) {
          complete("things")
        }
      }
    } ~ 
    path(Segment) { thingStringId =>
      pathEnd {
        complete(thingStringId)
      } ~
      pathSuffix("subthings") {
        pathEndOrSingleSlash {
          complete("subthings")
        }
      } ~
      pathSuffix("othersubthings") {
        pathEndOrSingleSlash {
          complete("othersubthings")
        }
      } 
    }
  }
} ~ //more routes...

This compiles without issue; however, when using ScalaTest to verify that the routing structure is correct, I was surprised to find this type of output:

"ThingServiceTests:"
"Thing Service Routes should not reject:"
- should /v0/things
- should /v0/things/thingId
- should /v0/things/thingId/subthings *** FAILED ***
  Request was not handled (RouteTest.scala:64)
- should /v0/things/thingId/othersubthings *** FAILED ***
  Request was not handled (RouteTest.scala:64)

What's wrong with my route?
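
A hedged explanation and sketch (based on standard spray-routing semantics rather than the author's own eventual answer): path(Segment) only matches when that single segment is the entire remaining path, so /v0/things/thingId/subthings never reaches the inner pathSuffix directives. Switching the outer directive to pathPrefix(Segment) and matching the children with path normally fixes it; the fragment below would replace the path(Segment) block above:

pathPrefix(Segment) { thingStringId =>
  pathEnd {
    complete(thingStringId)
  } ~
  path("subthings") {
    complete("subthings")
  } ~
  path("othersubthings") {
    complete("othersubthings")
  }
}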

by EdgeCaseBerg at August 03, 2015 05:06 PM

Lobsters

StackOverflow

Scala for Junior Programmers?

We are considering Scala for a new project within our company. We have some junior programmers with only PHP knowledge, and we doubt whether they can handle Scala. What are your opinions? Some say, "Scala is a complicated beast!"; others say, "It's easy once you get it." Maybe someone has real-world experience?

by Traldin at August 03, 2015 05:05 PM

CompsciOverflow

What is the relation between a regular language, $L$, and $\Sigma^*$?

Let's say I have $\Sigma = \{0\}$.

Can a language $L$ be as large as $\Sigma^*$? So $L = \Sigma^*$.

Can a language $L$ be as small as just $\{0\}$? A subset of $\Sigma^*$.

Can multiple languages, $L_1, L_2$, come from the same $\Sigma$, i.e. be subsets of the same $\Sigma^*$?

by fossdeep at August 03, 2015 05:03 PM

StackOverflow

How to convert the Scala case class definition to Haskell?

I'm learning Haskell along with Scala. I tried to define the following Scala type in Haskell, but failed:

sealed trait Expr
case class Value(n: Int) extends Expr
case class Add(e1: Expr, e2: Expr) extends Expr
case class Subtract(e1: Expr, e2: Expr) extends Expr

Could someone give me an example?

by Freewind at August 03, 2015 05:01 PM

QuantOverflow

How to fully replicate ADX + DI Indicators in Excel?

For black box testing, I was hoping that I could replicate the ADX + DI+ and DI- indicators that are provided in trading platforms such as ThinkOrSwim, ScottradeElite etc.

However, I noticed that even after applying the formula I found on Wikipedia, the initially calculated data tends to be off by a large margin when compared to ThinkOrSwim or ScottradeElite.

Example spreadsheet is based on HIMX @ Google Drive. In the spreadsheet, you will see the ADX + DI oscillators that I calculated. Here is the screenshot of my calculated ADX + DI for HIMX from that spreadsheet.

Below are the screenshots of HIMX from ScottradeElite and ThinkOrSwim.

The period for the ADX & DI are 14 periods.

I have been spending countless hours trying to figure out why my calculated results deviate from the existing trading platforms, but I have not been able to spot the mistake. Can anyone help me out by pointing out what I am missing? Thanks a bunch!

by Antony at August 03, 2015 04:57 PM

StackOverflow

Using Gatling with Jenkins for e2e scenarios with scala scripts

Where do I begin with using Gatling integrated with Jenkins? I can get the Gatling recorder to launch, but when I pull up Jenkins I get stuck. I already loaded the Java environment and tried to execute the java war in cmd, but I get nothing.

by Mr. B at August 03, 2015 04:55 PM

zipWithUniqueId() in flambo using clojure

I want to create an RDD such that each row has an index. I tried the following.

Given an rdd:

["a" "b" "c"] 

(defn make-row-index [input]
  (let [{:keys [col]} input]
    (swap! @rdd assoc :rdd (-> (:rdd xctx)
                          (f/map #(vector %1 %2 ) (range))))))

Desired output:

 (["a" 0] ["b" 1] ["c" 2])

I got an arity error, since f/map is used as (f/map rdd fn). I wanted to use something like Apache Spark's zipWithUniqueId(), but I'm lost on how to implement this and I can't find an equivalent function in flambo. Any suggestions and help are appreciated.

Apache-spark zip with Index

Map implementation in flambo

Thanks

by Jyd at August 03, 2015 04:53 PM

CompsciOverflow

Complexity of a modified Clique problem

Consider a similar clique problem where we need to find a clique of size k.

However, what if we adjust the original clique problem?

For example, in the example below, this is our graph G(V,E). What if we iterate over D and E only, look at their neighbours, and find a complete graph of size k? Can that be done in polynomial time?

For example, in the example below, one solution would be D = {A, C} and E = {B}, since A and C form a complete graph and are connected to D, and for E, only B is connected.

Note: This is also a decision problem, for example for input we will be given a graph and k.

enter image description here

by Mark at August 03, 2015 04:52 PM

Can I couple non-terminals in context-free grammars?

If I had productions like so...

$S \rightarrow A1B$

$A \rightarrow \epsilon$

$A \rightarrow 0$

$B \rightarrow \epsilon$

$B \rightarrow 1$

If I only want strings $\{\epsilon1\epsilon, 011\}$, am I allowed to choose when both $A$ and $B$ become $\epsilon$? Or will I get resulting strings such as $\epsilon11$ and $01\epsilon$? (I know the $\epsilon$'s disappear, but I just left them in to avoid confusion.)

If I get the unwanted strings, then how would you force both $A$ and $B$ to be $\epsilon$ only when you need them to be, while still having $A$ and $B$ result in different terminals?
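
A small worked illustration of the issue and the usual fix (a sketch, not the only way to do it): with $S \rightarrow A1B$, $A \rightarrow \epsilon \mid 0$ and $B \rightarrow \epsilon \mid 1$, the choices for $A$ and $B$ are made independently, so the grammar generates $\{1, 01, 11, 011\}$, including the unwanted strings. To couple the two choices, enumerate the allowed combinations in the productions of a single nonterminal, e.g.

$S \rightarrow 1 \mid 011$

(or, keeping the $\epsilon$'s visible, $S \rightarrow \epsilon 1 \epsilon \mid 011$), which generates exactly $\{1, 011\}$.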

by fossdeep at August 03, 2015 04:50 PM

Clique decision problem restricted to a subgraph [on hold]

I know that the clique problem is NP-complete.

However, what if we change the problem a little bit?

For example,

Given a graph $G(V,E)$, an integer $k$ and a subset $S$ of $m$ vertices, we are given a decision problem to find a clique with size $k$ contained within $S$.

A real world example would be something like a social networking graph, where each node is a person and an edge represents that they are friends with each other. Now what if we were to find a clique of size $k$ amongst the set of teenagers?

Does that change the complexity, since we are only looking at a subgraph, i.e. fewer vertices to consider?

Thanks!

by Mark at August 03, 2015 04:49 PM

StackOverflow

Update a field in an Elm-lang record via dot function?

Is it possible to update a field in an Elm record via a function (or some other way) without explicitly specifying the precise field name?

Example:

> fields = { a = 1, b = 2, c = 3 }
> field = .a
> updateField fields val field = { fields | field <- val }
> updateField fields field 5 -- does not work

UPDATE:

To add some context, I'm trying to DRY up the following code:

UpdatePhraseInput contents ->
  let currentInputFields = model.inputFields
  in { model | inputFields <- { currentInputFields | phrase <- contents }}

UpdatePointsInput contents ->
  let currentInputFields = model.inputFields
  in { model | inputFields <- { currentInputFields | points <- contents }}

Would be really nice if I could call a mythical updateInput function like this:

UpdatePhraseInput contents -> updateInput model contents .phrase
UpdatePointsInput contents -> updateInput model contents .points

by seanomlor at August 03, 2015 04:48 PM

/r/scala

/r/compsci

StackOverflow

How to instantiate a CSV/Map to a Scala object

I am currently working on some Scala script. I have a dependency library with some Java classes.

A class looks like this :

public class Animal {
   protected String name;
   protected String animalBreed;
   public String getName() {
      return this.name;
   }

   public void setName(String value) {
      this.name = value;
   }

   public String getAnimalBreed() {
      return this.animalBreed;
   }

   public void setAnimalBreed(String value) {
      this.animalBreed = value;
   }
}

And I have some CSV input files that don't necessarily contain all the fields of the class and might have some other fields that are not defined.

Ex:

name,age
Spyke,2

I already have some code that transforms the CSV into a Map[String,String], but I am looking for a way to instantiate my Animal class "dynamically". By dynamically, I mean automatically setting the fields that are available and skipping the other ones. In this case it would create a new Animal object with a name but no breed and no age.

I don't really know if this is possible in Scala, or what keyword would help me do a Google search; any help is appreciated!
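
One possible sketch using plain Java reflection on setters, assuming the Map[String, String] produced by the existing CSV code, that the target class has a public no-arg constructor (as the Java bean above does), and that the matching fields are Strings; fromCsvRow is a hypothetical helper, not part of any library:

// Instantiate via the no-arg constructor, then call any setter whose name
// matches a CSV column ("name" -> setName); columns without a setter are skipped.
def fromCsvRow[T](row: Map[String, String], clazz: Class[T]): T = {
  val instance = clazz.getDeclaredConstructor().newInstance()
  row.foreach { case (column, value) =>
    val setterName = "set" + column.capitalize
    clazz.getMethods
      .find(m => m.getName == setterName && m.getParameterTypes.toList == List(classOf[String]))
      .foreach(_.invoke(instance, value))
  }
  instance
}

// val animal = fromCsvRow(Map("name" -> "Spyke", "age" -> "2"), classOf[Animal])
// animal.getName is "Spyke"; "age" is ignored because Animal has no setAge.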

by user1418079 at August 03, 2015 04:43 PM

CompsciOverflow

Least number of comparisons needed to sort (order) 5 elements

Find the least number of comparisons needed to sort (order) five elements and devise an algorithm that sorts these elements using this number of comparisons.

Solution: There are 5! = 120 possible outcomes. Therefore a binary tree for the sorting procedure will have at least 7 levels. Indeed, $2^h \geq 120$ implies $h \geq 7$. But 7 comparisons are not enough. The least number of comparisons needed to sort (order) five elements is 8.

Here is my actual question: I did find an algorithm that does it in 8 comparisons, but how can I prove that it can't be done in 7 comparisons?

by Atul Gangwar at August 03, 2015 04:43 PM

StackOverflow

Calculate the standard deviation of grouped data in a Spark DataFrame

I have user logs that I have taken from a csv and converted into a DataFrame in order to leverage the SparkSQL querying features. A single user will create numerous entries per hour, and I would like to gather some basic statistical information for each user; really just the count of the user instances, the average, and the standard deviation of numerous columns. I was able to quickly get the mean and count information by using groupBy($"user") and the aggregator with SparkSQL functions for count and avg:

val meanData = selectedData.groupBy($"user").agg(count($"logOn"),
avg($"transaction"), avg($"submit"), avg($"submitsPerHour"), avg($"replies"),
avg($"repliesPerHour"), avg($"duration"))

However, I cannot seem to find an equally elegant way to calculate the standard deviation. So far I can only calculate it by mapping to a (String, Double) pair and using the StatCounter().stdev utility:

val stdevduration = duration.groupByKey().mapValues(value =>
org.apache.spark.util.StatCounter(value).stdev)

This returns an RDD, however, and I would like to keep it all in a DataFrame so that further queries are possible on the returned data.
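
A hedged sketch of one way to stay inside the DataFrame API (column names and the $ implicits are taken from the question; this assumes the 2015-era API where no built-in stddev aggregate existed yet, so the population standard deviation is derived as sqrt(E[x^2] - E[x]^2)):

import org.apache.spark.sql.functions._

// Everything stays a DataFrame, so further SparkSQL queries remain possible.
val statsPerUser = selectedData
  .groupBy($"user")
  .agg(
    count($"logOn").as("n"),
    avg($"duration").as("mean_duration"),
    avg($"duration" * $"duration").as("mean_duration_sq")
  )
  .select(
    $"user",
    $"n",
    $"mean_duration",
    sqrt($"mean_duration_sq" - $"mean_duration" * $"mean_duration").as("stddev_duration")
  )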

by the3rdNotch at August 03, 2015 04:40 PM

/r/emacs

evil-mode Trial 2 Day 3 (Follow-up)

Generally digging the setup so far. Some of the features, like the HTML tag match, have been really handy. Mostly the problems I've run into have to do with evil-mode not playing nice with some of the other modes; I've been able to work around a few of them.

I'm hoping the ansi-term won't be a deal breaker today.

Pasting

Working in the GUI and I believe terminal as well--don't know why it wasn't working before.

Fuzzy matching file names.

Projectile seems great, and the fuzzy match works better than I expected. M-x projectile-switch-project has a bookmark list it loads that I can add to.

File explorer

I like neotree better than NERDTree so far, probably because I set it up right from the start.

Tabs

I used the elscreen mentioned on the EmacsWiki for EvilMode. It works great with my evil-leader settings.

REPL

I found a REPL for Node with something called swank.js. Looks swanky but haven't set it up yet. I'm a bit confused by inferior/superior/lisp repls and differences between slime/cider and what comes out of the box.

Shell in emacs

The biggest issue I have is pressing ctrl-p/n tries to expand for tags in insert mode. This makes navigating history problematic. If I turn off evil mode then I get stuck in that window, with exception of clicking on another tab. I get into states where exiting emacs is easier than figuring out what I hit.

For some reason, sourcing my bash_profile with M-x shell results in blank screen.

submitted by base698
[link] [1 comment]

August 03, 2015 04:40 PM

StackOverflow

In Clojure how can I convert a String to a number?

I have various strings, some like "45", some like "45px". How do I convert both of these to the number 45?

by Zubair at August 03, 2015 04:37 PM

clojure: Removing maps from lazy-seq of maps

I have a lazy-seq of maps and I'm attempting to remove maps from that lazy-seq based on the return value from another function. The other function will return true or false depending on whether or not a call of get returns a value equal to the parameter. The problem is the function isn't working correctly and I'm not too sure why.

    (defn filter-by-name "Filter by names" [name m]
       (if (= name (get m :name_of_person)) true false)) 
        ;To be called on each map

    (defn remove-nonmatching-values "Remove anything not matching" [filter-val all-maps]
      (map #(remove (filter-by-name filter-val %)) all-maps)) 
       ;trying to call on the lazy seq

by m0butt at August 03, 2015 04:32 PM

Scala: Memory issues

I'm faced with an optimisation challenge. I'm in the middle of writing a Scala based service that uses Play Framework and ElasticSearch to analyse about 200k documents in an index.

Now, the analysis can only be done on all the documents at once, and I have a model class on top of ES, which when passed as a list to another method, draws the analysis on the model class.

Now, to fetch the 200k documents at once and analyse them is out of the question since that is beyond our constraints. So what I did was this – From within a recursive function:

def getOverallAnalytics(accumulatedAnalytics: Map[...], limit: Int, startFrom: Int) = {
    ElasticModel.getAnalytics(limit, startFrom).flatMap({
        case (hasMore, newAnalytics) => {
            val combinedAnalytics = combine(accumulatedAnalytics, newAnalytics)
            if (hasMore) getOverallAnalytics(combinedAnalytics, limit, startFrom + limit)
            else Future(newAccumulator)
        }
    })        
}

And you have:

object ElasticModel {
    getAnalytics(limit, startFrom) = {
        val recordObjects = queryElastic.flatMap(result => new ElasticModel(result))
        Future((haveMore(), getAnalysis(recordObjects))) //
    }
}

Something to that effect. Now, the map containing the analytics has a very small set of keys. Given this, one would not expect to see a

java.lang.OutOfMemoryError:
Unable to create new native thread

Which is running on a 16GB RAM machine.

My assumptions: recordObjects doesn't keep consuming memory once the call to getAnalytics finishes.

That seems like the only possibility where this is going wrong.

What exactly am I doing wrong here?

by Ashesh at August 03, 2015 04:30 PM

typetag of an inner class

I have a function with an argument of type Class[_]. I want to get the fields of that class and then apply something depending on the field's type, and make a recursive call of my function with the field found (if it's a case class). My problem is when I have a Seq of case classes. I tried using TypeTags but I guess I'm doing it the wrong way. Here is my function:

def func1[T: TypeTag]: T = {
      val tag = implicitly[TypeTag[T]] 
      val mirror = tag.mirror
      val clz = mirror.runtimeClass(tag.tpe.typeSymbol.asClass)
      val fieldsTypes = tag.tpe.member(nme.CONSTRUCTOR).asMethod.paramss.head map {_.typeSignature}
      func2(clz,fieldsTypes)
}


def func2[A : TypeTag](clz: Class[_], typlist: List[scala.reflect.runtime.universe.Type], sep: Int = 1): A = {
      val constructor = clz.getConstructors()(0)
      val fieldStrings = source.split(sep.toChar.toString, -1)
      val declaredFields= clz.getDeclaredFields
  tmp zip (e zip (fieldStrings)) foreach {
{
        case (typ,(field ,str)) =>
          field.getType match {
            case t if ((t == classOf[Int]) || (t == classOf[Double]) || (t == classOf[Boolean])) =>
              args = args :+ primitives(t,str)
            case t if t == classOf[String] =>
              args = args :+ str
            case t if classOf[EntitySerializable2].isAssignableFrom(t) =>  
              val spt = str.split((sep + 1).toChar)
              args = args :+ str.func2(t.asInstanceOf[Class[_]],List(typ),sep+2)
            case t if (t == classOf[Seq[_]]) || (t == classOf[List[_]]) => 
              val nextLevelClass = field
                  .getGenericType.asInstanceOf[ParameterizedType]
                  .getActualTypeArguments()(0).asInstanceOf[Class[_]]

              val spt = str.split((sep + 1).toChar)

              val sub = if (spt.length == 1 && spt(0) == EmptyList)
              {
                            println("empty")
                            List[ClassTag[nextLevelClass.type]]()
              }
                else if (spt.length == 1 && spt(0) == "nulls") {
                List(null)
              }
              else if (spt.length == 1 && spt(0) == "null") {
                println("cas null")
                null
              }
                  else {
                  val nexLevelType = typ.asInstanceOf[TypeRefApi].args.head
                  println("cas else "+nexLevelType)
                  spt.map {
                      s => if ((nexLevelType =:= typeOf[Int]) || (nexLevelType =:= typeOf[Double]) || (nexLevelType =:= typeOf[Boolean])) {
                        primitivestype(nexLevelType,s)
                        }
                          else if  (nexLevelType.toString=="String") s
                                else  s.func2(nextLevelClass,typ.asInstanceOf[TypeRefApi].args, sep + 2)
                  }.toList
                }
                 args = args :+ sub
            case other =>
              throw new Exception(s"${field.getName} can not be parsed, " +
                "make sure if it is either a primitive type or a collection of EntitySerializable.")
          }
      }

      val finalArgs = args.map(_.asInstanceOf[AnyRef])
      println("finalArgs = " + finalArgs.toList)
      constructor.newInstance(finalArgs: _*).asInstanceOf[A]
    }
}

My problem is that I can't get the fields of the "nextLevelClass" when it's a Seq or List.

by Tahar IFRAH at August 03, 2015 04:17 PM

SORM: How can I use Sorm in Scala 2.11.6

How can I use SORM with Scala 2.11.6? In my build.sbt I am using...

libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  "org.scala-lang" % "scala-library" % "2.11.6",
  "org.sorm-framework" % "sorm" % "0.3.18",
  "org.webjars" %% "webjars-play" % "2.4.0",
  "org.webjars" % "bootstrap" % "3.3.5",
  specs2 % Test
)

When compiling I get the following errors:

[error] Modules were resolved with conflicting cross-version suffixes in ...
[error] org.scala-lang.modules:scala-xml _2.11, _2.12.0-M1
[error] org.scala-lang.modules:scala-parser-combinators _2.11, _2.12.0-M1

it seems that it only works up to version 2.4.X

by RobsonFagundes at August 03, 2015 04:13 PM

High Scalability

Seven of the Nastiest Anti-patterns in Microservices

Daniel Bryant gave an energetic talk at Devoxx UK 2015 on lessons learned from over five years of experience with microservice based projects. The talk: The Seven Deadly Sins of Microservices: Redux (video, slides).

If you don't want to risk your immortal API then be sure to avoid:

  1. Lust - using the latest and greatest tech with the idea it will solve all your problems. It won't. Do you really need microservices at all? If you do go microservices do you really need new tech in your stack? Choose boring technology. Know why you are choosing something. A monolith can perform better and because a monolith can be developed faster it may also be the correct choice in proving your business case 
  2. Gluttony - excessive communication protocols. Projects often have a crazy number of protocols for gluing parts together. Standardize on the glue across an organization. Choose one synchronous and one asynchronous protocol. Don't gold-plate.
  3. Greed - all your service are belong to us. Do not underestimate the impact moving to a microservice approach will have on your organization. Your business organization needs to change to take advantage of microservices. Typically orgs will have silos between Dev, QA, and Ops with even more silos inside each silo like front-end, middleware, and database. Use cross functional teams like Spotify, Amazon, and Gilt. Connect rather than divide your company. 
  4. Sloth - creating a distributed monolith. If you can't deploy your services independently then they aren't microservices. Decouple. Transform data at a less central part of the stack. Some options are schema-first design and consumer-driven contracts.
  5. Wrath - blowing up when bad things happen. Bad things happen all the time so you need to test. Microservices are inherently distributed so you have network problems to deal with that weren't a problem in a monolith. The book Release It! has a lot of good fault tolerance patterns. Operationally you need to implement continuous delivery, agile, and devops. Test for failures using real life disaster scenarios testing, live injection failure testing, and something like Amazon's Simian Army.
  6. Envy - the shared single domain fallacy. A lot of time has been spent building and perfecting the model of a single domain. There's one big database with a unified schema. Microservices decompose a system along different lines and that can cause contention in an organization. Reports can be generated using pull by service or data pumps with events. 
  7. Pride - testing in the world of transience. Does your stuff really work? We all make mistakes. Think testing at the developer level, operational level, and business level. Surprisingly little has been written about testing microservices. Invest in your build pipeline testing. Some tools: Serenity BOD, Wiremock/Saboteur, Jenkins Performance Plugin. Testing in production is an emerging idea with companies that deploy many microservices.

by General Chicken at August 03, 2015 03:56 PM

StackOverflow

Automatically convert Scala code to Java code

I have an app written in Scala and some of my team members want a Java version of it. It is a demo app to use another API written in Scala, and they want a Java version of the app to be able to use the API from Java. However, the app is somewhat large and I don't want to manually rewrite it in Java (and they don't want to learn Scala). Is there any tool that will automatically generate (readable) Java code from the Scala code?

by Jus12 at August 03, 2015 03:56 PM

How to perform a simple json post with spray-json in spray?

I'm trying to perform a simple JSON post with spray, but it seems that I can't get an HTTP entity for a JSON object that can be marshalled.

here is my error:

[error] ...../IdeaProjects/PoolpartyConnector/src/main/scala/org/iadb/poolpartyconnector/thesaurusoperation/ThesaurusCacheService.scala:172: could not find implicit value for evidence parameter of type spray.httpx.marshalling.Marshaller[spray.json.JsValue]

[error] val request = Post(s"$thesaurusapiEndpoint/$coreProjectId/suggestFreeConcept?", suggestionJsonBody)

and the code that comes with it:

 override def createSuggestedFreeConcept(suggestedPrefLabel: String, lang: String, scheme: String, b: Boolean): String = {

    import system.dispatcher
    import spray.json._

    val pipeline      = addCredentials(BasicHttpCredentials("superadmin", "poolparty")) ~> sendReceive


    val label              = LanguageLiteral(suggestedPrefLabel, lang)
    val suggestion         = SuggestFreeConcept(List(label), b, Some(List(scheme)), None, None,None, None)
    val suggestionJsonBody = suggestion.toJson

    val request            = Post(s"$thesaurusapiEndpoint/$coreProjectId/suggestFreeConcept?", suggestionJsonBody)

    val res                = pipeline(request)

    getSuggestedFromFutureHttpResponse(res) match {

      case None => ""
      case Some(e) => e

    }
  }

Please, does anyone have an idea of what is going on with the implicit marshaller? I thought spray-json would come with an implicit marshaller.
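
A hedged sketch of the usual fix: the Marshaller typically comes from spray.httpx.SprayJsonSupport together with a RootJsonFormat for the payload type. The formats below are hypothetical; the field counts (2 and 7) are only inferred from the constructor calls shown above:

import spray.httpx.SprayJsonSupport._
import spray.json.DefaultJsonProtocol._

// Hypothetical root formats for the question's case classes.
implicit val languageLiteralFormat = jsonFormat2(LanguageLiteral)
implicit val suggestFreeConceptFormat = jsonFormat7(SuggestFreeConcept)

// With a RootJsonFormat in scope, SprayJsonSupport supplies the Marshaller,
// so the case class can be passed to Post directly (no explicit .toJson needed).
val request = Post(s"$thesaurusapiEndpoint/$coreProjectId/suggestFreeConcept", suggestion)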

by MaatDeamon at August 03, 2015 03:55 PM

How to stashAll()?

I based my development on the existence of a stashAll() method, which I assumed existed when I stumbled upon stash(), but I did not find it... Is there an alternative way to stash all the messages in my actor's mailbox?

by wipman at August 03, 2015 03:54 PM

How does Immutable.js comparison work?

I have been using Immutable.js for some time, and I just found out that comparison does not work the way I thought it did this whole time.

Code is simple

a = Immutable.Map({a:1, b:[1,2,3]})
b = Immutable.Map({a:1, b:[1,2,3]})
a == b // false
Immutable.is(a, b) // false

Is there any way to compare two identical Immutable objects and get true?

Functional logic tells me they should be equal, but JavaScript says no :)

by Schovi at August 03, 2015 03:53 PM

How to handle (\w+)+? regex as string list in pattern matching scala

I am developing a Scala project in which I need to create some CLI-like commands, for example: exit, help, load filename, find name. I want to handle these commands with regex pattern matching, but I have a problem. If the string is "load filename", the filename can be handled; however, if the string is "load filename1 filename2", I cannot handle filename1 and filename2 together.

My code is as follow:

val help = """help""".r
val exit = """exit""".r
val load = """load(\s+\w+)+?\s*""".r
val find = """findbyName(\s+\w+)+?\s*""".r

val input = "load filename1"
 input match {
    case help() => println("help")
    case exit() => println("exit")
    case load(filename) => println(filename)
    case find(name) => println(name)
    case _ => println("error")
  }

***********************
console: filename1

I want to print all the filenames if there are n of them. How can I proceed?
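
One possible approach (a sketch, not the only option): since a repeated capturing group like (\s+\w+)+ only retains the last repetition, capture the whole argument list in one group and split it afterwards:

// Capture everything after the command word, then split on whitespace.
val load = """load\s+(.+)""".r
val find = """findbyName\s+(.+)""".r

"load filename1 filename2" match {
  case load(args) => args.trim.split("""\s+""").foreach(println)
  case find(name) => println(name)
  case _          => println("error")
}
// prints filename1 and filename2 on separate lines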

by Burak Dağlı at August 03, 2015 03:51 PM

Is there a C-like mini-syntax/language that can be both translated to native C/C++ and Java?

I would like to allow my application to be "scripted". The script language should be typed and C-like. It should have the usual control statements, primitive types, arrays, and operators that can be found in both C and Java. The scripts should be able to define functions, and call pre-defined API functions, like sin() ... that I choose to offer. If the script passes the syntax check, it would then be translated by the application to Java and then compiled on the fly, but it should also be possible to translate it to C/C++, and have it natively compiled. The syntax check and translation to Java/C should run in the JVM. I don't need an interpreter for it, since it would always be translated to Java/C. Only a syntax-checker and translator.

Is there such a language out there? If not, what is the easiest way of doing this in the JVM, taking into consideration that I'm not knowledgeable in compiler/interpreter programming? (If I was, I would not need to ask this question ...)

If there is a "Scala" solution, it would also be fine, since I'm actually moving my Java code to Scala.

[EDIT] The only reason I want C/C++ translation is performance. I expect a lot of "bit manipulation" over arrays, which Java isn't really suited for, in particular due to range-checking at every array index operation. I also expect many objects, which costs indirectly in GC cycles.

[EDIT#2] OK, I see I have to get concrete. I am considering programming a Minecraft clone, as an exercise, to get my mind off "Business Computing". I'm talking about the engine, not the gameplay. And I'm more interested in the server-side than in the 3D, because I'm a "server guy". We're talking about using the flyweight pattern to represent millions of objects (blocks/voxels), and accessing them all many times per second. This isn't what the JVM was made for. Look at this article for an example of what I mean: Why we chose CPP over Java

I don't want to program everything in C/C++, but I suspect this is the only way to get good performance. Another example of what I mean is VoltDB, which is arguably the fastest SQL database out there. They wrote it in Java and C/C++, using Java for I/O and network, and C for the heavy memory manipulation and bit fumbling. The user-written stored procedures are in Java, but I don't think they need to be. I think it should be possible to compile to Java on the client and in test builds, and compile to C on the server, where a full development environment can be configured.

by Sebastien Diot at August 03, 2015 03:49 PM

Passing & validating multiple values to Scala macro

So, I am new to macros in Scala and am having some difficulty with my current objective. I have a class that performs Json parsing of case classes. If the given case class has fields that are of type Enumeration, an implicit array of those enumerations needs to be provided when calling the parser.

I want to use a macro to verify the usages of the parser (at compile time), examine the class that is being sent to the parser and verify whether it has enums in it. If it does have enums, I want to verify that the implicit array of enums is not empty.

Here is what I have so far. I'd like to avoid c.eval if possible, so if there is a better way, I am all for it. I keep getting errors about "forgetting to splice a variable" without much more context than that. Am I even on the right track here?

object Macro {
  def compileTimeCheck(tpe: Any, enums: Array[Enumeration]) = macro impl
  def impl(c: Context)(tpe: c.Expr[Any], enums: c.Expr[Array[Enumeration]]) : c.Expr[Any] = {
    import c.universe._
    reify {
      if (c.eval(enums).isEmpty) {
        if (c.eval(tpe).getClass.getDeclaredFields.exists(_.getType.isInstanceOf[Enumeration])) {
          c.abort(c.macroApplication.pos,
            "You must provide an implicit list of enums in this case")
        }
      }
    }
  }
}

by Jake Sankey at August 03, 2015 03:46 PM

TheoryOverflow

Are there any learning algorithms with any provable guarantees for manifold learning or manifold regularization?

First of all, I want to make clear that my question is about algorithms. I'd like to know if there are any algorithms with provable guarantees in the context of manifold learning (or manifold regularization).

I do not necessarily require that the guarantees are with respect to generalization. Those are always nice, but I just want to emphasize that the main focus of the question is about algorithms (with guarantees), not necessarily about generalization.

What I do want is to know whether, given a finite sample of data (maybe a mix of labeled and unlabeled data), we can provably learn. For example, are there algorithms that guarantee that we learn the manifold or its structure? Or maybe, are there algorithms that theoretically guarantee better prediction under the manifold assumption? Any algorithm with provable guarantees is good.


To be clear about what I mean by provable guarantees, I will give a couple of examples of scenarios/problems (and algorithms) where there are provable guarantees for (learning) algorithms.

Consider the non-negative matrix factorization problem where we have:

$$M = AW$$

s.t. $A$ and $W$ are $m \times k$ and $k \times n$ and are required to be entry-wise nonnegative. In fact, let's suppose that the columns of $M$ each sum to one. One can interpret this problem as follows. Each column is a document from our model. It's generated as a convex combination of topics (i.e. the columns of $A$). The combination is specified by the columns of $W$. Can we recover the best non-negative factorization of the model? It turns out that Vavasis proved it's NP-hard [1]. However, under the separability condition on the topics, one can show that there exists a polynomial time algorithm to compute such a non-negative factorization (of minimum inner dimension). So this scenario has two interesting aspects:

  1. We can learn the true model (the non-negative factorization) from the (sample) documents (under some probability conditions)
  2. There is a polynomial time algorithm for it.

The algorithm for this is based on finding the "anchor words" (i.e. the highly specific words for each topic) and then taking advantage of that to find the factorization. Some of the details for this can be found in this monograph on algorithmic aspects of machine learning.

This is exactly the type of thing I am interested in: learning algorithms with some provable guarantees under some conditions. For more examples, the following monograph has many that are appropriate. To name a few, the monograph explains tensor methods, ICA, alternating minimization, mixtures of Gaussians, matrix completion, phylogenetic trees, noisy parity and more!


References:

[1] S. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization, pages 1364-1377, 2009.

by Charlie Parker at August 03, 2015 03:42 PM

StackOverflow

Non-required arguments in compojure-api/schema/swagger?

When I have a definition of an API like this:

(POST* "/register" []
    :body-params [username :- String,
                  password :- String,
                  name :- String]
    (ok)))

what's the appropriate way of making name optional? Is it:

(POST* "/register" []
    :body-params [username :- String,
                  password :- String,
                  {name :- String nil}]
    (ok)))

by Pablo at August 03, 2015 03:40 PM

Add or append new element to XML file in Scala instead of replacing it

My Scala code currently ends up replacing an entire section of my XML file with the new tag that I'm adding. I want it to only add the tag once as a child of ClientConfig, but it replaces all the tags present in this section with itself.

val data = XML.load(file)
val p = new XMLPrettyPrinter(2)
val tryingtoAdd = addNewEntry(data,host,env)
p.write(tryingtoAdd)(System.out)

where host=bob and env=flat are previously defined and addNewEntry is defined as follows

 private def isCorrectLocation(parent: Elem, node: Elem, host: String): Boolean = {
    parent.label == "ClientConfig" && node.label == "host"
  }

  def addNewEntry(elem:Elem, host: String, env: String): Elem ={
    val toAdd = <host name={host} env={env} />
    def addNew(current: Elem): Elem = current.copy(
      child = current.child.map {
        case e: Elem if isCorrectLocation(current, e, host) ⇒ toAdd
        case e: Elem ⇒ addNew(e)
        case other ⇒ other
      }
    )
    addNew(elem)
  }

The xml it produces is

<ClientConfig>
    <host name="bob" env="flat"/>
    <host name="bob" env="flat"/>
    <host name="bob" env="flat"/>
    <host name="bob" env="flat"/>
</ClientConfig>

where instead I want it to just append it as a single child of ClientConfig such as this where the last three children were already present in the file

<ClientConfig>
    <host name="bob" env="flat"/>
    <host name="george" env="flat"/>
    <host name="alice" env="flat"/>
    <host name="bernice" env="flat"/>
</ClientConfig>

What do I do? For example, Python has a simple insert method.
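
One way to get the append behaviour, as a sketch along the lines of the question's own addNew (assuming the element to modify is the ClientConfig node itself rather than its host children):

import scala.xml.Elem

// Append the new <host/> to ClientConfig's existing children instead of
// replacing each matching child with it.
def addNewEntry(elem: Elem, host: String, env: String): Elem = {
  val toAdd = <host name={host} env={env}/>
  def addNew(current: Elem): Elem =
    if (current.label == "ClientConfig")
      current.copy(child = current.child :+ toAdd)
    else
      current.copy(child = current.child.map {
        case e: Elem => addNew(e)
        case other   => other
      })
  addNew(elem)
}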

by Zee at August 03, 2015 03:36 PM

Scala source-to-source transformation

I would like to know if there is any open source project or tool for Scala source-to-source transformation. My purpose is to write a pass that modifies a specific part of Scala source code. Actually, Clang is exactly what I'm looking for, but unfortunately it does not support Scala or even Java.

I know Scala is a relatively new language, so maybe no one has done this before. If that is the case, would you give me some suggestions?

by Cody Yu at August 03, 2015 03:34 PM

Planet Clojure

Using Datomic With Immutant

Immutant 2.x is just a set of libraries, usable from a standard Clojure application. The Datomic peer client is also a library, so in theory we can use Immutant and Datomic together in the same application. But in practice, it's not that simple.

The sticking point is HornetQ - Datomic uses HornetQ to connect peers to the transactor and depends on HornetQ 2.3.x, while Immutant depends on 2.4.x. We should be able to resolve this with the proper dependency exclusions - the API that Datomic uses in 2.3.x is available in 2.4.x, but there are two issues that prevent that from working with the current stable release of Immutant (2.0.2):

  1. Immutant 2.0.x uses the JMS 2.0 API, which didn't appear in HornetQ until 2.4.0, so 2.4.x is required.
  2. The Datomic transactor is running HornetQ 2.3.x, and a 2.4.x client can't connect to a 2.3.x server, so 2.3.x is required.

Hence our pickle.

But, all is not lost - with Immutant 2.0.x, you have two (non-awesome) options for using Datomic, and if you are willing to use recent Immutant incremental builds (and are willing to upgrade to Immutant 2.1.0 when it is released in a few weeks), you have a third, more palatable option.

Option 1 - No Immutant Messaging

The first option for using Datomic with Immutant 2.0.x is to not use Immutant messaging. This requires either depending individually on the Immutant libraries you are using:

        :dependencies [[com.datomic/datomic-pro "0.9.5206"]
                       [org.immutant/web "2.0.2"]
                       [org.immutant/scheduling "2.0.2"]
                       ...]

or using the catch-all artifact, and excluding messaging:

        :dependencies [[com.datomic/datomic-pro "0.9.5206"]
                       [org.immutant/immutant "2.0.2"
                        :exclusions [org.immutant/messaging]]
                       ...]

But this means you can't use any of the Immutant messaging features, which isn't great.

Option 2 - In WildFly Only

An Immutant application deployed to a WildFly application server doesn't directly use any on the HornetQ APIs, and instead uses the JMS API to communicate with the HornetQ provided by WildFly. That HornetQ is ClassLoader-isolated, which means your application can bring in its own version of HornetQ (in this case, 2.3.x via Datomic), which can be used without issue.

But this means you have to do all of your development against an application running inside WildFly, which isn't a great development experience. With our "dev" war, you can still have a REPL-driven process, but it is definitely more painful than out-of-container development.

Option 3 - Use Recent Incrementals, i.e. 2.1.0

For the soon-to-be-released Immutant 2.1.0, we're working on supporting Red Hat JBoss Enterprise Application Platform (which is a mouthful, so we'll just call it EAP). EAP is the commercialized version of Red Hat's open source JBoss Application Server (now known as WildFly), and the current version (6.4.0) is based off an older WildFly that uses HornetQ 2.3.x. We'll cover what EAP support really means in a future blog post - what matters today is that changes we've made in Immutant to support EAP allow you to use Immutant messaging with Datomic both in and out of WildFly (and soon EAP).

The only issues with this option is you have to use a recent incremental build of Immutant until we release 2.1.0, and do a few dependency exclusions/inclusions to make Immutant messaging and Datomic play nicely. Luckily, we've figured that out for you! The bare minimum to get things working is:

      :dependencies [[org.immutant/immutant "2.x.incremental.602"]
                     ;; Datomic transitively brings in HornetQ 2.3.17.Final, which
                     ;; overrides the HornetQ 2.4.5.Final from org.immutant/messaging
                     [com.datomic/datomic-pro "0.9.5206"]
                     ;; org.immutant/messaging requires this, but Datomic doesn't
                     ;; bring it in, so we have to depend on it explicitly
                     [org.hornetq/hornetq-jms-server "2.3.17.Final"
                      :exclusions [org.jboss.jbossts.jts/jbossjts-jacorb]]]

Now that you have it working, you'll probably notice that Leiningen's pedantic report is chock full of warnings. Both Datomic and Immutant have large dependency trees, so conflicts are inevitable. If you want to get rid of those warnings, we've figured that out for you as well:

      :dependencies [[org.immutant/immutant "2.x.incremental.602"
                      :exclusions [org.hornetq/hornetq-server
                                   org.hornetq/hornetq-jms-server
                                   org.hornetq/hornetq-journal
                                   org.hornetq/hornetq-commons]]
                     [com.datomic/datomic-pro "0.9.5206"
                      :exclusions [org.slf4j/slf4j-nop
                                   joda-time
                                   commons-codec
                                   org.jboss.logging/jboss-logging]]
                     [org.hornetq/hornetq-jms-server "2.3.17.Final"
                      :exclusions [org.jboss.spec.javax.transaction/jboss-transaction-api_1.1_spec
                                   org.jboss.logging/jboss-logging
                                   org.jboss/jboss-transaction-spi
                                   org.jgroups/jgroups
                                   org.jboss.jbossts.jts/jbossjts-jacorb]]]

Note again that this option currently requires you to run a recent incremental build (#602 or newer), which requires relying on our incremental repo:

      :repositories [["Immutant incremental builds" "http://downloads.immutant.org/incremental/"]]

Get In Touch

If you have any questions, issues, or other feedback about Datomic with Immutant, you can always find us on #immutant on freenode or our mailing lists.

by The Immutant Team at August 03, 2015 03:32 PM

Lobsters

Planet Clojure

Immutant 2.0.2 Patch Release

This patch release resolves issues with Websocket on-close handlers not being fired when close frames aren't actually sent.

What is Immutant?

Immutant is an integrated suite of Clojure libraries backed by Undertow for web, HornetQ for messaging, Infinispan for caching, Quartz for scheduling, and Narayana for transactions. Applications built with Immutant can optionally be deployed to a WildFly cluster for enhanced features. Its fundamental goal is to reduce the inherent incidental complexity in real world applications.

Get In Touch

If you have any questions, issues, or other feedback about Immutant, you can always find us on #immutant on freenode or our mailing lists.

Issues resolved in 2.0.2

  • [IMMUTANT-563] - on-close handler for async channels doesn't fire if server is stopped
  • [IMMUTANT-564] - Websocket on-close handlers not being called when Safari client goes away

by The Immutant Team at August 03, 2015 03:27 PM

CompsciOverflow

Is $\epsilon$ always contained in $\Sigma^*$?

Please correct me on any terminology. For some reason I'm a bit confused.

$\Sigma = \{\epsilon, 0, 1\}$

This means my alphabet, $\Sigma$, contains three symbols ($\epsilon, 0, 1$).

$\Sigma^*$ is the language over $\Sigma$, and it equals $\{\epsilon, 0, 1, 01, 10\}$.

My regular expression for $\Sigma^*$: $\epsilon+0+1+(01)+(10)$.

First question: Does every $\Sigma^*$ include $\epsilon$? I see some with, and some without. I feel like this is a big difference because your regular expression and DFSA will be different.

Second question: At this point, would I have five accepting states in a DFSA? Since the first state is the empty string, is it $\epsilon$? Or is the first state just nothing, which transitions to a second state via $\epsilon$ which contains the empty string?
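
For reference, the standard textbook definition is $\Sigma^* = \bigcup_{n \geq 0} \Sigma^n$ with $\Sigma^0 = \{\epsilon\}$, so $\epsilon \in \Sigma^*$ for every alphabet $\Sigma$; under that definition $\epsilon$ is a string rather than a symbol of $\Sigma$, and $\Sigma^*$ is infinite whenever $\Sigma$ is non-empty.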

by fossdeep at August 03, 2015 03:19 PM

StackOverflow

How does Rose Tree unfold work (from Origami Programming)

I have been reading the article Origami Programming by Jeremy Gibbons and am having trouble figuring out how the unfoldR and unfoldF functions work for Rose Trees.

In the paper the Rose Tree type is defined as:

data Rose α = Node α (Forest α)
type Forest α = List (Rose α)

The unfoldR and unfoldF functions are mutually recursive and defined as:

unfoldR :: (β → α) → (β → List β) → β → Rose α
unfoldR f g x = Node (f x) (unfoldF f g x)

unfoldF :: (β → α) → (β → List β) → β → Forest α
unfoldF f g x = mapL (unfoldR f g) (g x)

It looks like that, except in a few small edge cases, these functions will recurse infinitely. How do these two mutually recursive functions terminate?

by egerhard at August 03, 2015 03:18 PM

clojure recursion conj a list

((fn foo [x] (when (> x 0) (conj (foo (dec x)) x))) 5)

For this code, the result is [5 4 3 2 1]. Why isn't it [1 2 3 4 5]? I see we conj a value onto the result of the recursive foo call, so I thought it should be 1 2 3 4 5. I need help understanding this. Thanks.

by BufBills at August 03, 2015 03:17 PM

Is topByKey scalable?

When I use topByKey with different clusters, this code takes the same amount of time to execute independently of the number of slaves I use. The rdd_distance size is between 10^8 and 10^12 units.

        for( ind <- 1 to maxIterForYstar  ) {

        var rdd_distance = rdd_temp.cartesian(parsedData).map{ case (x,y) => (x.get_id,(y.get_vector,-Vectors.sqdist(x.get_vector,y.get_vector))) }

        var rdd_knn_bykey = rdd_distance.topByKey(k)(Ordering[(Double)].on(x=>x._2))
    }

So my question is about knowing whether topByKey is scalable or whether there is something wrong with my code.

by KyBe at August 03, 2015 03:13 PM

CompsciOverflow

Path in digraph passing through given set of vertices

Suppose we have a digraph G, a set W of its vertices, and two (possibly equal) vertices s and f. I'm looking for an algorithm which solves the following problem: decide whether there is a path from s to f passing through all vertices of W; if yes, return any such path. By "path" I mean a path in the most general sense: repetitions of edges and vertices are allowed.

Unfortunately no information about the graph is known: it needn't be connected in any sense, it might have oriented cycles, etc.

My idea was to modify DFS somehow, but I didn't manage to proceed. Any ideas and hints will be highly appreciated. Thanks in advance.

by Igor at August 03, 2015 03:11 PM

StackOverflow

What is the meaning of the error 'command failed because the 'ok' field is missing or equals 0' when executing RawCommand in ReactiveMongo?

I am trying to execute the following command using RawCommand in ReactiveMongo:

val commandDoc =
      BSONDocument(
        "update" -> "users",
        "updates" -> BSONArray(
          BSONDocument("q" -> BSONDocument("_id" -> BSONObjectID("53f265da13d3f885ed8bf75d")),
            "u" -> BSONDocument("$push" -> BSONDocument("v" -> 6)), "upsert" -> false, "multi" -> false)
         // BSONDocument("q" -> query, "u" -> update2, "upsert" -> false, "multi" -> false)
        ),
        "ordered" -> true
      )

    // we get a Future[BSONDocument]
    val futureResult = con("tests").command[BSONDocument](RawCommand(commandDoc))

    futureResult.map { result => // result is a BSONDocument

      println(result)

      true
    }

But I got this strange error as a result:

[DefaultCommandError: BSONCommandError['command failed because the 'ok' field is missing or equals 0'] with original doc { ok: BSONInteger(1), nModified: BSONInteger(0), n: BSONInteger(0) }]

What's happening?

by Lucas Batistussi at August 03, 2015 03:05 PM

Twitter

Twitter at @MesosCon 2015

Once again, we’re pleased to sponsor and participate in #MesosCon. As heavy users of both Mesos and Apache Aurora to power our cloud infrastructure, we’re excited to be part of this growing community event.

The conference, organized by the Apache Mesos community, features talks on the popular open source cluster management software and its ecosystem of software for running distributed applications at the scale of tens of thousands of servers.

Conference highlights
This year’s #MesosCon will be significantly larger than last year and features simultaneous tracks including beginner talks, the Mesos core, frameworks, and operations. We have a stellar lineup of invited keynote speakers including Adrian Cockcroft (@adrianco, Battery Ventures), Neha Narula (@neha, MIT), Peter Bailis (@pbailis, UC Berkeley), and Benjamin Hindman (@benh, Mesosphere).

We’re also pleased that Twitter will have a strong presence. We’ll be sharing our latest work as well as best practices from the last four-plus years of using Apache Mesos and Apache Aurora. And if you’re interested in learning more about engineering opportunities, stop by our booth.

There’s a pre-conference hackathon that several of us Twitter folks will be attending. We’re also hosting a #MesosSocial in our Seattle office on Wednesday, August 19 to kick off the conference. You can follow @TwitterOSS for updates when we announce more details next week. See you at #MesosCon!

Twitter speakers

The New Mesos HTTP API - Vinod Kone, Twitter, Isabel Jimenez (@ijimene), Mesosphere
This session will provide a comprehensive walkthrough of recent advancements with the Mesos API, explaining the design rationale and highlighting specific improvements that simplify writing frameworks to Mesos.

Twitter’s Production Scale: Mesos and Aurora Operations - Joe Smith, Twitter
This talk will offer an operations perspective on the management of a Mesos + Aurora cluster, and cover many of the cluster management best practices that have evolved here from real-world production experience.

Supporting Stateful Services on Mesos using Persistence Primitives - Jie Yu, Twitter, and Michael Park, Mesosphere
This talk will cover the persistence primitives recently built into Mesos, which provide native support for running stateful services like Cassandra and MySQL in Mesos. The goal of persistent primitives is to allow a framework to have assured access to its lost state even after task failover or slave restart.

Apache Cotton MySQL on Mesos - Yan Xu, Twitter
Cotton is a framework for launching and managing MySQL clusters within a Mesos cluster. Recently open-sourced by Twitter as Mysos and later renamed, Cotton dramatically simplifies the management of MySQL instances and is one of the first frameworks that leverages Mesos’ persistent resources API. We’ll share our experience using this framework. It’s our hope that this is helpful to other Mesos framework developers, especially those wanting to leverage Mesos’ persistent resources API.

Tactical Mesos: How Internet-Scale Ad Bidding Works on Mesos/Aurora - Dobromir Montauk, TellApart
Dobromir will present TellApart’s full stack in detail, which includes Mesos/Aurora, ZK service discovery, Finagle-Mux RPC, and a Lambda architecture with Voldemort as the serving layer.

Scaling a Highly-Available Scheduler Using the Mesos Replicated Log: Pitfalls and Lessons Learned - Kevin Sweeney, Twitter
This talk will give you tools for writing a framework scheduler for a large-scale Mesos cluster using Apache Aurora as a case study. It will also explore the tools the Aurora scheduler has used to meet these challenges, including Apache Thrift for schema management.

Simplifying Maintenance with Mesos - Benjamin Mahler, Twitter
Today, individual frameworks are responsible for maintenance which poses challenges when running multiple frameworks (e.g. services, storage, batch compute). We’ll explore a current proposal for adding maintenance primitive in Mesos to address these concerns, enabling tooling for automated maintenance.

Generalizing Software Deployment - The Many Meanings of “Update” - Bill Farner, Twitter
Bill will present the evolution of how Apache Aurora managed deployments and describe some of the challenges imposed by wide variance in requirements. This talk will also share how deployments on Aurora currently run major services at Twitter.

Per Container Network Monitoring and Isolation in Mesos - Jie Yu, Twitter
This talk will discuss the per container network monitoring and isolation feature introduced in Mesos 0.21.0. We’ll show you the implications of this approach and lessons we learned during the deployment and use of this feature.

Join us!
Good news: there’s still time to register for #MesosCon and join us in Seattle on August 20-21.

There’s a pre-conference hackathon that several of us Twitter folks will be attending. We’re also hosting a #MesosSocial in our Seattle office on Wednesday, August 19 to kick off the conference. You can follow @TwitterOSS for updates when we announce more details next week. See you at #MesosCon!

August 03, 2015 03:01 PM

QuantOverflow

Is there a considered floor for variation the 1st principal component must explain?

I am wondering if there is a considered floor to the percentage of variation the 1st principal component must explain in general for PCA, i.e. any lower and it is not worth doing PCA at all? Is the floor near 75%, 80%, or should the first 3 explain a minimum of 90%, or what?

As a follow on, if I have 10 X variables (index & sector returns) and only 6 are highly correlated (I take a correlation above 0.8 to be highly correlated - or is that too high?) should I just do PCA on those 6, then combine the 1st two principal components with the 4 remaining original variables and use that as my X for regression?

What I was doing was PCA on all 10 variables, taking the first 2 or 3 principal components, regressing those on Y (which is a single stock's return), then taking those PCs' betas and matrix-multiplying them by the eigenvectors to back out sensitivities to the original 10 factors. But I am left not knowing whether all 10 factors are significant (all I know is that the first 2 PCs are significant).

by NickF at August 03, 2015 03:01 PM

StackOverflow

Play Enumeratee that counts and samples values from input Enumerator

I am interested in how to best implement something like that with Play's Iteratee library:

def sampleEvery[A](i: Int): Enumeratee[A, Int] = ???

such that given a stream of As, the Enumeratee would count them and emit the current value for that counter every i, and then add the last value as well (if possible without repeating the value).

For example, Enumerator('q','w','e','r','t','y','u','i','o').through(sampleEvery(3)) would yield something like Enumerator(0,3,6,8).

by betehess at August 03, 2015 02:51 PM

TheoryOverflow

Rendering of type-level computation

Programming languages with dependent types and/or higher-kinded types feature what might be called compile-time computation at the type-level. This is usually defined as follows (I'm omitting some details for simplicity), see e.g. [1, 2, 3].

  • A notion $\equiv_{type}$ of type-equivalence is defined on types, e.g. $(\lambda x^{K}.\alpha)\beta \equiv_{type} \alpha[\beta/x]$ or $(\Pi x^{\alpha}.\beta)M \equiv_{type} \beta[M/x]$, where $M$ ranges over programs, $K$ over kinds, and $\alpha, \beta$ over types.

  • Then the typing system is enriched with a variant of the following rule. $$ \frac{ \Gamma \vdash M : \alpha \quad \alpha \equiv_{type} \beta }{ \Gamma \vdash M : \beta } $$

However, we can easily extend the relation $\equiv_{type}$ to typing environments $\Gamma$ pointwise, and also to programs, e.g. $\lambda x^{\alpha}.M \equiv_{type} \lambda x^{\beta}.M$ whenever $\alpha \equiv_{type} \beta$, leading to a rule

$$ \frac{ \Gamma \vdash M : \alpha \quad \Gamma \equiv_{type} \Delta \quad M \equiv_{type} N \quad \alpha \equiv_{type} \beta }{ \Delta \vdash N : \beta } $$

I have never seen this done. I assume that it's just a question of convenience and that both approaches are equally expressive, but I am not sure! Is there some problem with the second approach?


  1. H. Barendregt, Introduction to generalised type systems.

  2. H. Barendregt, Lambda Calculi with Types.

  3. B. C. Pierce, Types and Programming Languages.

by Martin Berger at August 03, 2015 02:49 PM

StackOverflow

Counting with "Shapeless style" Dense Binary Numbers

I have been working on a "shapeless style" implementation of Okasaki's dense binary number system. The encoding is a sort of HList of binary Digits:

sealed trait Digit
case object Zero extends Digit
case object One extends Digit

sealed trait Dense { type N <: Dense }

final case class ::[+H <: Digit, +T <: Dense](digit: H, tail: T) extends Dense {
  type N = digit.type :: tail.N
}

sealed trait DNil extends Dense {
  type N = DNil
}

case object DNil extends DNil

I have completed a first draft of my ops, which include the standard math operations you'd expect for natural numbers. Only now do I realize a big problem in my encoding.

Take the successor operation:

trait Succ[N <: Dense] extends DepFn1[N] { type Out <: Dense }

object Succ {
  type Aux[N <: Dense, Out0 <: Dense] = Succ[N] { type Out = Out0 }

  def apply[N <: Dense](implicit succ: Succ[N]): Aux[N, succ.Out] = succ

  implicit val succ0: Aux[DNil, One.type :: DNil] =
    new Succ[DNil] {
      type Out = One.type :: DNil
      def apply(DNil: DNil) = One :: DNil
    }

  implicit def succ1[T <: Dense]: Aux[Zero.type :: T, One.type :: T] =
    new Succ[Zero.type :: T] {
      type Out = One.type :: T
      def apply(n: Zero.type :: T) = One :: n.tail
  }

  implicit def succ2[T <: Dense, S <: Dense](implicit ev: Aux[T, S],
                                            sl: ShiftLeft[S]): Aux[One.type :: T, sl.Out] =
    new Succ[One.type :: T] {
      type Out = sl.Out
      def apply(n: One.type :: T) = n.tail.succ.shiftLeft
    }
}

I'd expect this to work:

type _0 = DNil
val _0: _0 = DNil

val _1 = _0.succ
type _1 = _1.N

val _2 = _1.succ
type _2 = _2.N

trait Induction[A <: Dense]

object Induction{
  def apply[A <: Dense](a: A)(implicit r: Induction[A]) = r
  implicit val r0 = new Induction[_0] {}
  implicit def r1[A <: Dense](implicit r: Induction[A], s: Succ[A])= new Induction[s.Out]{}
}

Induction(_0)
Induction(_1)
Induction(_2) // <- Could not find implicit value for parameter r...

How can I properly use Succ for induction over Dense?

NOTE

I use a value class to inject my syntax.

For completeness, here is my ShiftLeft typeclass, and its helper SafeCons:

/* Disallows Leading Zeros */
trait SafeCons[H <: Digit, T <: Dense] extends DepFn2[H, T] { type Out <: Dense }

trait LowPrioritySafeCons {
  type Aux[H <: Digit, T <: Dense, Out0 <: Dense] = SafeCons[H, T] { type Out = Out0 }

  implicit def sc1[H <: Digit, T <: Dense]: Aux[H, T, H :: T] =
    new SafeCons[H, T] {
      type Out = H :: T
      def apply(h: H, t: T) = h :: t
  }
}

object SafeCons extends LowPrioritySafeCons {
  implicit val sc0: Aux[Zero.type, DNil, DNil] =
    new SafeCons[Zero.type, DNil] {
      type Out = DNil
      def apply(h: Zero.type, t: DNil) = DNil
  }
}

trait ShiftLeft[N <: Dense] extends DepFn1[N] { type Out <: Dense }

object ShiftLeft {
  type Aux[N <: Dense, Out0 <: Dense] = ShiftLeft[N] { type Out = Out0 }

  implicit def sl1[T <: Dense](implicit sc: SafeCons[Zero.type, T]): Aux[T, sc.Out] =
    new ShiftLeft[T] {
      type Out = sc.Out
      def apply(n: T) = Zero safe_:: n
    }
}

by beefyhalo at August 03, 2015 02:43 PM

Is Scala too complex and inefficient compared to Java? [on hold]

I was particularly concerned when I read this letter about why a company chose to stop using Scala. It named a number of difficulties when using Scala, both for programmer productivity and performance, ultimately stating that it was better to just use plain Java. I'm new to Scala and I can't see any date on the gist, so I can't tell if, for example, some of the performance issues have been fixed since then. Otherwise, how much do people agree with the criticisms? Does the writer appear to have been using Scala poorly?

by Alex Hall at August 03, 2015 02:33 PM

Lobsters

StackOverflow

Opsagent UnsupportedOperationException with PersistentHashMap

I'm running Cassandra along with opscenter agent, and got the following error in the log when Opscenter tries to get general and CF metrics.

INFO [jmx-metrics-1] 2015-08-02 21:55:20,555 New JMX connection (127.0.0.1:7199)
INFO [jmx-metrics-1] 2015-08-02 21:55:20,558 New JMX connection (127.0.0.1:7199)
ERROR [jmx-metrics-2] 2015-08-02 21:55:25,448 Error getting CF metrics
java.lang.UnsupportedOperationException: nth not supported on this type: PersistentArrayMap
at clojure.lang.RT.nthFrom(RT.java:857)
at clojure.lang.RT.nth(RT.java:807)
at opsagent.rollup$process_metric_map.invoke(rollup.clj:252)
at opsagent.metrics.jmx$cf_metric_helper.invoke(jmx.clj:96)
at opsagent.metrics.jmx$start_pool$fn__15320.invoke(jmx.clj:159)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR [jmx-metrics-4] 2015-08-02 21:56:26,238 Error getting general metrics
java.lang.UnsupportedOperationException: nth not supported on this type: PersistentHashMap
at clojure.lang.RT.nthFrom(RT.java:857)
at clojure.lang.RT.nth(RT.java:807)
at opsagent.rollup$process_metric_map.invoke(rollup.clj:252)
at opsagent.metrics.jmx$generic_metric_helper.invoke(jmx.clj:73)
at opsagent.metrics.jmx$start_pool$fn__15334$fn__15335.invoke(jmx.clj:171)
at opsagent.metrics.jmx$start_pool$fn__15334.invoke(jmx.clj:170)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Any idea what the UnsupportedOperationException is about? I'm not familiar with Clojure, so I don't know a) why the error occurred and b) why nth isn't supported by PersistentArrayMap.

Please help, thanks.

by stackoverflower at August 03, 2015 02:29 PM

DragonFly BSD Digest

BSDNow 100: Straight from the Src

I managed to be on the road and so did not post about the milestone 100th episode of BSDNow, which has an interview with Sebastian Wiedenroth about both pkg and pkgSrcCon, along with all their other news.

I’m glad to see 100 episodes together of a video podcast for BSD; if you had asked me a few years ago if that was possible, I’d have dismissed the idea.  Not for lack of news, obviously, but because I didn’t think anyone would have that level of dedication.  Investing time and care is what sets people apart, and they’ve done it.

by Justin Sherrill at August 03, 2015 02:28 PM

QuantOverflow

Forward parity in fixed income

In stock and index we have a beautiful forward-spot parity $$ F(t,T) = S(t)\cdot B(t,T) \tag{1} $$ which tells us that to price a forward contract at time $t$ with expiry $T$ we can just borrow money using the bond $B$ and buy a stock now to deliver it at expiry. If the parity does not hold, given that all securities involved are very liquid, we can make free money by going short one leg and long another. One can even say that all risk-neutral/martingale pricing idea arises from an elaborate version of $(1)$.

I wonder whether similar relations exist in the Fixed Income world. For example, I was thinking of Eurodollar futures: if I short the futures, at expiry I'll lose if 3-month LIBOR goes higher than the initial forward price. Thus, to find an opposing leg as a hedge, I need to somehow gain from LIBOR going up. Intuitively, I should benefit from future upward movements of LIBOR if I borrow money at this rate. However, I am not sure how to translate this into a valid strategy. In general, I'd be interested in valuation techniques for Eurodollar futures.
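
For concreteness, the closest analogue I can write down myself (standard textbook material, so treat this as a sketch of what I am after rather than an answer) is the forward rate implied by zero-coupon bonds, writing $P(t,T)$ for the time-$t$ price of a zero-coupon bond maturing at $T$:

$$ 1 + \tau\,F(t;T,T+\tau) = \frac{P(t,T)}{P(t,T+\tau)} \tag{2} $$

which is enforced by lending to $T$ and borrowing to $T+\tau$, in the same spirit as $(1)$. What I cannot see is how to turn this into a concrete hedge for the Eurodollar futures position, especially since futures are margined daily, so a convexity correction enters.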

by Ulysses at August 03, 2015 02:26 PM

StackOverflow

Play json transformers map optional field

I have the following play framework 2.3 json transformer

val transAddress = (
  (__ \ 'address \ 'line1).json.copyFrom( (__ \ 'line1).json.pick ) and
  (__ \ 'address \ 'line2).json.copyFrom( (__ \ 'line2).json.pick ) and
  (__ \ 'address \ 'line3).json.copyFrom( (__ \ 'line3).json.pick ) and

  (__ \ 'address \ 'line4).json.copyFrom( (__ \ 'line4).json.pick ) and

  (__ \ 'address \ 'postcode).json.copyFrom( (__ \ 'postcode).json.pick ) reduce
)

So this:

{
    line1: "My Street",
    line2: "My Borough",
    line3: "My Town",
    line4: "My County"
}

Should transform to this:

{
    address: {
        line1: "My Street",
        line2: "My Borough",
        line3: "My Town",
        line4: "My County"
    }
}

My problem is that in the source json model, line4 is optional, so I only want to map it to address.line4 optionally as well. So:

{
    line1: "My Street",
    line2: "My Borough",
    line3: "My Town"
}

Should also transform to this:

{
    address: {
        line1: "My Street",
        line2: "My Borough",
        line3: "My Town"
    }
}

I have no idea how to do this with these transformers, and can find no similar problem after a lot of googling.
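
One approach I have seen suggested elsewhere (I have not verified it against 2.3, so treat it as a sketch) is to fall back to an empty object when the branch is missing, so that reduce simply merges nothing in for line4:

val transAddress = (
  (__ \ 'address \ 'line1).json.copyFrom( (__ \ 'line1).json.pick ) and
  (__ \ 'address \ 'line2).json.copyFrom( (__ \ 'line2).json.pick ) and
  (__ \ 'address \ 'line3).json.copyFrom( (__ \ 'line3).json.pick ) and
  // if line4 is absent, produce an empty JsObject instead of failing
  (__ \ 'address \ 'line4).json.copyFrom( (__ \ 'line4).json.pick )
    .orElse( Reads.pure(Json.obj()) ) and
  (__ \ 'address \ 'postcode).json.copyFrom( (__ \ 'postcode).json.pick ) reduce
)

Is that the idiomatic way to do it?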

Thanks! Nic

by nfvindaloo at August 03, 2015 02:26 PM

Lobsters

NeXTEVNT 2015: Doug Menuez, Peter Graffagnino, Don Melton

NeXTEVNT was a mini-conf that ran in parallel to WWDC this year, focusing on NeXT

Comments

by sevan at August 03, 2015 02:26 PM

StackOverflow

calling handler from other play in ansible

I've got a large application with a few services depending on the same config file (on different servers).

I configure config file in 'app-common' role and install services in app-servicename roles.

I want to restart a service if either of two conditions happens:

  1. Config file was changed in app-common role
  2. Package for the service was updated in app-servicename role

For now I am stuck at the 'task' level:

role 'app-common'

- name: Configure app
  template: app.conf.j2 dest=/etc/app.conf
  register: app_configured

role 'app-service_one'

- name: Install app service one
  apt: name=app-service1 state=latest
  register: app_updated

...and this is how I am doing it now:

- name: Restarting app service one
  service: name=app-service1 state=restarted
  when: app_updated|changed or app_configured|changed

It works, but it prints a lot of noise when nothing has changed ("skipped bla-bla-bla").

Handlers look like a good idea for this, but how can I call a handler in one role from another?

by George Shuklin at August 03, 2015 02:21 PM

Stubbing methods

I have an HTTP request generator and I would like to create unit tests that avoid performing the actual calls (mostly to validate the request structure).
My class is something like:

class Requestor {        
    def get (params : Map[String, String]) = {
        process("GET", params)
    }
    def post(params : Map[String, String]) = {
        process("POST", params)
    }
    private def process(s: String, p : Map[String, String]) = {
        val res = createRequestAndExec(s, p)
        doStuffToReposnse(res)
    }
    private def createRequestAndExec(s: String, p : Map[String, String]) = {
       // create apache HTTPBaseRequest
       ..
       // execute request using apache DefaultHttpClient
    }
}

Can I somehow stub this method? If I were to use MockFactory (if I understand it correctly) I should create a trait to be mocked, which would look like:

trait A {
  def post ..
  def get ...
  def createRequestAndExec ...
}

And mock A, but createRequestAndExec shouldn't be part of the public API...
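
One alternative I am considering (plain Scala, no mocking library, so just a sketch; Response and cannedResponse below are hypothetical placeholders) is to relax createRequestAndExec from private to protected and override it in a test-only subclass:

class Requestor {
    def get (params : Map[String, String]) = process("GET", params)
    def post(params : Map[String, String]) = process("POST", params)

    private def process(s: String, p : Map[String, String]) =
        doStuffToReposnse(createRequestAndExec(s, p))

    // was private; protected keeps it out of the public API but lets a test subclass stub it
    protected def createRequestAndExec(s: String, p : Map[String, String]): Response = {
        // create and execute the apache request as before
        ???
    }
    private def doStuffToReposnse(res: Response) = ???
}

// in a test: no HTTP call is made, and the request structure can be checked in place
val requestor = new Requestor {
    override protected def createRequestAndExec(s: String, p : Map[String, String]): Response = {
        assert(s == "GET")      // validate the generated request here
        cannedResponse          // hypothetical canned Response
    }
}

Would that be considered acceptable, or is a MockFactory trait still the cleaner way?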

by Oded Rosenberg at August 03, 2015 02:17 PM

Using Find function in scala

case class Meth(name: String, typ: Type, np : Int )
def lookupMethod(cls:String,mth:String,np:Int,list:List[MyClassType]):Option[Meth] =
{ .....
val findMeth = listMeth.find( a = > ( a.name == mth && a.np == np) )
.....
}

I have listMeth : List[Meth], and I want to find the method which has the name "mth" and "np" parameters. My code above doesn't work, so how can I fix it?
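
For comparison, here is a minimal standalone example of the kind of find call I would expect to compile (with Type replaced by String purely for illustration):

case class Meth(name: String, typ: String, np: Int)

val listMeth = List(Meth("foo", "Int", 2), Meth("bar", "Int", 1))

// first method whose name and parameter count both match
val findMeth: Option[Meth] = listMeth.find(a => a.name == "foo" && a.np == 2)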

by Dương Anh Khoa at August 03, 2015 02:16 PM

QuantOverflow

Unsmoothing of returns

The following problem arises in the context of private equity, which typically report "smoothed" returns (think of it as a moving average). As you can imagine, "smoothed" returns would have a much lower volatility compared to the volatility of "unsmoothed" returns. For risk calculation we are interested in volatility of "unsmoothed" returns.

Mathematically, suppose I observe a process $\bar{r}_t$ which is a moving average of a process $r_t$, i.e., $\bar{r}_t = \sum_{k=0}^p w_k r_{t-k}$. I also know that $r_t = \alpha + \beta r_{I, t} + \epsilon_t$, where $r_{I, t}$ are returns of a public index and $\epsilon_t \sim N(0, \sigma^2)$. I would like to estimate the "unsmoothed" returns $r_t, t = 0, \ldots, T$ from the data: $\bar{r}_t, r_{I,t}, t=0, 1, \ldots, T$.

Can somebody suggest how I should go about this estimation? If there is a reference to a similar problem, that would be fine too. Thanks.
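
To frame the question a bit: the only case I know how to handle is the simplest one, exponential smoothing with a single lag (the Geltner-style correction), so treat the following as a special case rather than a solution. If $\bar{r}_t = (1-\theta) r_t + \theta \bar{r}_{t-1}$, then

$$ r_t = \frac{\bar{r}_t - \theta \bar{r}_{t-1}}{1-\theta}, $$

where $\theta$ can be estimated from the first-order autocorrelation of $\bar{r}_t$. What I am missing is how to do this for a general weight vector $w_0,\ldots,w_p$ while also using the information in $r_{I,t}$.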

by vdesai at August 03, 2015 02:16 PM

Lobsters

StackOverflow

IntelliJ IDEA: use macros in presentation compiler

I am using macros to add synthetic companion objects (with apply method and other stuff) to annotated classes. Such as

@myTransform class Foo(i: Int)

that will output

object Foo {
  def apply(i: Int): Foo = new Foo(i)
}
class Foo(i: Int)

Now if I write Foo(1234) in the source code of a sub-project that depends on these macros, this is highlighted as an error by IntelliJ IDEA.

Is it possible to configure the presentation compiler of IntelliJ IDEA to respect these kind of macros and invoke them to operate on the properly transformed code, avoiding these highlighting errors?

by 0__ at August 03, 2015 02:08 PM

Lobsters

DataTau

StackOverflow

slick 3 auto-generated - default value (timestamp) column, how to define a Rep[Date] function

I have the following postgres column definition:

record_time TIMESTAMP WITHOUT TIME ZONE DEFAULT now()

How would I map it in Slick? Please take into account that I wish to keep the default value generated by the now() function.

i.e:

def recordTimestamp: Rep[Date] = column[Date]("record_time", ...???...)

Should any extra definition go where the ...???... is currently located?

EDIT (1)

I do not want to use

column[Date]("record_time", O.Default(new Date(System.currentTimeMillis()))) // or some such applicative generation of the date column value

by Yaneeve at August 03, 2015 01:55 PM

Read only attributes in graph database

I want to make a property read-only. When I am creating a Vertex in the DB I want to set the property value and not allow updates in the future. Are there any possible solutions on the DB side? Or do I have to do it in my Scala back-end? What is the best practice? Thanks a lot.

My back end solution:

// Schema

mgmt.makePropertyKey("guid").dataType(classOf[java.lang.String]).make()
mgmt.makePropertyKey("propFoo1").dataType(classOf[java.lang.Long]).make()
mgmt.makePropertyKey("propFoo2").dataType(classOf[java.lang.Long]).make()
mgmt.makePropertyKey("propFoo3").dataType(classOf[java.lang.Long]).make()
mgmt.makePropertyKey("propFoo4").dataType(classOf[java.lang.Long]).make()
mgmt.makePropertyKey("propFoo5").dataType(classOf[java.lang.Long]).make()

In the controller, for the update method:

// Map of non-changeable attributes

val vertexEntityOld = EntityController.findByGuid(newEntity.guid.toString())
newEntity.propFoo1  = oldEntity.propFoo1 
newEntity.propFoo2  = oldEntity.propFoo2 

by Eddy_Screamer at August 03, 2015 01:55 PM

/r/emacs

$PATH in eshell is different from compile buffer?

I'm using Prelude configurations with Emacs 24.5.

In eshell the $PATH is:

/Users/suyejun/.erlenv/shims:/Users/suyejun/.erlenv/bin:/Users/suyejun/nobackup/global-6.5/gtags:/Users/suyejun/nobackup/global-6.5/global:/usr/local/opt/coreutils/libexec/gnubin:/Users/suyejun/.rbenv/shims:bin:node_modules/.bin:/opt/boxen/nodenv/shims:/opt/boxen/nodenv/bin:/usr/local/bin:/usr/local/sbin:/opt/boxen/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Emacs.app/Contents/MacOS/bin-x86_64-10_9:/Applications/Emacs.app/Contents/MacOS/libexec-x86_64-10_9

But in compile buffer which I run "echo $PATH", it prints out:

/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/suyejun/.erlenv/shims:/Users/suyejun/.erlenv/bin:/Users/suyejun/nobackup/global-6.5/gtags:/Users/suyejun/nobackup/global-6.5/global:/usr/local/opt/coreutils/libexec/gnubin:/Users/suyejun/.rbenv/shims:bin:node_modules/.bin:/opt/boxen/nodenv/shims:/opt/boxen/nodenv/bin:/opt/boxen/bin:/Applications/Emacs.app/Contents/MacOS/bin-x86_64-10_9:/Applications/Emacs.app/Contents/MacOS/libexec-x86_64-10_9

You can see the compile buffer has another "/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:". This should not be here. Any suggestions? Thanks

Also, I don't know why there are two "/Applications/Emacs.app/Contents/MacOS/bin-x86_64-10_9". I didn't set that in $PATH.

submitted by goofansu
[link] [1 comment]

August 03, 2015 01:50 PM

/r/compsci

Calculating amount of data needed

Hi Reddit,

Not really sure where else to post this! I will be sending simple data to a web server, most probably just as a GET request which will be:

  • Latitude
  • Longitude
  • Speed
  • Height
  • deviceId

I could do this via SSH, I suppose, if it uses less data - but I can't see this being much.

So if I send this to myurl.com/parsedata.php?id=1&.... etc, how much data will each request require? The page will not load anything and it will do it over cURL or something similar.

So, typically, how much data will be in a single request?

Thanks.

submitted by iangar
[link] [2 comments]

August 03, 2015 01:48 PM

StackOverflow

Ansible synchronize asking for a password

I am using Ansible (1.9.2) to deploy some files to a Redhat 6.4 server.

The playbook looks something like this

- name: deploy files
  hosts: web
  tasks:
    - name: sync files
      sudo: no
      synchronize:
        src={{ local_path }}
        dest={{ dest_path }}

And to kick this off I run something like the following

ansible-playbook -i myinventory myplaybook.yml -u DOMAIN\\user --ask-pass

When I start the play I enter my password at the prompt, facts are then obtained successfully, however as soon as the synchronize task is reached another prompt asks for my password again, like the following

DOMAIN\user@hostname's password:

If I enter my password again the deploy completes correctly.

My questions are

  1. How can I fix or work around this, so that I do not have to enter my password for every use of the synchronize module?
  2. Is this currently expected behaviour for the synchronize module? Or is this a bug in Ansible?

I cannot use ssh keys due to environment restrictions.

I do not want to use the copy module for scalability reasons.

Things I have tried

  1. I have seen a number of other questions on this subject but I have not been able to use any of them to fix my issue or understand if this is expected behavior.
  2. The Ansible docs are generally excellent but I have not been able to find anything about this in the official docs.
  3. I have tried specifying the user and password in the inventory file and not using the --ask-pass and -u parameters. But while I then do not have to enter the password to collect facts, the synchronize module still requests my password.
  4. I have tried setting the --ask-sudo-pass as well, but it did not help
  5. I have been using a CentOS 7 control box, but I have also tried an Ubuntu 14.04 box

Can anyone help?

by pete.c at August 03, 2015 01:46 PM

/r/compsci

Planet FreeBSD

Lumina Desktop 0.8.6 Released!

Just in time for PC-BSD & FreeBSD 10.2 (coming soon), the Lumina desktop has been updated to version 0.8.6! This version contains a number of updates for non-English users (following up all the new translations which are now available), as well as a number of important bug-fixes, and support for an additional FreeDesktop specification. The PC-BSD “Edge” packages have already been updated to this version and the FreeBSD ports tree will be getting this update very soon as well.

In addition, the Lumina desktop now has its own website! While we are still working on cleaning up some of the visuals, all the information about Lumina (how to download/install it on various OS’s, a summary of the features, description of the project, screenshots, etc..) is all there and up-to-date. We are also working on a full handbook for Lumina (similar to the PC-BSD/FreeBSD handbooks) which can also be viewed directly from the website. Please check it out and let us know what you think!

 

Changes Since 0.8.5:

  1. Localizations
    • Add the ability to set system-locale overrides (used on login), allowing the user to “mix” locale settings for the various outputs.
    • Add the ability for the user to switch the locale of the current session on the fly (all locale settings changed for the current session only), and these settings will be used when launching any applications later.
    • Fix up the translation mechanisms of the Lumina interface, so everything will instantly get re-translated to the new locale.
    • More languages are now fully translated! Make sure to install the x11/lumina-i18n port or pkg to install the localizations and enable all these new features!
  2. Add support for the “Actions” extension to the XDG Desktop specifications.
    • This allows applications to set a number of various “actions” (alternate startup routines) within their XDG desktop registration file.
    • These actions are shown within Lumina as new sub-menus within the Applications menu as well as in the User button (look for the down arrow next to the application icon).
  3. Change the Lumina OSD to a different widget – allowing it to be shown much faster.
  4. Add new “_ifexists” functionality to any session options in luminaDesktop.conf. This allows the distributor to more easily setup default applications (browser, email, etc..) through an intelligent tree of options (which may or may not be installed).
  5. Bug Fixes
    • Apply a work-around for new users which fixes a bug in Fluxbox where the virtual desktop windows could still be changed/closed by various Fluxbox keyboard shortcuts. If an existing user wants to apply this fix, you need to replace your ~/.lumina/fluxbox-keys file with the new Lumina default (/usr/local/share/Lumina-DE/fluxbox-keys) – which will overwrite any custom keyboard shortcuts you had previously setup.
    • Fix some bugs in the new window detection/adjustment routines – fixing up issues with full-screen apps that change around the X session settings to suit their own temporary needs.
    • Fix a couple bugs with the automatic detection/load routines for the new QtQuick plugins.
    • Add in the “Ctrl-X” keyboard shortcut for cutting items in the Insight file manager.
    • Fix up the active re-loading of icons when the user changes the icon theme.

 

by Ken Moore at August 03, 2015 01:35 PM

StackOverflow

Reducing with a bloom filter

I would like to get a fast approximate set membership, based on a String-valued function applied to a large Spark RDD of String Vectors (~1B records). Basically the idea would be to reduce into a Bloom filter. This bloom filter could then be broadcasted to the workers for further use.

More specifically, I currently have

rdd: RDD[Vector[String]]
f: Vector[String] => String
val uniqueVals = rdd.map(f).distinct().collect()
val uv = sc.broadcast(uniqueVals)

But uniqueVals is too large to be practical, and I would like to replace it with something of smaller (and known) size, i.e. a bloom filter.

My questions:

  • is it possible to reduce into a Bloom filter, or do I have to collect first, and then construct it in the driver?

  • is there a mature Scala/Java Bloom filter implementation available that would be suitable for this?
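
For the first question, the shape I have in mind is an aggregate (fold each partition into a local filter, then merge the partial filters), assuming some serializable Bloom filter with add and merge operations - the BloomFilter API below is hypothetical, only the aggregation pattern matters:

// hypothetical Bloom filter: empty / add / merge are assumed, not a real library API
val zero = BloomFilter.empty(expectedItems = 1000000000L, falsePositiveRate = 0.01)

val bf = rdd.map(f).aggregate(zero)(
  (acc, s) => acc.add(s),   // fold each string into the partition-local filter
  (a, b)   => a.merge(b)    // OR the per-partition filters together on the driver
)

val bfBroadcast = sc.broadcast(bf)

Is that a sensible pattern, or is there something better suited in an existing library?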

by mitchus at August 03, 2015 01:31 PM

Problem with Tail Recursion in g++

I was messing around with tail-recursive functions in C++, and I've run into a bit of a snag with the g++ compiler.

The following code results in a stack overflow when numbers[] is over a couple hundred integers in size. Examining the assembly code generated by g++ for the following reveals that twoSum_Helper is executing a recursive call instruction to itself.

The question is which of the following is causing this?

  • A mistake in the following that I am overlooking which prevents tail-recursion.
  • A mistake with my usage of g++.
  • A flaw in the detection of tail-recursive functions within the g++ compiler.

I am compiling with g++ -O3 -Wall -fno-stack-protector test.c on Windows Vista x64 via MinGW with g++ 4.5.0.

struct result
{
    int i;
    int j;
    bool found;
};

struct result gen_Result(int i, int j, bool found)
{
    struct result r;
    r.i = i;
    r.j = j;
    r.found = found;
    return r;
}

// Return 2 indexes from numbers that sum up to target.
struct result twoSum_Helper(int numbers[], int size, int target, int i, int j)
{
    if (numbers[i] + numbers[j] == target)
        return gen_Result(i, j, true);
    if (i >= (size - 1))
        return gen_Result(i, j, false);
    if (j >= size)
        return twoSum_Helper(numbers, size, target, i + 1, i + 2);
    else
        return twoSum_Helper(numbers, size, target, i, j + 1);
}

by Swiss at August 03, 2015 01:26 PM

Dave Winer

What I loved about Twitter

This came up in a thread on Facebook.

There are a lot of things that could be done to shake up Twitter and provide users with some fresh functionality to explore. Because that's what I think we all loved about Twitter, the chance to do new things. I love the network, the combination of people, software, ideas and data. Twitter got stagnant. That's the real problem. Almost any change that opened up new functionality for people to explore that allowed them to connect with other people in new interesting and meaningful ways would rekindle the spark that Twitter used to be.

Then I read a piece asking what's wrong with the web. Same thing, same problem. I want to do interesting things with smart people. That's what I loved about the web.

August 03, 2015 01:24 PM

/r/compsci

StackOverflow

SPARK : failure: ``union'' expected but `(' found

I have a dataframe called df with a column named employee_id. I am doing:

 df.registerTempTable("d_f")
val query = """SELECT *, ROW_NUMBER() OVER (ORDER BY employee_id) row_number FROM d_f"""
val result = Spark.getSqlContext().sql(query)

But I am getting the following issue. Any help?

[1.29] failure: ``union'' expected but `(' found
SELECT *, ROW_NUMBER() OVER (ORDER BY employee_id) row_number FROM d_f
                            ^
java.lang.RuntimeException: [1.29] failure: ``union'' expected but `(' found
SELECT *, ROW_NUMBER() OVER (ORDER BY employee_id) row_number FROM d_f
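
For reference, the next thing I plan to try (as far as I understand, the plain SQLContext parser in Spark 1.x does not accept window functions such as ROW_NUMBER() OVER (...), so this is only a guess) is running the same query through a HiveContext:

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc) // needs spark-hive on the classpath
df.registerTempTable("d_f")
val result = hiveContext.sql(
  "SELECT *, ROW_NUMBER() OVER (ORDER BY employee_id) row_number FROM d_f")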

by user1735076 at August 03, 2015 01:11 PM

QuantOverflow

Influencing factors on credit

There was the following question on an exam:

Which factors are influencing the effective interest rate of a credit?

  • loan amount
  • fees
  • interest rate
  • running time

I would have said all of them have an influence on the effective interest rate, but my professor said that the loan amount does not have any influence.

Is he right?

by Martin at August 03, 2015 01:11 PM

Greeks of a swaption using Brigo

I am struggling with calculating the delta of a swaption. In the interest rate case I usually mess around with the multiple cash flows over time, so the discounting is more complex than in the equity case.

Let me first introduce some notation. We denote with $D(0,T)$ the discounting factor with maturity $T$, $P(0,T)$ the price of a zero coupon bond with maturity $T$ and let $Q$ denote the risk neutral measure.

By simple risk neutral valuation we know:

$$D(0,0)V_0 = V_0 = E_Q[V_TD(0,T)|\mathcal{F}_t]$$

Now we are interested in a swaption, where the expiry of the option is at $T_\alpha$ and the underlying swap has tenor $T_\beta$. The discounted value of the swaption can be written as

$$D(t,T_\alpha)(S_{\alpha,\beta}(T_\alpha)-K)^+\sum_{i=\alpha + 1}^\beta\tau_iP(T_\alpha,T_i)$$

where $\tau_i$ is the daycount convention between $T_{i-1}$ and $T_i$.

Now, regarding valuation using the above two equations:

$$ V_0 = E_Q[D(0,T_\alpha)(S_{\alpha,\beta}(T_\alpha)-K)^+\sum_{i=\alpha + 1}^\beta\tau_iP(T_\alpha,T_i)|\mathcal{F}_0]$$

using a smart change of numeraire to the swap measure $S$, i.e. the numeraire given by $\sum_{i=\alpha + 1}^\beta\tau_iP(t,T_i)$, yields

$$ V_0 = E_Q[D(0,T_\alpha)(S_{\alpha,\beta}(T_\alpha)-K)^+\sum_{i=\alpha + 1}^\beta\tau_iP(T_\alpha,T_i)|\mathcal{F}_0]=\sum_{i=\alpha + 1}^\beta\tau_iP(0,T_i)E_S[(S_{\alpha,\beta}(T_{\alpha})-K)^+|\mathcal{F}_0]$$

We know that under the measure $S$, the forward swap rate $S_{\alpha,\beta}(t)$ is a martingale. For the price we could now simply apply Black's formula, if we assume that the forward swap rate is lognormally distributed.

Now my question: if I apply the usual calculation for the delta I get $\sum_{i=\alpha + 1}^\beta\tau_iP(0,T_i) N(d_1)$, where $d_1$ is the expression from the Black-76 formula. However, this term $\sum_{i=\alpha + 1}^\beta\tau_iP(0,T_i)$ annoys me: I get completely wrong results. If I use just $N(d_1)$ I get a reasonable result. So my question: is the delta given by $N(d_1)$ for a swaption as well? If so, where is my mistake?

For simplicity I add an example with concrete numbers.

Example: We take a swaption with expiry $5$ years and an underlying tenor of $5$ years, with $S_{\alpha,\beta}(0) = 0.0271$, $\sigma = 0.34$, $r = 0.011$, $T=5$, $K = 0.028$ and annuity $A=4.92$. Using Black-76 we should get for $\Delta$:

$$\Delta = A\cdot N(d_1),$$ where

$$d_1 = \frac{\log{\frac{S_{\alpha,\beta}(0)}{K}}+\frac{\sigma^2\cdot T}{2}}{\sigma\cdot\sqrt{T}}$$

Here I get the values $N(d_1) = 0.332296$ and $\Delta = 1.634896$, which doesn't make sense.

by user8 at August 03, 2015 01:11 PM

StackOverflow

Ansible synchronise module says --out-format is unknown option

I have a simple Ansible playbook with which I want to rsync folders from the target machines onto my Ansible host.

---
- hosts: testServers
  sudo: yes
  gather_facts: yes
  tasks:
  - synchronize: mode=pull src=/home/prod/live-tpb/log/ dest=/root/playbooks/backup_live_folders/logs/{{ ansible_hostname }}

But when I run this playbook, it errors out saying:

rsync: --out-format=<>%i %n%L: unknown option

The full error generated using the -vvvv option is as below.

failed: [192.168.101.174 -> 127.0.0.1] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh  -S none -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"ansible@192.168.101.174:/home/prod/live-tpb/log/\" \"/root/playbooks/backup_live_folders/logs/serverC6174\"", "failed": true, "rc": 1}
msg: rsync: --out-format=<<CHANGED>>%i %n%L: unknown option
rsync error: syntax or usage error (code 1) at main.c(1231) [client=2.6.8]

When I run the command directly it also fails:

[root@server11 backup_live_folders]# rsync --delay-updates -FF --compress --archive --rsh 'ssh  -S none -o StrictHostKeyChecking=no' --rsync-path="sudo rsync" --out-format='<<CHANGED>>%i %n%L'  ansible@192.168.101.174:/home/prod/live-tpb/logs/ /root/playbooks/backup_live_folders/logs/serverC6174
rsync: --out-format=<<CHANGED>>%i %n%L: unknown option
rsync error: syntax or usage error (code 1) at main.c(1231) [client=2.6.8]

This looks like a problem with rsync on 14.04. My host machine is RHEL 5.4 and the target machine is Ubuntu 14.04.

How do I disable the --out-format option from Ansible? Has anyone else faced a similar problem? If yes, how did you work around this issue?

Thanks

by Mukul Jain at August 03, 2015 01:08 PM

Planet Theory

TR15-124 | Noncommutative Valiant&#39;s Classes: Structure and Complete Problems | Vikraman Arvind, Pushkar Joglekar, Raja S

In this paper we explore the noncommutative analogues, $\mathrm{VP}_{nc}$ and $\mathrm{VNP}_{nc}$, of Valiant's algebraic complexity classes and show some striking connections to classical formal language theory. Our main results are the following: (1) We show that Dyck polynomials (defined from the Dyck languages of formal language theory) are complete for the class $\mathrm{VNP}_{nc}$ under $\le_{abp}$ reductions. Likewise, it turns out that $\mathrm{PAL}$ (Palindrome polynomials defined from palindromes) are complete for the class $\mathrm{VSKEW}_{nc}$ (defined by polynomial-size skew circuits) under $\le_{abp}$ reductions. The proof of these results is by suitably adapting the classical Chomsky-Sch\"{u}tzenberger theorem showing that Dyck languages are the hardest CFLs. (2) Next, we consider the class $\mathrm{VNP}_{nc}$. It is known~\cite{HWY10a} that, assuming the sum-of-squares conjecture, the noncommutative polynomial $\sum_{w\in\{x_0,x_1\}^n}ww$ requires exponential size circuits. We unconditionally show that $\sum_{w\in\{x_0,x_1\}^n}ww$ is not $\mathrm{VNP}_{nc}$-complete under the projection reducibility. As a consequence, assuming the sum-of-squares conjecture, we exhibit a strictly infinite hierarchy of p-families under projections inside $\mathrm{VNP}_{nc}$ (analogous to Ladner's theorem~\cite{Ladner75}). In the final section we discuss some new $\mathrm{VNP}_{nc}$-complete problems under $\le_{abp}$-reductions. (3) Inside $\mathrm{VNP}_{nc}$ too we show there is a strict hierarchy of p-families (based on the nesting depth of Dyck polynomials) under the $\le_{abp}$ reducibility.

August 03, 2015 01:03 PM

TheoryOverflow

Is there a stand-alone statistical ZK argument with concurrent knowledge extraction?

Is there any known construction for an interactive argument of knowledge that

  • is stand-alone statistical zero-knowledge, and
  • allows concurrent knowledge extraction?

This is a weakening of my previous question from 6 months ago, and is something that would presumably be easier to construct.

by Ricky Demer at August 03, 2015 01:02 PM

Lamer News

Lobsters

StackOverflow

I can't map more than 18 fields in form with model [duplicate]

This question already has an answer here:

In my Play Scala project I have more than 18 fields in my form. I could insert the form values (from an input type submit) into the DB when there were fewer than 18 fields. If I add more than 18 fields to the form mapping it shows an error like the one below:

missing arguments for method unapply in object EmployeeRegister; follow this method with `_' if you want to treat it as a partially applied function

controllers/Employees.scala

val employeeForm = Form(
    mapping(
      "employeeid" -> ignored(None: Option[Int]),
      "employeename" -> nonEmptyText,
      "employeedesc" -> nonEmptyText,     
    .....
    .....      
      "languages" -> list(number)   
      )(models.EmployeeRegister.apply)(models.EmployeeRegister.unapply))

models/Employee.scala

case class EmployeeRegister(
  employeeid: Option[Int] = None,
  employeename: String,
  employeedesc: String,
  .....
  ......
  languages: List[Int]

  )

What I tried

I used the nested values concept for adding more than 18 fields. It didn't show any error, but form submission didn't work: it forwarded to the same page via this error handling code.

employeeForm.bindFromRequest.fold(             

            formWithErrors => BadRequest(html.employeeRegisterForm(formWithErrors)),
               employee => {
               ....
               ...
               }

nested values

controller/Employees.scala

 val employeeForm = Form(
        mapping(
          "employeeid" -> ignored(None: Option[Int]),
          "employeename" -> nonEmptyText,
          "employeedesc" -> nonEmptyText,     
        .....
        ..... 
       "os" -> mapping(
          "operatingsystem" -> text,   
            "osversion" -> text, 
            "osbit" -> text      
        )(models.OS.apply)(models.OS.unapply),
          "languages" -> list(number)   
          )(models.EmployeeRegister.apply)(models.EmployeeRegister.unapply))

models/employee.scala

 case class EmployeeRegister(
      employeeid: Option[Int] = None,
      employeename: String,
      employeedesc: String,
      .....
      ......
      os:OS,
      languages: List[Int]

      )

      case class OS(
       operatingsystem: String,   
      osversion: String,
      osbit:String  
    )

by Jamal at August 03, 2015 12:46 PM

Kafka - Delayed Queue implementation using high level consumer

I want to implement a delayed consumer using the high level consumer API.

main idea:

  • produce messages by key (each msg contains creation timestamp) this makes sure that each partition has ordered messages by produced time.
  • auto.commit.enable=false (will explicitly commit after each message process)
  • consume a message
  • check message timestamp and check if enough time has passed
  • process message (this operation will never fail)
  • commit 1 offset

    while (it.hasNext()) {
      val msg = it.next().message()
      //checks timestamp in msg to see delay period exceeded
      while (!delayedPeriodPassed(msg)) { 
         waitSomeTime() //Thread.sleep or something....
      }
      //certain that the msg was delayed and can now be handled
      Try { process(msg) } //the msg process will never fail the consumer
      consumer.commitOffsets //commit each msg
    }
    

some concerns about this implementation:

  1. commit each offset might slow ZK down
  2. can consumer.commitOffsets throw an exception? if yes i will consume the same message twice (can solve with idempotent messages)
  3. problem waiting long time without committing the offset, for example delay period is 24 hours, will get next from iterator, sleep for 24 hours, process and commit (ZK session timeout ?)
  4. how can the ZK session be kept alive without committing new offsets? (setting a high zookeeper.session.timeout.ms can result in a dead consumer without it being recognised)
  5. any other problems I'm missing?

Thanks!

by Nimrod007 at August 03, 2015 12:32 PM

Why classes that don't extends other classes must extend from traits? (with doesn't work)

I'm starting with Scala and I found this a little weird. In Java I could do something like this:

interface Foo{}

public class Bar implements Foo{}

I'm trying to do something similar with Scala, but it doesn't work:

trait Foo;
class Bar with Foo; // This doesn't work!

I have to use the "extends" keyword:

class Bar extends Foo; // This works OK!

Now, that's fine, but it's not what I wanted.

Another weird thing I noted is that, given every class in Scala extends from AnyRef (see this image from scala-lang.org: http://www.scala-lang.org/sites/default/files/images/classhierarchy.png), I can do this:

class Bar extends AnyRef with Foo; // This works too!

So, what am I missing? Doesn't it make sense to use a trait without extending it?

Thank you!

by santiagobasulto at August 03, 2015 12:20 PM

Planet Emacsen

Chen Bin (redguardtoo): Use which-func-mode with js2-mode

Two years ago, in my article "Why Emacs is better editor - a case study for javascript developer", I proved that Emacs is a better JavaScript editor because its js2-mode can parse the tags in the opened file more intelligently.

Since then, js2-mode keeps improving and gets better every day. I'm absolutely satisfied with it except for one minor issue.

For performance reasons, js2-mode places the imenu results into a cache which is updated by a timer.

`(which-function)` from which-func-mode takes advantage of the imenu results. So the result of `(which-function)` may be incorrect if the cache has not yet been updated by the timer.

The solution is to re-define a `my-which-function`:

(defun my-which-function ()
  ;; clean the imenu cache
  ;; @see http://stackoverflow.com/questions/13426564/how-to-force-a-rescan-in-imenu-by-a-function
  (setq imenu--index-alist nil)
  (which-function))

The new API is very useful when I use yasnippet to insert logging code into a JavaScript file.

Here is a sample (log-which-func.yasnippet):

# -*- coding: utf-8; mode: snippet -*-
# name: console.log which function called
# key: lwf
# --
console.log('${1:`(my-which-function)`} called');

by Chen Bin at August 03, 2015 12:08 PM

Fefe

In Hong Kong, breasts now count as passive weaponry. ...

In Hong Kong, breasts now count as passive weaponry. A poor, innocent cop was physically assaulted by a woman - by her attacking him with her breasts.

From the woman's point of view, the guy groped her, but we all know who gets believed when a police officer's statement stands against a civilian's.

August 03, 2015 12:01 PM

StackOverflow

0MQ telnet data C++

I'm trying to get sending telnet commands with 0MQ working, in C++ on VS2013.

I used the Hello World client sample code from the ZMQ homepage.

But what I see in Wireshark is a telnet packet with no data inside.

This code is a prototype; what I need is just to be able to send this command.

Once it works, it will get some cleaning up.

//
//  Hello World client in C++
//  Connects REQ socket to tcp://localhost:5555
//  Sends "Hello" to server, expects "World" back
//
#include <zmq.hpp>
#include <zmq.h>
#include <string>
#include <iostream>

int main()
{
    //  Prepare our context and socket
    zmq::context_t context(1);
    zmq::socket_t socket(context, ZMQ_REQ);

    std::cout << "Connecting to hello world server…" << std::endl;
    socket.connect("tcp://10.40.6.226:23");

    //  Do 10 requests, waiting each time for a response
    for (int request_nbr = 0; request_nbr != 1; request_nbr++) {
        zmq::message_t request(2);
        memcpy(request.data(), "Hello", 5);
        std::cout << "Sending Hello " << request_nbr << "…" << std::endl;
        socket.send(request);
        //client_socket

        //  Get the reply.
        /*zmq::message_t reply;
        socket.recv(&reply);
        std::cout << "Received World " << request_nbr << std::endl;*/
    }
    return 0;
}

So everything looks good, besides the fact that I cannot see the string "Hello" in the telnet packet.

Original sample http://zguide.zeromq.org/cpp:hwclient

by DrMacak at August 03, 2015 11:57 AM

QuantOverflow

Significant price difference while calculating call option price using Monte Carlo approach?

I am trying to calculate the price of a European call option using the Monte Carlo approach. I coded the algorithm in C++ and Python. As far as I know the implementation is correct and as N (the number of trials) gets bigger, the price should converge to a similar value in both programs.

My problem is that as N gets bigger, say just from 1000 to 10000 trials, the prices converge to two different values. In C++ the price converges towards the value of 3.30 while with Python it converges towards 3.70.

I think that a gap of 0.40 is too wide; I should get more similar results. Why is this gap so big? What did I do wrong? I cannot seem to find my mistake.

Here is the code I used:

Python

import numpy as np
import matplotlib.pyplot as plt


def stoc_walk(p,dr,vol,periods):
    w = np.random.normal(0,1,size=periods)
    for i in range(periods):
        p += dr*p + w[i]*vol*p
    return p

s0 = 10;
drift = 0.001502
volatility = 0.026
r = 0.02
days = 255
N = 10000
zero_trials = 0

k=12
payoffs = []

for i in range(N):
    temp = stoc_walk(s0,drift,volatility,days)
    if temp > k:
        payoff = temp-k
        payoffs.append(payoff*np.exp(-r))
    else:
        payoffs.append(0)
        zero_trials += 1

payoffs = np.array(payoffs)
avg = payoffs.mean()

print("MONTE CARLO PLAIN VANILLA CALL OPTION PRICING")
print("Option price: ",avg)
print("Initial price: ",s0)
print("Strike price: ",k)
print("Daily expected drift: ",drift)
print("Daily expected volatility: ",volatility)
print("Total trials: ",N)
print("Zero trials: ",zero_trials)
print("Percentage of total trials: ",zero_trials/N)

C++

//Call option Monte Carlo evaluation;

#include <iostream>
#include <random>
#include <math.h>
#include <chrono>

using namespace std;

/*  double stoc_walk: returns simulated price after periods

    p = price at t=t0
    dr = drift
    vol = volatility
    periods (days)
*/
double stoc_walk(double p,double dr,double vol,int periods)
{
    double mean = 0.0;
    double stdv = 1.0;

    /* initialize random seed: */
    int seed = rand() %1000 + 1;
    //unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
    std::default_random_engine generator(seed);
    std::normal_distribution<double> distribution(mean,stdv);

    for(int i=0; i < periods; i++)
    {
        double w = distribution(generator);
        p += dr*p + w*vol*p;
    }
    return p;
}

int main()
{
    //Initialize variables
    double s0 = 10;             //Initial price
    double drift = 0.001502;    //daily drift
    double volatility = 0.026;  //volatility (daily)
    double r = 0.02;            //Risk free yearly rate
    int days = 255;             //Days
    int N = 10000;              //Number of Monte Carlo trials
    double zero_trials = 0;

    double k = 12;               //Strike price
    int temp = 0;                //Temporary variable
    double payoffs[N];           //Payoff vector
    double payoff = 0;

    srand (time(NULL));         //Initialize random number generator

    //Calculate N payoffs
    for(int j=0; j < N; j++)
    {
        temp = stoc_walk(s0,drift,volatility,days);
        if(temp > k)
        {
            payoff = temp - k;
            payoffs[j] = payoff * exp(-r);
        }
        else
        {
            payoffs[j] = 0;
            zero_trials += 1;
        }
    }

    //Average the results
    double sum_ = 0;
    double avg_ = 0;
    for(int i=0; i<N; i++)
    {
        sum_ += payoffs[i];
    }
    avg_ = sum_/N;

    //Print results
    cout << "MONTE CARLO PLAIN VANILLA CALL OPTION PRICING" << endl;
    cout << "Option price: " << avg_ << endl;
    cout << "Initial price: " << s0 << endl;
    cout << "Strike price: " << k << endl;
    cout << "Daily expected drift: " << drift*100 << "%" << endl;
    cout << "Daily volatility: " << volatility*100 << "%" << endl;
    cout << "Total trials: " << N << endl;
    cout << "Zero trials: " << zero_trials << endl;
    cout << "Percentage of total trials: " << zero_trials/N*100 << "%";

    return 0;
}

by mickkk at August 03, 2015 11:56 AM

StackOverflow

How to deserialize json without index with json4s

Using json4s, what is the best practice to deserialize the JSON below into Scala case classes (dropping the index keys)?

some.json
{
  "1": {
    "id": 1,
    "year": 2014
  },
  "2": {
    "id": 2,
    "year": 2015
  },
  "3": {
    "id": 3,
    "year": 2016
  }
}

some case class: case class Foo(id: Int, year: Int)
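
One idea I am considering (untested sketch) is to extract the outer object as a Map and simply drop the keys:

import org.json4s._
import org.json4s.jackson.JsonMethods._

implicit val formats: Formats = DefaultFormats

case class Foo(id: Int, year: Int)

// json is the string shown in some.json above
val foos: List[Foo] = parse(json).extract[Map[String, Foo]].values.toList

Is that the usual way, or is there something more idiomatic, e.g. with a custom serializer?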

by sh0hei at August 03, 2015 11:53 AM

TheoryOverflow

Will a non-linear lower bound on some NP complete problem prove non-linear lower bound on 3SAT?

A problem $\Pi$ is $\mathsf{NP}$ complete if there is a polynomial time reduction from an $\mathsf{NP}$ complete problem $\Pi^\circ$ to $\Pi$ with a polynomial blow-up in the number of variables and the instance size.

What are some examples where the polynomial blow-up in the number of variables involved is a large-degree polynomial (correspondingly giving a large-degree polynomial time reduction)?

The reason I am asking is this: suppose someone proves a non-linear lower bound on some NP complete problem; is there a direct way to infer that there is a non-linear lower bound for 3SAT by tracing back reductions?

Related: Natural candidate against the Isomorphism Conjecture?

by Turbo at August 03, 2015 11:52 AM

StackOverflow

How to handle if/else in functional JavaScript transform of Immutable objects for debugging

var data = {
  type: 'TEST',
  active: true,
  letters: Immutable.Map({ 'a': true, 'b': false })
};

var dataToLog = _.object(
  _.map(data, function(v, k) {
    if (Immutable.Iterable.isIterable(v)) {
      return ['*Immutable* - ' + k, v.toJS()];
    } else {
      return [k, v];
    }
  })
);

console.log('Output: ', dataToLog);

"Output: "
[object Object] {
  *Immutable* - letters: [object Object] {
    a: true,
    b: false
  },
  active: true,
  type: "TEST"
}

I have this in a JSBin here.

Using lodash and some of the transform methods to manipulate Immutable objects flowing through my Flux Dispatcher to JSON for easier debugging and also indicate by prepending 'Immutable' to the key's name in the output. It works well, but I'm learning about using _.map, _.compose, _.curry, and wonder if this could be made even more functional.

Specifically, I'm wondering how to handle this if/else that's in my function sent to _.map:

if (Immutable.Iterable.isIterable(v)) {
  return ['*Immutable* - ' + k, v.toJS()];
} else {
  return [k, v];
}

I don't understand how I can make this more functional, other than splitting the check for Immutable and the return into separate functions: one that checks for Immutable values of the object and another that transforms them to JSON (making the call to Immutable's .toJS()).

I'm also struggling with how to get the desired output while operating on this as an object, instead of having my map function return an array of the key and value.

To clarify, the goals for the overall transformation:

  • Iterate over an object with key, value
  • If value is NOT an Immutable object, return key and value unaltered
  • If value IS an Immutable object, modify key to be prefixed with 'Immutable' and call .toJS() on value and return those transformations

by Kevin Old at August 03, 2015 11:42 AM

/r/clojure

Access-Control-Allow-Origin problem in liberator

I am using reagent with ajax to call a RESTful service in liberator, but it errors out with "No 'Access-Control-Allow-Origin' header is present on the requested resource." How do I resolve this issue?

submitted by wqhhust
[link] [1 comment]

August 03, 2015 11:42 AM

Lobsters

StackOverflow

Is there any way to clear console output while using scala interpreter? [duplicate]

This question already has an answer here:

I am using the Scala interpreter in the console; is there a way to clear the console?

by Fahad at August 03, 2015 11:07 AM

Fefe

Not only does nobody want to have been responsible anymore in ...

Not only does nobody want to have been responsible anymore in the treason affair, no, the matter is SO OBVIOUS that even Merkel is distancing herself from the Federal Prosecutor General. MERKEL! As always, she waited until the mud avalanche had buried the houses in the valley, until all the people were definitely dead, until there was definitely nothing left to do, and then announced the obvious: that this is possibly not treason at all when journalists are doing their job.

You know you have definitely lost when even Merkel notices.

August 03, 2015 11:01 AM

Lobsters

What are you working on this week?

This is the weekly thread to discuss what you have done recently and are working on this week.

Please be descriptive and don’t hesitate to ask for help, advice or other guidance.

by Widdershin at August 03, 2015 10:44 AM

/r/netsec

StackOverflow

how to write unit test case in scala

def PagTable(pastConf: configure, keyvalue: String): OurProperty = {   
    val table1 = pastConf.getString("%s.table_no1".format(keyvalue))
    val table2 = pastConf.getString("table_no2")
    val value = pastConf.getInt("value-range")

    val loadProps = OurProperty(table1,table2, value)
    logger.info("Our properties - %s".format(loadProps.toString))
    loadProps
}

How do I write the unit test for the above method in Scala? I'm using sbt to compile and run. Assume the file structure is: src/main/scala/foo.scala, src/test/scala/ (empty).
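
To make this concrete, the kind of minimal ScalaTest skeleton I have in mind (assuming scalatest is added as a test dependency in build.sbt; the stub for configure is left as a placeholder because I am not sure how to construct one in a test) is:

// build.sbt: libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.4" % "test"

// src/test/scala/FooSpec.scala
import org.scalatest.{FlatSpec, Matchers}

class FooSpec extends FlatSpec with Matchers {
  "PagTable" should "copy the config values into OurProperty" in {
    val conf: configure = ???   // TODO: stub or build a test instance of configure
    // expected values depend on what the stubbed config returns
    PagTable(conf, "some-key") shouldBe OurProperty("table1", "table2", 1)
  }
}

Then `sbt test` should pick it up. Is stubbing the configure object the right approach here?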

by spk at August 03, 2015 10:34 AM

IllegalArgumentException The bucketName parameter must be specified. com.amazonaws.services.s3.AmazonS3Client.rejectNull

Running a Clojure jar on an AWS EMR cluster using (hfs-textline) and getting:

IllegalArgumentException The bucketName parameter must be specified.  com.amazonaws.services.s3.AmazonS3Client.rejectNull`.

by yonatan at August 03, 2015 10:31 AM

TheoryOverflow

Quantum computing paradigms other than QFT and Grover's search

In most of the standard resources related to quantum computing and quantum information it is stated that the QFT and Grover's algorithm are the two major paradigms/techniques used in the development of quantum algorithms. Some of these resources are somewhat old, considering the pace at which the field grows. So have any new techniques been developed that are independent of these two? Or are these two still the only 'game-changing' techniques that we know about?

by biryani at August 03, 2015 10:26 AM

UnixOverflow

Cron service running but not visible by "top"

I'm on my FreeBSD server. Cron suddenly stopped working. When I run "top" the service is not listed. But when I check "service cron status", it is reported as running with a given PID. I restarted the service and even the server but the problem still persists. How can I troubleshoot this?

by hellilyntax at August 03, 2015 10:22 AM

/r/netsec

Fred Wilson

The Other Benefit Of Fundraising

The reason people go out and fundraise is they need capital for their business. I would not recommend doing it for any other reason. It’s hard and time consuming work and can be extremely frustrating.

But there is another benefit of fundraising. You get feedback on your business from people who see a lot of businesses like yours every day.

The feedback you get from any one investor can be horrible and you need to learn to ignore off base feedback from idiot investors. And you will find that on the fundraising trail.

However the aggregate feedback you get from a diverse collection of investors, ideally dozens or more, can be super helpful.

So what I suggest to entrepreneurs is to use some sort of note taking system, paper or electronic, and write down the hard questions you got and the points of feedback you received after each meeting. The sooner you do it after the meetings the better.

Then start to sort them into a list of “issues” that you are hearing about your business. And the ones you hear the most are the ones to focus on.

These are not just sales hurdles to overcome in your financing (although they are that too), these are the things that make your business less attractive to investors and they are things you need to address in your business.

These issues could be about your team, your product, your competition, your market, your go to market strategy, your business model, etc, etc.

The point I am making is that fundraising is a bit like the customer development process. You are showing your business to the market and it is critical to listen to what the market is telling you as no business is perfect and investors will take the time to tell you what is wrong with yours, often right in the meeting.

So treat your fundraising process as two things. First and foremost, it is about getting the capital you need to operate and grow your business. But it is also a fact finding mission about the things you need to address to make your business better. Don’t forget to do the second thing because it is a fantastic opportunity to improve your business for the long haul.

by Fred Wilson at August 03, 2015 10:21 AM

TheoryOverflow

Problems Between P and NPC

Factoring and graph isomorphism are problems in NP that are not known to be in P nor to be NP-Complete. What are some other (sufficiently different) natural problems that share this property? Artificial examples coming directly from the proof of Ladner's theorem do not count.

Are any of these example provably NP-intermediate, assuming only some "reasonable" hypothesis?

by Lev Reyzin at August 03, 2015 10:17 AM

Planet Clojure

Senior Software Engineer, goCatch, Sydney, Australia

Dear Will,

goCatch is looking for a senior software engineer to work full time on
its core product, which is a real time tracking and dispatch system for
taxi services. Our offices are located in Sydney, Australia. We will not
consider telecommuting, however we may be able to assist a promising
candidate to relocate.

We are a small team who prize themselves on writing quality, readable,
maintainable code, and are looking for like-minded individuals.

The successful candidate will have knowledge of

* Clojure [or a Scheme or Common Lisp, and a willingness to learn]
* willingness to work closely with collaborators, including occasional pair-programming
* cloud hosted systems; deployment on AWS

And any of the following additional skills would be considered an asset:
* programming in a functional style
* knowledge of the Clojure ecosystem
* RabbitMQ or another message broker
* SCALA
* REST architecture
* IOS development
* Android/Java development
* modern front end web technologies (Angular, React, etc.)

Applicants are encouraged to send us a resume with a cover letter explaining their development philosophy and describing their ideal working environment.
Please forward offers of interest to Alain Picard (alain@gocatch.com)

goCatch is an equal opportunity employer.

================================================================

Sydney, Australia


by Will Fitzgerald at August 03, 2015 10:15 AM

StackOverflow

Spark 1.4 - filtering one DataFrame basing on the other DataFrame

Suppose I've got two tables - people_worldwide and people_Europe. Both tables have the same person_id. I want to filter out all the entries in people_worldwide that match the people_Europe table. However, I don't want to do any join on the tables because that would be too heavy.

Is there a way to use this filter method on people_worldwide to eliminate all entries that do not exist in people_Europe table?
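
One sketch of an approach (my own suggestion, not from the question): if the set of European person_ids is small enough to collect to the driver, broadcast it and filter the big table's rows against it. This assumes person_id is a string; the result is an RDD[Row] rather than a DataFrame.

val europeanIds = people_Europe
  .select("person_id")
  .rdd
  .map(_.getAs[String]("person_id"))
  .collect()
  .toSet

// Ship the small set to every executor once.
val idsBroadcast = sc.broadcast(europeanIds)

// Keep only worldwide rows whose person_id appears in the European set.
val matching = people_worldwide.rdd.filter { row =>
  idsBroadcast.value.contains(row.getAs[String]("person_id"))
}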

by Niemand at August 03, 2015 10:11 AM

More than one socket stops all sockets from working

I have a few computers on a network and I'm trying to coordinate work between them by broadcasting instructions and receiving replies from individual workers. When I use zmq to assign a single socket to each program it works fine, but when I try to assign another, none of them work. For example, the master program runs on one machine. With the code as such it works fine as a publisher, but when I uncomment the commented lines neither socket works. I've seen example code extremely similar to this so I believe it should work, but I must be missing something.

Here's some example code, first with the master program and then the worker program. The idea is to control the worker programs from the master based on input from the workers to the master.

import zmq
import time
import sys

def master():
    word = sys.argv[1]
    numWord = sys.argv[2]
    port1 = int(sys.argv[3])
    port2 = int(sys.argv[4])
    context = zmq.Context() 
    publisher = context.socket(zmq.PUB)
    publisher.bind("tcp://*:%s" % port1)

    #receiver = context.socket(zmq.REP)
    #receiver.bind("tcp://*:%s" % port2)

    for i in range(int(numWord)):
        print str(i)+": "+word
        print "Publishing 1"
        publisher.send("READY_FOR_NEXT_WORD")
        print "Publishing 2"
        publisher.send(word)
        #print "Published.  Waiting for REQ"
        #word = receiver.recv()
        #receiver.send("Master IRO")
        time.sleep(1)
        print "Received: "+word
    publisher.send("EXIT_NOW")


master()

Ditto for the workers:

import zmq
import random
import zipfile
import sys

def worker(workerID, fileFirst, fileLast):
    print "Worker "+ str(workerID) + " started"
    port1 = int(sys.argv[4])
    port2 = int(sys.argv[5])

    # Socket to talk to server
    context = zmq.Context()

    #pusher = context.socket(zmq.REQ)
    #pusher.connect("tcp://10.122.102.45:%s" % port2)

    receiver = context.socket(zmq.SUB)
    receiver.connect ("tcp://10.122.102.45:%s" % port1)
    receiver.setsockopt(zmq.SUBSCRIBE, '')




    found = False
    done = False
    while True:
        print "Ready to receive"
        word = receiver.recv()
        print "Received order: "+word
        #pusher.send("Worker #"+str(workerID)+" IRO "+ word)
        #pusher.recv()
        #print "Confirmed receipt"



worker(sys.argv[1], sys.argv[2], sys.argv[3])

by user3636152 at August 03, 2015 10:07 AM

Lobsters

Planet Clojure

Clojure Gazette 135: Transients with Core Typed

Clojure Gazette -- Issue 135 - August 03, 2015

Transients with Core Typed
Read this email on the web
Clojure Gazette
Issue 135 - August 03, 2015

Hi!

I was fortunate enough to interview Akhilesh Srikanth who has been working on a Google Summer of Code project. His project is to be able to safely type transients.

Please enjoy the issue!

Rock on!
Eric Normand <eric@lispcast.com>

PS Want to be more attractive? Subscribe!
PPS Want to advertise to smart, talented, attractive Clojure devs?


Homegrown Labs

Sponsor: Homegrown Labs

Unit tests exercise the individual parts of your system. And acceptance tests make sure the business requirements are met. If you're worried about how much failure in production will cost in your complex, distributed, and unpredictable system, Homegrown Labs can help build your confidence. Homegrown Labs specializes in Simulation Testing, which tests your system as a whole by simulating a realistic workload. Please support the Gazette by visiting Homegrown Labs and signing up to learn more about Simulation Testing at the bottom of the page.


LispCast: How did you get into Clojure?

Akhilesh Srikanth: I've been aiming to become a good polyglot programmer for a while now. I've been experimenting with a bunch of new functional programming languages out there (Scala, Idris and Erlang), and I thought getting better at Clojure was the next step for me. I've been doing Scala for over a year now, I was in fact a GSoC student for Scala last year, and coming from a statically-typed background, I wanted to experiment with the idea of optional typing and develop a better understanding of type systems in the process and working on Typed Clojure, in my opinion, was the logical step forward.

LC: Can you describe your project?

AS: To summarize, Typed Clojure (core.typed) currently doesn't support type checking for Clojure's transient data structures and its functions (assoc!, conj!, etc). My GSoC project aims to add support to type check these Clojure constructs. Transients in Clojure provide for optimizations in functional data structures by allowing local mutation within code fragments that need to be efficient, resulting in lesser memory allocation and faster execution times. Now because of this property where they behave like mutable collections, we want to ensure that transients aren't used once they are updated (otherwise we could lookup values at different times and get different results). Using a type system to statically guarantee this property is definitely a promising way to tackle this problem. I'm currently working on encoding the type system with the idea of Uniqueness types which solves the above mentioned problem. This involves me doing a lot of Clojure while learning a whole bunch of new ideas regarding type systems, which I thoroughly enjoy. :)

LC: What are the main challenges?

AS: Well, to start off, understanding the behavior of transients and how they integrate into the type system because of the property that they aren't persistent collections was initially tricky. Transients are actually immutable collections which aren't persistent and creating a new version should invalidate the old one, which is where the idea of linear/uniqueness types fits in. Turns out incorporating this idea into core.typed involves a lot of corner cases which are quite tricky to handle, but should hopefully work out at the end.

LC: Typed Clojure is getting some really powerful features. How has it been working inside the type system?

AS: Yes, Typed Clojure has (is getting) some really interesting features.

It had support for generics and higher-kinded types for a while now which provide some powerful abstractions while designing large applications. It also supports occurrence typing, which allows the type system to ascribe precise types to terms based on control flow predicate checks, which I think is really cool.

In terms of future directions, here are some features that Typed Clojure users can expect:

  1. Support for gradual typing, which preserves static invariants of well-typed code outside of the compile-time sandbox of the type checker in statically typed languages. To get a better idea of what I'm talking about, do refer to this excellent blog post by Ambrose Bonnaire Sergeant, my mentor for this project.

  2. Support for refinement types, which allows users to generate new types that aren't combinations of existing types which satisfy some predicate, which allows for writing more expressive types while writing code.

  3. Enhanced support for dependent types, where types are predicated on values. This is a very interesting idea to ensure correctness in code, wherein program properties that we care about can be encoded within types and if those properties of interest are violated, the type system tells us at compile time that we're wrong.

  4. Support for uniqueness types (which I'm currently working on), which provides for statically enforced memory management, and provides for safe memory usage without the intervention of a garbage collector.

Apart from this, core.typed has recently got a REPL, which allows for writing typed code within the REPL. I definitely see an interesting road ahead for Typed Clojure!

LC: How can people contribute to your work? Where can they follow its progress?

AS: I've been meaning to write up a few blog posts to help people new to Typed Clojure get started using it and for developers to get started contributing, but for now, I think the core.typed wiki is probably the best place to go to for anyone wishing to get started.

You can follow the project's progress in my fork of core.typed.

LC: How can people follow you online, if they want to?

AS: I'm @spicytango on Twitter, that's probably the best place to follow me.

LC: What does Clojure eat for breakfast?

AS: Interesting...probably a bunch of parentheses mixed with some awesome sauce from Rich Hickey's kitchen!

LC: Thanks for the interview!



by Clojure Gazette at August 03, 2015 10:00 AM

UnixOverflow

Bulk remove a large directory on a ZFS without traversing it recursively

I want to remove a directory that has large amounts of data in it. This is my backup array, which is a ZFS filesystem, linear span, single pool. The data is in a pool called san, mounted on /san, so I want to bulk remove /san/thispc/certainFolder.

$ du -h -d 1 certainFolder/
1.2T    certainFolder/

Rather than having to wait for rm -rf certainFolder/, can't I just destroy the handle to that directory so that its space becomes overwritable (even reusing the same dir name if I choose to recreate it)?

For example, I don't know much about ZFS filesystem internals, specifically how it maps directories, but if I found that map and removed the right entries from it, the directory would no longer display, and the space the directory formerly held would also have to be removed from some kind of accounting.

Is there an easy way to do this, even on an ext3 fs? Or is that already what the recursive remove command has to do in the first place, i.e. pilfer through and edit journals?

I'm just hoping to do something along the lines of kill thisDir, where it simply removes some kind of ID and, poof, the directory no longer shows up in ls -la. The data is obviously still there on the drive, but the space will now be reused (overwritten), because it's ZFS.

Ideally?

I mean i think zfs is really that cool, how can we do it? rubbing hands together :-)

EDIT: My specific use case (besides my love for ZFS) is management of my backup archive. This backup dir is pushed to via FreeFileSync (AWESOME PROG) on my Windows box to an smb file share, and it also has a version directory where old files go. I'm deleting top-level directories that reside in the main backup and were copied to the version directory, e.g. backup/someStuff, version/someStuff, in a bi-monthly cleanup of rm -rf version/someStuff/* from a putty terminal. Now I have to open another terminal; I don't want to do that every time, and I'm tired of uselessly having to monitor rm -rf. I mean, maybe I should set the command to just release the handle and then print to stdout, that might be nice. More realistically, recreate the dataset in a few seconds: zfs destroy san/version; zfs create -p -o compression=on san/version, after the thoughts in the response from @Gilles.

by Brian Thomas at August 03, 2015 09:54 AM

StackOverflow

Retrieve typed stored values from Map

I'd like to put some data into a HashMap and retrieve these as typed values using a function. The function takes the expected type and also a default value in case the value is not stored in the HashMap. Type erasure of the JVM makes this a tricky thing.

Q: How can I retrieve a typed value?

Code and results below.

abstract class Parameters(val name: String) {
  val parameters = new HashMap[String, Any]()

  def put(key: String, value: Any) = parameters(key) = value

  def get(key: String) = parameters.getOrElse(key, None)

  def remove(key: String) = parameters.remove(key)


  def g0[T: TypeTag](key: String, defaultValue: T) = {
    get(key) match {
      case x: T => x
      case None => defaultValue
      case _ => defaultValue
    }
  }


  def g1[T: ClassTag](key: String, defaultValue: T) = {
    val compareClass = implicitly[ClassTag[T]].runtimeClass

    get(key) match {
      case None => defaultValue
      case x if compareClass.isInstance(x) => x.asInstanceOf[T]
    }
  }
}

class P extends Parameters("AParmList") {
  put("1", 1)
  put("3", "three")
  put("4", 4.0)
  put("width", 600)

  println(g0[Int]("width", -1))
  println(g0[Int]("fail", -2))
  println(g1[Int]("width", -3))
  println(g1[Int]("fail", -4))
}


object TypeMatching {
  def main(args: Array[String]) {
    new P
  }
}

The output is (comments in parentheses): 600 (as expected), None (expected -2), and a match error (java.lang.Integer stored, Int required).
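
For what it's worth, a sketch of a variant (my own suggestion, not from the question) that avoids both failure modes: the implicit ClassTag turns the `value: T` pattern into a real runtime check and handles boxing, so an Int stored as a java.lang.Integer in the Map[String, Any] still matches, and the catch-all case covers missing keys.

import scala.reflect.ClassTag

def getOrDefault[T](key: String, defaultValue: T)(implicit ct: ClassTag[T]): T =
  parameters.get(key) match {
    case Some(value: T) => value        // runtime-checked via the ClassTag
    case _              => defaultValue // missing key or wrong type
  }

The usual erasure caveat still applies: getOrDefault[List[Int]] can only check that the value is a List, not what it contains.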

by osx at August 03, 2015 09:52 AM

Apache Spark using regex to filter input files

I have attempted to filter out dates for specific files using Apache Spark inside the file-to-RDD function sc.textFile().

I have attempted to do the following:

sc.textFile("/user/Orders/201507(2[7-9]{1}|3[0-1]{1})*")

This should match the following:

/user/Orders/201507270010033.gz
/user/Orders/201507300060052.gz

Any idea how to achieve this?
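
As far as I know, sc.textFile paths go through Hadoop glob expansion rather than full regular expressions, so a sketch of the equivalent using glob character ranges and sc.textFile's support for a comma-separated list of paths (assuming the paths shown above):

// Hadoop glob, not regex: [7-9] and [0-1] are character ranges.
val orders = sc.textFile("/user/Orders/2015072[7-9]*,/user/Orders/2015073[0-1]*")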

by eboni at August 03, 2015 09:49 AM

How to perform basic statistics on a csv file to explore my numeric and non-numeric variables with Spark Scala?

I've imported a csv file which is like this :

MR; IT;  UPI; CAE; IIL;                ED;   NS;                  DATE; DUIOD;NBOPP;
30;  0; null;   2;   0; bgpel:10PT-MIP   ; null; 2013-05-20 21:03:00.0;   300;null;
20;  0; null;   4;   1; bzrgfel:125TZ-ATR; null; 2013-04-01 19:50:02.0;   302;null; 
10;  2; null;   2;   0; bhtuyel:105MF-AXI; null; 2013-04-26 17:12:00.0;   298;null;

I'm new on Spark and I want to perform basic statistics like

  • getting the min, max, mean, median and std of numeric variables
  • getting the values frequencies for non-numeric variables.

My questions are :

  • With what type of object is it better to work and how to import my csv into that type (RDD, DataFrame, ...)?
  • How to do those basic statistics easily ?

I've tried with an RDD, but there is probably a better way to do it:

val csv=sc.textFile("myFile.csv");  
val summary: MultivariateStatisticalSummary = Statistics.colStats(csv)

I get an error like:

error: type mismatch;
found   : org.apache.spark.rdd.RDD[String]
required: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]

by SparkUser at August 03, 2015 09:49 AM

QuantOverflow

Exercise 2.2 from the book "The concept and practice of Mathematical Finance"

I am a newbie. Please help me understand how to solve exercise 2.2 from the book "The concept and practice of Mathematical Finance". The solution from the book says that our super-replicating portfolio will be $\alpha$ shares and $\beta$ bonds. It must dominate at zero. This implies that $\beta \geq 0$. First of all, what does "it must dominate at zero" mean? Secondly, why does $\beta \geq 0$ follow if it dominates at zero? Thanks so much for your help!

Problem

enter image description here

Solution

enter image description here enter image description here

by dullboy at August 03, 2015 09:43 AM

Likelihood of a caplet ending in the money

With what likelihood would one expect an ATM caplet to end up in the money? Just as a very rough guess, from real-world experience.

When I consider N(d2) from the Black formula, for spot = strike = 4%, vola = 20%, T = 1, tenor = 12m, I get something around 46.*%.
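
(For reference, a quick check under the standard Black-76 setup, which is my addition rather than the question's: with $F = K$ the probability of finishing in the money under the pricing measure is $N(d_2) = N\left(\frac{\ln(F/K) - \frac{1}{2}\sigma^2 T}{\sigma\sqrt{T}}\right) = N\left(-\tfrac{1}{2}\sigma\sqrt{T}\right) = N(-0.10) \approx 0.46$ for $\sigma = 0.20$, $T = 1$, so the 46% figure is consistent. Note that this is a probability under the pricing measure, not the physical one, which is exactly the gap the question asks about.)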

On the other hand, when I think about it qualitatively: Under most market conditions, the forward rate curve is increasing. As a very, very rough argument, we would expect the spot rate 1 year into the future to be similar to today's spot rate rather than today's 1 year forward rate, i.e. to be lower than today's ATM strike. Thus, under the physical measure, the caplet should be less likely to end up in the money than end up out of the money. Does this roughly conform with a number around 46%?

For my thesis, I am using a simulation model that tries to create synthetic realizations of market data (in a complicated procedure that would go too far to explain here). In this synthetic world, I get ITM-likelihoods much lower than the above and I am wondering about real world estimates from a practitioner's point of view.

by ettlich at August 03, 2015 09:42 AM

StackOverflow

Scheme: Procedures that return another inner procedure

This is from the SICP book that I am sure many of you are familiar with. This is an early example in the book, but I feel it is an extremely important concept that I am just not able to get my head around yet. Here it is:

(define (cons x y)
 (define (dispatch m)
   (cond ((= m 0) x)
         ((= m 1) y)
         (else (error "Argument not 0 or 1 - CONS" m))))
 dispatch)
(define (car z) (z 0))
(define (cdr z) (z 1))

So here I understand that car and cdr are being defined within the scope of cons, and I get that they map some argument z to 1 and 0 respectively (argument z being some cons). But say I call (cons 3 4)...how are the arguments 3 and 4 evaluated, when we immediately go into this inner-procedure dispatch which takes some argument m that we have not specified yet? And, maybe more importantly, what is the point of returning 'dispatch? I don't really get that part at all. Any help is appreciated, thanks!
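
(Not part of the question: nothing is overridden here; dispatch is a locally defined procedure that closes over x and y, and cons returns that procedure itself, so 3 and 4 are captured when (cons 3 4) is evaluated and looked up again when the returned procedure is later applied to 0 or 1. The same idea sketched in Scala, used here only because most code in this digest is Scala:)

// cons returns a closure; the "pair" is the returned function, which remembers x and y.
def cons(x: Int, y: Int): Int => Int =
  m => if (m == 0) x
       else if (m == 1) y
       else sys.error("Argument not 0 or 1 - CONS: " + m)

def car(z: Int => Int): Int = z(0)
def cdr(z: Int => Int): Int = z(1)

val p = cons(3, 4)
car(p)   // 3
cdr(p)   // 4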

by Houdini at August 03, 2015 09:32 AM

Missing become password in ansible playbook

I am trying to create playbook for deploy with simple scenario: login to server and clone/update open github repo. All access parameters written in ~/.ssh/config

Here are my files:

  1. hosts

    [staging]

    staging

  2. deploy.yml

    - hosts: staging
      tasks:
        - name: Update code
          git: repo=https://github.com/travis-ci-examples/php.git dest=hello_ansible
    

    When I try to run ansible-playbook -s deploy.yml -i hosts, it outputs an error like this:

GATHERING FACTS *************************************************************** fatal: [staging] => Missing become password

TASK: [Update code] *********************************************************** FATAL: no hosts matched or all hosts have already failed -- aborting

I have tried to add sudo: False and become: False, but it does not seem to have any effect. I assume this operation should not request a sudo password, as I am trying to work with files in the ssh user's home directory.

I am sorry if my question is a bit lame, but I do not have much experience with Ansible.

by user2665732 at August 03, 2015 09:31 AM

Error 413 in websocket, how to handle?

I have been developing a websocket server and recently encountered a 413 Entity Too Large error. I am using ratchetphp on the server. Has anyone encountered this? Is there anything I can do so that this won't occur? If there isn't, please help me recreate this error; the current fix I have is clearing my browser's cache. I only found this error by checking the network tab in Chrome's developer tools, but if clients connect to the server without knowledge of developer tools, how can I let them know that they need to clear their cache?

by Dominick Navarro at August 03, 2015 09:16 AM

CompsciOverflow

Why can't a programming language be both fully recursive and polymorphic

In my theory of computation class last spring my professor said in passing that a programming language cannot be both fully recursive and polymorphic. I didn't think much of it till now. What does it mean to be "fully" polymorphic, and why does that mean a language can't be "fully" recursive?

by Michael Chav at August 03, 2015 09:11 AM

StackOverflow

How does Storm handle with Garbage Collection?

How does Storm handle garbage collection? And what is the reason for its fast performance? Is it because of the Disruptor pattern, or is there something else I am missing?

by Humoyun at August 03, 2015 09:08 AM

What is the Java equivalent to Python's reduce function?

Similar questions have been asked, here and here, but given the advent of Java 8, and the generally outdated nature of these questions I'm wondering if now there'd be something at least kindred to it?

This is what I'm referring to.

by Legato at August 03, 2015 08:44 AM

TheoryOverflow

What evidence is there that Graph Isomorphism is not in $P$?

Motivated by Fortnow's comment on my post, Evidence that Graph Isomorphism problem is not $NP$-complete, and by the fact that $GI$ is a prime candidate for $NP$-intermediate problem (not $NP$-complete nor in $P$), I am interested in known evidences that $GI$ is not in $P$.

One such piece of evidence is the $NP$-completeness of a restricted Graph Automorphism problem (the fixed-point-free graph automorphism problem is $NP$-complete). This problem and other generalizations of $GI$ were studied in "Some NP-complete problems similar to Graph Isomorphism" by Lubiw. Some may argue as evidence the fact that despite more than 45 years no one has found a polynomial-time algorithm for $GI$.

What other evidence do we have to believe that $GI$ is not in $P$?

by Mohammad Al-Turkistany at August 03, 2015 08:37 AM

StackOverflow

errors in build.sbt with intellij for a brand new scala+spray project

I downloaded a spray template from and created an intellij project at this directory, but after the project creation, I saw that the build.sbt file encounters some errors, as shown here : http://i59.tinypic.com/264kvmg.png. I refreshed the project, having checked the auto-import option, I restarted intellij but the errors are still there, for example when I put the mouse pointer at the red "at" I get an error "cannot resolve the symbol at". when putting the pointer at the last line, I get the error : "Expression type (Seq[Any]) must conform to Settings[_] in SBT file".

I believe the project works when launched from the command line, but I would prefer that IntelliJ not display any errors.

by lolveley at August 03, 2015 08:34 AM

parallel execution of tasks at a time in all hosts in ansible

I want to run tasks in parallel on 30 hosts in Ansible at the same time, but for me the playbook executes one host after another, which is not what I need; the tasks appear to run serially. I want the tasks to be executed on all hosts at once, yet while running my playbook the tasks complete one after another.

So please give a suggestion for running tasks in parallel on all hosts at the same time.

by manianandkumar at August 03, 2015 08:31 AM

StackOverflow

add new error fields in play form validation

I create a form with a mapping like this one:

  val form = Form(
mapping(
  "card_number" -> text.verifying("Invalid visa/mastercard credit card number", creditCardValidator.isValid(_)),
  "card_exp_month" -> text(2, 2).verifying("Invalid month value", months.contains(_)),
  "card_exp_year" -> text(4, 4).verifying("Invalid year value", notPastYear(_)),
  "card_cvv" -> text(3, 4).verifying("Numeric value expected", mustDigit(_))
)(TokenRequest.apply)(TokenRequest.unapply).
  verifying("Invalid card expired month and year", notPast(_))

)

This is a sample JSON error returned from form.errorAsJson:

{
  "code": 52,
  "message": "BadRequest",
  "errors": {
    "card_exp_month": [
      "Invalid month value"
    ]
  }
}

but when the error is "Invalid card expired month and year", I get this JSON:

{
  "code": 52,
  "message": "BadRequest",
  "errors": {
    "": [
      "Invalid card expired month and year"
    ]
  }
}

It returns an empty string "" as the key in the errors object.

How can I add a custom key in the verifying() method so that I get an error JSON like this:

"key" : [
   "error message 1",
   "error message 2"
]
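
One workaround I'm aware of (my own suggestion, not from the question) is to keep the cross-field check out of verifying and attach the error to a named key after binding, using Form.withError; the key "card_exp" below is an arbitrary example:

// Sketch: bind first, then add a keyed error for the month/year cross-field check.
val bound = form.bindFromRequest()   // needs an implicit Request in scope
val checked =
  if (bound.hasErrors) bound
  else bound.value match {
    case Some(req) if !notPast(req) =>
      bound.withError("card_exp", "Invalid card expired month and year")
    case _ => bound
  }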

by Eko Kurniawan Khannedy at August 03, 2015 08:19 AM

Why do i get a ClassNotFoundException on running a simple scala program on IntelliJ 14+?

I'm unable to figure out what is wrong with this program. I'm using an older version of Scala (2.7) because it's compatible with certain libraries I'm using.

Here is a simple program I'm attempting to run.

The program runs fine using the scalac and scala commands.

However, on IntelliJ 14+, when I create a new project, select the compiler (Scala 2.7) and try to run the program below, I get the error that follows it:

object SimpleClass {
  def main(args: Array[String]) {
    println("This is a simple Class")
  }
}

Error output.

Why does intelliJ throw the ClassNotFoundException? I've saved the program as SimpleClass.scala

/usr/lib/jvm/java-7-openjdk-i386/bin/java -Didea.launcher.port=7532 -Didea.launcher.bin.path=/home/tejesh/Downloads/idea-IC-141.1532.4/bin -Dfile.encoding=UTF-8 -classpath /usr/lib/jvm/java-7-openjdk-i386/jre/lib/javazic.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/management-agent.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/resources.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/rhino.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/charsets.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/jce.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/jsse.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/rt.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/compilefontconfig.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/ext/java-atk-wrapper.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/java-7-openjdk-i386/jre/lib/ext/icedtea-sound.jar:/usr/local/share/scala-2.7.3/lib/scala-swing.jar:/usr/local/share/scala-2.7.3/lib/scala-library.jar:/home/tejesh/Downloads/idea-IC-141.1532.4/lib/idea_rt.jar com.intellij.rt.execution.application.AppMain SimpleClass
Exception in thread "main" java.lang.ClassNotFoundException: SimpleClass
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:191)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:122)

I've added the folder containing SimpleClass under sources.

enter image description here

enter image description here This is the full program.

by wolfgang at August 03, 2015 08:18 AM

ansible - tranferring files and changing ownership

I'm new to Ansible. The following is my requirement,

  1. Transfer files(.tar.gz) from one host to many machines (38+ Nodes) under /tmp as user1
  2. Log in to each machines as user2 and switch to root user using sudo su - (With Password)
  3. extract it to another directory (/opt/monitor)
  4. Change a configuration in the file (/opt/monitor/etc/config -> host= )
  5. Start the process under /opt/monitor/init.d

For this, should I use playbooks or ad-hoc commands? I'd be happy to use ad-hoc mode in Ansible, as I'm wary of playbooks.

Thanks in advance

by Lambo-Fan at August 03, 2015 08:16 AM

QuantOverflow

Fed Funds Rate: longer maturities

FFR published by Fed Bank of NY is the average rate US banks charge each other for the overnight loans of their reserves required by the Fed regulations. Since Fed acts similar to a clearing house here, I guess there is little credit risk involved. For that reason we may as well expect LIBOR rate for the very same maturity to be a bit higher. In fact, currently overnight USD LIBOR is 12 bps whereas FFR is 13-14 bps, so that does not quite hold.

Nevertheless, I'm looking into longer-maturity LIBOR rates, and of course they feature much higher rates. For example, 2-month is 25 bps and 3-month is 30 bps. I wonder which part of that comes from credit risk, and which comes from a potential rate hike before that maturity. Unfortunately there is no 2- or 3-month FFR available, so I wondered whether there's any proxy for that: it would help estimate the rate-hike component in LIBOR rates. From the other direction, I thought of estimating the credit-risk component from LIBOR rates in other currencies (the less dependent they are on US rates the better, of course), and hence getting the rate-hike effect left over. Any suggestions?

by Ulysses at August 03, 2015 07:53 AM

StackOverflow

scala macros how to get tree of specific class

Imagine I have a macro annotation that annotates a case class:

class message(`type`: String) extends StaticAnnotation {
    def macroTransform(annottees: Any*) = macro message.impl
}

...

@message("SearchReq")
case class SearchReq(req: String)

I have a MessageRegister object that is located in another package. In the annotation body message.impl of @message I need to add the message type to the register.

I have no idea how to do that. The first thing that came to mind is to get the tree of the MessageRegister object and add code into its body that executes at runtime. The next idea is that the @message annotation somehow executes at runtime and I simply call MessageRegister.registerMessage(msg).

How can I solve this problem?

by Daryl at August 03, 2015 07:50 AM

Why should I avoid using local modifiable variables in Scala?

I'm pretty new to Scala and most of the time before I've used Java. Right now I have warnings all over my code saying that I should "Avoid mutable local variables" and I have a simple question - why?

Suppose I have small problem - determine max int out of four. My first approach was:

def max4(a: Int, b: Int,c: Int, d: Int): Int = {
  var subMax1 = a
  if (b > a) subMax1 = b

  var subMax2 = c
  if (d > c) subMax2 = d

  if (subMax1 > subMax2) subMax1
  else subMax2
}

After taking into account this warning message I found another solution:

def max4(a: Int, b: Int,c: Int, d: Int): Int = {
  max(max(a, b), max(c, d))
}

def max(a: Int, b: Int): Int = {
  if (a > b) a
  else b
}

It looks prettier, but what is the ideology behind this?

Whenever I approach a problem I think about it like: "OK, we start from this and then we incrementally change things and get the answer". I understand that the problem is that I try to change some initial state to get an answer, but I do not understand why changing things, at least locally, is bad. How do you iterate over a collection then in functional languages like Scala?

For example: suppose we have a list of ints; how do I write a function that returns the sublist of ints which are divisible by 6? I can't think of a solution without a local mutable variable.
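
(For that last example, a sketch of how it is usually written without any mutable variable:)

// Keep only the elements divisible by 6; filter builds the new list.
def divisibleBy6(xs: List[Int]): List[Int] = xs.filter(_ % 6 == 0)

divisibleBy6(List(3, 6, 7, 12, 18, 20))   // List(6, 12, 18)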

by Oleksii Duzhyi at August 03, 2015 07:50 AM

Scala macro modify object

I have the following macro annotation:

 class Foo(obj: String) extends StaticAnnotation {
    def macroTransform(annottees: Any*) = macro MacroImpl.impl
 }

 object MacroImpl {
    def impl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
      import c.universe._

      // i want find `obj` and modify body
    } 

 }

 // usage
 @Foo("pkg.myObject") class SomeClass {}

Is it possible with a macro to find an object by name and modify the body of that object?

by mike at August 03, 2015 07:46 AM

/r/scala

I want to write tweets into a file

object StatusStreamer {
  def main(args: Array[String]) {
    val TwitterFactory = new TwitterFactory(twitterOath.config).getInstance

    val fq = new FilterQuery()
    val twitterStream = new TwitterStreamFactory(twitterOath.config).getInstance
    twitterStream.addListener(twitterOath.simpleStatusListener)
    twitterStream
    val keyword = Array("scala")
    fq.track(keyword)
    val k = twitterStream.filter(fq)
  }
}

This is my main function; I want to save the tweets into a file.

submitted by isaacamer

August 03, 2015 07:40 AM

StackOverflow

removing `new` in favor of companion object apply makes my code much more complex, how to overcome?

In a large codebase, it's very convenient for me that objects are created with new. This means I can search the codebase for new Something and know when it is being created. (I use the GitHub interface, so I can't use any smart IDE "show call hierarchy" feature.) However, after beginning to use Scala I removed the new in favor of the companion object apply method. Now, I cannot stress enough how important it is for me to understand exactly when objects are required and thus created, but the use of the companion object apply removes my ability to do a simple text search for all these creations. Am I missing something big, or am I doomed once everyone starts using the companion object's apply method for object creation?

(I know DI says we should move the new into factories, but I'm talking mainly about plain DTOs; DI says it's OK to "new" DTOs. In any case, I have this problem once I stop using new.)

by Jas at August 03, 2015 07:08 AM

"Picked up JAVA_TOOL_OPTIONS: -javaagent:/usr/share/java/jayatanaag.jar" when starting the Scala interpreter

When running the Scala interpreter in Ubuntu 14.04, I get the following message printed as the first line:

Picked up JAVA_TOOL_OPTIONS: -javaagent:/usr/share/java/jayatanaag.jar 

Followed by the familiar "Welcome to Scala" message.

I'm worried because I haven't seen that when running Scala before - what does it mean, is it dangerous, etc?

Apparently the environment variable $JAVA_TOOL_OPTIONS is set to -javaagent:/usr/share/java/jayatanaag.jar - I didn't set that, but what did and why? Can I safely unset it?

Additional info:

  • recently installed Android Studio
  • The word "ayatana" in the JAR's name might point to Ubuntu's project Ayatana

by jco at August 03, 2015 07:07 AM

TheoryOverflow

How does one determine if a mixed bipartite quantum state is entangled or not?

My question is based on the structure of the NP-hardness proof in section 6 (page 17) of this paper, http://arxiv.org/pdf/quant-ph/0303055v1.pdf


Mathematically one can think of being given a positive semi-definite linear map $\rho : \mathbb{C}^n \otimes \mathbb{C}^m \rightarrow \mathbb{C}^n \otimes \mathbb{C}^m$ such that its trace is $1$ and one wants to determine if there exists some $k$ (column) vectors $x_i \in \mathbb{C}^n$ and another $k$ (column) vectors $y_i \in \mathbb{C}^m$ such that $\rho = \sum_{i=1}^{k} x_i x_i ^{\dagger} \otimes y_i y_i^{\dagger}$


If I understand right then this linked paper is proving the decision version of the above question to be NP-hard. (please correct me if my reading is wrong!) But I am curious to know as to what would be even a brute-force algorithm to solve this! (in my limited experience for all NP-hard questions there is a trivial brute force solution that is always obvious - but not here!)


[Expanding the statement in the comments]

  • Trivially it seems that there is no hope of being able to check this unless one allows for some finite precision error. But if with such a discretization the question is redefined then is the corresponding decision question still NP-Hard?

  • So is there a difference between the decision question that is being shown to be NP-Hard and the actual entanglement question that needs to be solved?

by Anirbit at August 03, 2015 07:06 AM

StackOverflow

EclipseFP for Haskell not working

I've installed the EclipseFP plugin so I can work with Haskell in Eclipse. I am running Eclipse on my Mac. I have the most recent JRE; I found somewhere that that may be the issue. The plugin is not in Windows --> Open Perspective --> Other, which is where it should be. I am using Eclipse Luna.

I am relatively new and would like to learn Haskell.

by oliverjones at August 03, 2015 07:05 AM

Can someone please explain the right way to use SBT?

I'm getting out off the closet on this! I don't understand SBT. There, I said it, now help me please.

All roads lead to Rome, and that is the same for SBT: To get started with SBT there is SBT, SBT Launcher, SBT-extras, etc, and then there are different ways to include and decide on repositories. Is there a 'best' way?

I'm asking because sometimes I get a little lost. The SBT documentation is very thorough and complete, but I find myself not knowing when to use build.sbt or project/build.properties or project/Build.scala or project/plugins.sbt.

Then it becomes fun, there is the Scala-IDE and SBT - What is the correct way of using them together? What comes first, the chicken or the egg?

Most important is probably: how do you find the right repositories and versions to include in your project? Do I just pull out a machete and start hacking my way forward? I quite often find projects that include everything and the kitchen sink, and then I realize - I'm not the only one who gets a little lost.

As a simple example, right now, I'm starting a brand new project. I want to use the latest features of SLICK and Scala and this will probably require the latest version of SBT. What is the sane point to get started, and why? In what file should I define it and how should it look? I know I can get this working, but I would really like an expert opinion on where everything should go (why it should go there will be a bonus).

I've been using SBT for small projects for well over a year now. I used SBT and then SBT Extras (as it made some headaches magically disappear), but I'm not sure why I should be using the one or the other. I'm just getting a little frustrated for not understanding how things fit together (SBT and repositories), and think it will save the next guy coming this way a lot of hardship if this could be explained in human terms.


UPDATE:

For what it's worth, I created a blank SBT project directory for new guys to get going quicker: SBT-jumpstart
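
As one data point, a minimal layout that tends to work (a sketch on my part; the version numbers were roughly current at the time of writing and may need bumping):

project/build.properties (pins the sbt launcher version for everyone building the project):

sbt.version=0.13.8

build.sbt (the entire build definition for a small project):

name := "my-slick-project"

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  "com.typesafe.slick" %% "slick" % "3.0.0"
)

project/plugins.sbt is only needed once you add sbt plugins (e.g. addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "4.0.0")), and project/Build.scala is only needed when the build outgrows what build.sbt can express.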

by JacobusR at August 03, 2015 07:02 AM

Why the development groups were isolated from one another is a recipe for disaster? [on hold]

The author of the above article notes that “the development groups were isolated from one another - one group was doing analysis, one group was doing software design and a separate group was doing implementation”. Why would this arrangement be a recipe for disaster?

Please help me. I would like to hear about your real experiences with this problem.

by Quoc Tung at August 03, 2015 06:50 AM

Java 8 Lambdas Vs Python pros over Java before Java 8 [on hold]

Python has an advantage over Java in that it needs fewer lines of code than Java for the same task, and the difference in LOC is really huge. I am wondering whether, after the introduction of functional programming in Java 8 (through the addition of lambdas and the Stream library), Java may compete on this account too. Could anyone please help me understand the comparison between Java 8 lambdas and the Stream library versus Python's advantages over Java before Java 8?

by Vineet Tyagi at August 03, 2015 06:50 AM

CompsciOverflow

Minimum spanning tree vs Shortest path

What is the difference between minimum spanning tree algorithm and a shortest path algorithm?

In my data structures class we covered two minimum spanning tree algorithms (Prim's and Kruskal's) and one shortest path algorithm (Dijkstra's).

Minimum spanning tree is a tree in a graph that spans all the vertices and total weight of a tree is minimal. Shortest path is quite obvious, it is a shortest path from one vertex to another.

What I don't understand is since minimum spanning tree has a minimal total weight, wouldn't the paths in the tree be the shortest paths? Can anybody explain what I'm missing?

Any help is appreciated.
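
(A small example of the gap, mine rather than the question's: take a triangle with edge weights w(A,B) = 1, w(B,C) = 1 and w(A,C) = 1.5. The unique minimum spanning tree is {A-B, B-C} with total weight 2, but the A-to-C path inside that tree has weight 2, while the shortest A-to-C path in the graph is the direct edge of weight 1.5. So minimizing the total weight of the tree does not make every path within it a shortest path.)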

by flashburn at August 03, 2015 06:43 AM

StackOverflow

simple scala program not executing on intelliJ [duplicate]

I have a simple scala program HelloWorld.scala

object HelloWorld {
  def main(args: Array[String]) {
    println("Hello, world!")
  }
}

I'm using scala 2.7.3 and on the terminal using scalac and scala commands it compiles and runs just fine.

I'm using an older version of scala because its compatible with Stanford's TMT library (its a machine learning library)

However, on IntelliJ I keep getting a ClassNotFoundException.

I've tried creating a new Scala project and also tried disabling the make step in the run configuration, but I get the same error.

Exception in thread "main" java.lang.ClassNotFoundException: HelloWorld
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:191)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:122)

How can i get this to work?

by wolfgang at August 03, 2015 06:42 AM

StackOverflow

Efficient PairRDD operations on DataFrame with Spark SQL GROUP BY

This question is about the duality between DataFrame and RDD when it comes to aggregation operations. In Spark SQL one can use table-generating UDFs for custom aggregations, but creating one of those is typically noticeably less user-friendly than using the aggregation functions available for RDDs, especially if table output is not required.

Is there an efficient way to apply pair RDD operations such as aggregateByKey to a DataFrame which has been grouped using GROUP BY or ordered using ORDERED BY?

Normally, one would need an explicit map step to create key-value tuples, e.g., dataFrame.rdd.map(row => (row.getString(row.fieldIndex("category")), row)).aggregateByKey(...). Can this be avoided?
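
For reference, that explicit pairing written out as a sketch (my own; as far as I know there is no way in this Spark version to hand the result of a DataFrame groupBy straight to aggregateByKey):

// Key each Row by its category, then aggregate per key as with any pair RDD.
val byCategory = dataFrame.rdd.keyBy(row => row.getString(row.fieldIndex("category")))

// Example aggregation: count rows per category.
val perCategoryCounts = byCategory.aggregateByKey(0L)(
  (count, _) => count + 1,   // fold a row into the per-partition accumulator
  (c1, c2)  => c1 + c2       // merge accumulators across partitions
)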

by Sim at August 03, 2015 06:27 AM

Telling Swagger that a param may be a list or single value

I am using Swagger with Scala to document my REST API. I want to enable bulk operations for POST, PUT and DELETE, and want the same route to accept either a single object or a collection of objects as body content.

Is there a way to tell Swagger that a param is either a list of values of type A or a single value of type A?

Something like varargs for REST.

by bennidi at August 03, 2015 06:12 AM

sbt project for spark , but not found some package in context?

I am trying to build a new project using sbt in IntelliJ IDEA.

Creating the project and adding the dependency library succeeded.

build.sbt content:

name := "spark-hello"

version := "1.0"

scalaVersion := "2.11.6"

libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "1.4.1"

main code: enter image description here

As the above picture shows, the red sections are errors, which I find strange; for example:

the toInt method should be a built-in function, but it is not found.

I don't know how to solve this.

by Qin Dong Liang at August 03, 2015 06:05 AM

StackOverflow

Is there any way to input the result got from the curl via fluentd?

We are looking for the simplest way to send Alfresco's audit log to Elasticsearch. I think using the query that Alfresco supplies to fetch the audit log would be the simplest way (since the audit log data is hard to inspect in the db). This query returns its results as JSON, so I'd like to fetch the query result directly with fluentd and send it to Elasticsearch.

I roughly understand that fluentd can output to Elasticsearch, but I wonder whether I can run the curl command for the query directly from fluentd. Otherwise, if you have another simple idea for getting Alfresco's audit log, kindly let me know.

by user3395249 at August 03, 2015 05:58 AM

CompsciOverflow

Check whether the given language is context-free or not [on hold]

Is the given language $a^mb^nc^md^n$ context-free or not? If yes, construct and upload your PDA or NPDA here. Thank you.

by Aditya Dhanraj at August 03, 2015 05:32 AM

UnixOverflow

How do I disable the system beep in FreeBSD 10.1?

How do I disable the system beep on the console in FreeBSD 10.1?

The recommended commands don't work.

The sysctl setting:

# sysctl hw.syscons.bell=0
hw.syscons.bell: 1 -> 0
# sysctl -a | grep bell
hw.syscons.bell: 0

Backspace still results in an ear splitting beep.

Found another suggestion, to use kbdcontrol:

# kbdcontrol -b off
#

Nope, still beeps.

My system details:

An old Gateway MD-78 series (GM45 Express) laptop, without a hardware volume knob, and decidedly loud PC speaker volume.

I'm running FreeBSD 10.1.

# uname -a
FreeBSD raktop 10.1-RELEASE FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

Update:

I'm running vt a.k.a. newcons, and eventually found that I could turn the beep off with:

kbdcontrol -b quiet.off

which can be put into /etc/rc.conf, to make the change permanent, as:

allscreens_kbdflags="-b quiet.off"

by rakslice at August 03, 2015 05:26 AM

TheoryOverflow

Constrained version of vertex cover in a bipartite graph

Let $G(V_1, V_2, E)$ be a bipartite graph such that the degree of every vertex in $V_1$ is bounded by some constant (say) $d$. Now, for two given positive integers $l$ and $k$, we wish to decide whether there is a vertex cover $A\cup B$ with $|A|\leq l$ and $|B|\leq k$, where $A\subseteq V_1, B \subseteq V_2.$

Now, I have the following two questions:

1) What is the complexity of the above problem when both $k$ and $l$ are part of the input? (Intuitively, it appears to be hard.) If it is hard, is there any PTAS known for approximating the problem?

2) What is the complexity of the problem when one of the parameters $k$ or $l$ is constant?

Please let me know if anything about the problem is unclear.

by Ram at August 03, 2015 05:24 AM

StackOverflow

How to specify linked types for asInstanceOf

The title is not very descriptive, but I'm not sure what the pattern used here is properly called. I hope it becomes clear with an example.

trait OneKnot[K <: Knot[K]] { this : K#One =>
  type OK = K // for example only. To show that it is actually defined
  def use(second : OK#Two)
}
trait TwoKnot[K <: Knot[K]] { this : K#Two => }

trait Knot[K <: Knot[K]] {
  type One <: OneKnot[K]
  type Two <: TwoKnot[K]
}

trait KnotExample extends Knot[KnotExample] {
  override type One = OneExample
  override type Two = TwoExample
}

trait OneExample extends OneKnot[KnotExample]
trait TwoExample extends TwoKnot[KnotExample]

object Test {
  def testByName( one : OneExample, two : TwoExample ) = one.use(two)
  def testByKnot[K <: Knot[K]]( one : K#One, two : K#Two ) = one.use(two)
  def testByCast(knot : Knot[_], one : OneKnot[_], two : TwoKnot[_]) = one.asInstanceOf[knot.One].use(two.asInstanceOf[knot.Two])
  def testByInnerCast(one : OneKnot[_], two : TwoKnot[_]) = one.use( two.asInstanceOf[one.OK#Two] )
}

The types OneExample and TwoExample are normally recognized by each other; the testByKnot method shows it. I can also call the use method with static parametrization by Knot; the types are compatible, as shown in the testByKnot method.

But I need to discard type information to store data in a collection, e.g. Map[Knot[_], OneKnot[_]]. So I need to restore the types after extraction from the collection using asInstanceOf, but I failed to specify correctly which types to cast to.

In the last two test methods I get two corresponding errors:

NotSameType.scala:25: error: type mismatch;
 found   : knot.Two
 required: _$1#Two
  def testByCast(knot : Knot[_], one : OneKnot[_], two : TwoKnot[_]) = one.asInstanceOf[knot.One].use(two.asInstanceOf[knot.Two])
                                                                                                                      ^
NotSameType.scala:26: error: type Two is not a member of one.OK
  def testByInnerCast(one : OneKnot[_], two : TwoKnot[_]) = one.use( two.asInstanceOf[one.OK#Two] )

How should the cast be done properly?

by ayvango at August 03, 2015 05:01 AM

Will scala compiler hoist regular expressions

I wonder if this:

object Foo {
  val regex = "some complex regex".r
  def foo() {
    // use regex
  }
}

and this:

object Foo {
  def foo() {
    val regex = "some complex regex".r
    // use regex
  }
}

will have any performance difference; i.e., will the Scala compiler recognize that "some complex regex".r is a constant and cache it, so that it is not recompiled every time?

by lyomi at August 03, 2015 04:54 AM

StackOverflow

What is (functional) reactive programming?

I've read the Wikipedia article on reactive programming. I've also read the small article on functional reactive programming. The descriptions are quite abstract.

What does functional reactive programming (FRP) mean in practice? What does reactive programming (as opposed to non-reactive programming?) consist of? My background is in imperative/OO languages, so an explanation that relates to this paradigm would be appreciated.

by JtR at August 03, 2015 03:54 AM

What is the fastest way to write Fibonacci function in Scala?

I've looked over a few implementations of Fibonacci function in Scala starting from a very simple one, to the more complicated ones.

I'm not entirely sure which one is the fastest. I'm leaning towards the impression that the ones that use memoization are faster; however, I wonder why Scala doesn't have native memoization.

Can anyone enlighten me toward the best and fastest (and cleanest) way to write a fibonacci function?
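
For what it's worth, the plain tail-recursive version is usually hard to beat and needs no memoization at all (a sketch, not taken from the implementations the question refers to):

import scala.annotation.tailrec

// Iterative in constant space: each step carries the pair (a, b) = (F(i), F(i+1)).
def fib(n: Int): BigInt = {
  @tailrec
  def loop(i: Int, a: BigInt, b: BigInt): BigInt =
    if (i == 0) a else loop(i - 1, b, a + b)
  loop(n, 0, 1)
}

fib(10)   // 55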

by Enrico Susatyo at August 03, 2015 03:52 AM

Scala Ordering by multiple values

I am trying to use Ordering[T] with multiple types.

case class Person(name: String, age: Int)

Person("Alice", 20)
Person("Bob", 40)
Person("Charlie", 30)

object PersonOrdering extends Ordering[Person] {
  def compare(a:Person, b:Person) =
    a.age compare b.age
}

How do I sort by both name and age?

The collection needs to remain sorted with updates.
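
A sketch of the usual way to get a composite ordering, using Ordering.by with a tuple, plus a TreeSet so the collection stays sorted as elements are added:

// Orders by name first, then by age for equal names.
implicit val personOrdering: Ordering[Person] =
  Ordering.by((p: Person) => (p.name, p.age))

import scala.collection.immutable.TreeSet
val people  = TreeSet(Person("Bob", 40), Person("Alice", 20), Person("Charlie", 30))
val updated = people + Person("Alice", 18)   // still sorted by (name, age)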

by BAR at August 03, 2015 03:09 AM

CompsciOverflow

Mac OS X, EclipseFP not appearing in Perspectives [on hold]

I've installed the EclipseFP plugin so I can work with Haskell in Eclipse. I am running Eclipse on my Mac. I have the most recent JRE; I found somewhere that that may be the issue. The plugin is not in Windows --> Open Perspective --> Other, which is where it should be. I am using Eclipse Luna.

by oliverjones at August 03, 2015 02:39 AM

Knapsack problem -- NP-complete despite dynamic programming solution?

Knapsack problems are easily solved by dynamic programming. Dynamic programming runs in polynomial time; that is why we do it, right?

I have read it is actually an NP-complete problem, though, which would mean that solving the problem in polynomial time is probably impossible.

Where is my mistake?
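
(One way to see where the catch is, added for reference: the standard DP runs in $O(nW)$ time for $n$ items and capacity $W$, but writing $W$ down only takes about $\log_2 W$ bits of input, so $O(nW) = O(n \cdot 2^{\log_2 W})$ is exponential in the input length. The DP is therefore pseudo-polynomial, which does not contradict the NP-completeness of the decision version, where hardness is measured against input size.)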

by Strin at August 03, 2015 02:31 AM

StackOverflow

confusion re: solution to fibonnaci numbers with Scala infinite streams that used a def'd method within a def

I have been studying some posts on Scala infinite streams to better wrap my head around this concept. I liked the simple solution in this post, which I have reproduced below:

def fib: Stream[Long] = {
  def tail(h: Long, n: Long): Stream[Long] = h #:: tail(n, h + n)
  tail(0, 1)
}

My initial understanding of what was going on was that we were returning a Stream[Long] object with the tail method overridden. To test out that (seemingly incorrect) hypothesis, I did the following, which would not compile:

def fib: Stream[Long] = {

  override def tail(h: Long, n: Long): Stream[Long] = h #:: tail(n, h + n)
     ^
     |
   ~error~~~

  tail(0, 1)
}

So this solution does seem to be based on an override. So, now I'm wondering: what exactly is going on with Scala constructs that have a def of some type 'T' where the value of that block contains another def that, at first glance, seems to override a method of T?

Thanks in advance for enlightening me !

EDIT - here is the result of trying out the solution in the excellent answer from Mateusz Dymczyk :

object Foolaround  extends App {

  def fib: Stream[Long] = {
    def pail(h: Long, n: Long): Stream[Long] = h #:: pail(n, h + n)
    pail(0, 1)
  }

  var x = fib.tail.tail.tail.tail.tail.head
  println (s"results: ${x}")
}

by Chris Bedford at August 03, 2015 02:17 AM

QuantOverflow

Option platforms providing eurex products

I am looking for an options platform providing Eurex products such as the EuroStoxx 50. Can you recommend some platforms? Thank you in advance for your answer. Julien

by julien at August 03, 2015 02:11 AM

CompsciOverflow

How to choose proper activation functions for hidden and output layers of a perceptron neural network?

As far as I know choosing an activation function for the input layer is relatively straightforward: I use Sigmoid if the input data domain is (0,1) and TANH if it is (-1,1).

But what activation functions should I set for the hidden and output layers? Is there any conventional logic for making this choice reasonably? How do I know/set the domain of a neuron layer's output?

by Ivan at August 03, 2015 02:07 AM

QuantOverflow

Why is that maximizing stock value, under uncertainty, is a better option than maximizing profits?

I've been trying to access the papers that state that kind of problem, but most of them need payment for access and I am on a student budget.

I know that maximizing profits equals maximizing stock value in a world of certainty, but why is it that maximizing stock value differs from maximizing expected profits in a world of uncertainty?

by John Doe at August 03, 2015 02:07 AM

TheoryOverflow

Automorphism of a restricted irregular graph class

Motivation:

This query is motivated by this question. It relates to the complexity analysis of this post.

I have been informed that a Highly Irregular Graph has a number of automorphisms $\leq n = $ the total number of vertices of the graph. I would like to know the number of automorphisms for the class of graphs described below.

Defined Graph:

A class of irregular graphs $G_{\alpha}$, where a graph of $G_{\alpha}$ is, say, $G$.

$n=|G|=$total number of vertices of $G$.

$G$ is a $k$-connected irregular graph. It can be divided into a total of $b$ sets of vertices such that:

  1. Each set has vertices of the same degree.

  2. Each set is distinct in terms of vertex degree: there exist no two sets which have vertices of the same degree.

  3. Each set creates a regular graph (a subgraph of the given graph $G$).

It is evident that a graph of $G_{\alpha}$ is not always a Highly Irregular Graph.

Claim: the number of graph automorphisms of an irregular graph $G \in G_{\alpha}$ is $\beta$, with $\beta \leq n^c$, where $c$ is a constant.

Question: Is this claim always true ?

I searched, but could not find anything which can be used directly to derive an upper bound.

Any reference/ advice will help.

by Jim at August 03, 2015 02:07 AM

CompsciOverflow

Which matrix of Q values is being used here?

This question refers to this paper: Using Free Energies to Represent Q-values in a Multiagent Reinforcement Learning Task

In section 2.1, equations (5) and (6), I am wondering which Q values are being used to adjust the weights of the restricted Boltzmann machine:

Option 1: the Q values generated by the original MDP

Option 2: the approximate Q values obtained by calculating the (negative of the) free energy of the RBM

Follow up question: When considering states $(s^t, a^t)$ and $(s^{t+1}, a^{t+1})$, how do we determine which values of $a^t$ and $a^{t+1}$ to use? Are these the optimal actions from the original MDP, or are these to be obtained through the alternating Gibbs sampling mentioned later on...(which doesn't make sense, since we would not have weights required for this CD)

Thanks for your help

by maybe at August 03, 2015 02:00 AM

TheoryOverflow

Is there an efficient program for generating a Sidon sequence?

I would need a Sidon sequence of about $10^9$ elements. I found math papers like [1] that explain how to generate Sidon sequences, but it seems like a lot of pain to write the corresponding program.

Are you aware of any existing program that generates a dense enough and large enough Sidon sequence?

Extra: my real constraint is only that the sums $a+a'$, $a \lt a'$, $a, a' \in A$ are pairwise distinct, instead of the true Sidon sequence property that the sums $a+a'$, $a \leq a'$, $a, a' \in A$ are pairwise distinct. So a Sidon sequence would do the job, but I could hope for a denser sequence by tuning the program.

[1] Javier Cilleruelo, "Infinite Sidon sequences", eprint on arXiv: http://arxiv.org/abs/1209.0326
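
For reference only (this is my own toy sketch; the greedy construction is far too sparse and too slow to be practical anywhere near $10^9$ elements), the classic greedy (Mian-Chowla) generator that enforces the $a \leq a'$ version of the property looks roughly like this in Scala:

import scala.collection.mutable

// greedily pick the smallest next integer whose pairwise sums with the current
// elements (including the doubled sum with itself) have not been seen before
def greedySidon(count: Int): Vector[Long] = {
  val sums = mutable.Set[Long](2L) // 1 + 1
  var seq = Vector(1L)
  var candidate = 2L
  while (seq.length < count) {
    val newSums = seq.map(_ + candidate) :+ (candidate + candidate)
    if (newSums.forall(s => !sums.contains(s))) {
      sums ++= newSums
      seq = seq :+ candidate
    }
    candidate += 1
  }
  seq
}

println(greedySidon(10)) // Vector(1, 2, 4, 8, 13, 21, 31, 45, 66, 81)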

by Cédric Van Rompay at August 03, 2015 01:56 AM

StackOverflow

how to set type for jsonb[] for play-scala anorm pgsql

I am building a survey application with the Scala/Play framework, and am using Postgres 9.4 and anorm. I am using jsonb as a data type in other places, but in one location I want to use jsonb[], thinking that this type is an array of jsonb values. My JSON structure is like the following:

[
    {"guitar":{"passion":3,
               "expertise":5,
               "willingToTeach":false,
               "lookingForOthers":false
              }
     },
     {"soccer":{"passion":3,
                "expertise":3,
                "willingToTeach":true,
                "lookingForOthers":true
                }
     }
]

Here each interest is a JSON structure. I have been able to add JSON response values to other columns in pgsql using jsonb as the data type, but when I try to use jsonb[] I get complaints: [PSQLException: Unknown type jsonb[].] In pgAdmin3 it literally shows me this exact data type, jsonb[], for the column I am trying to insert into. In my anorm insert code I have tried setting the type:

val pgObject = new PGobject();
pgObject.setType("jsonb")

But then I get this error:

[PSQLException: ERROR: column "passions" is of type jsonb[] but expression is of type jsonb
  Hint: You will need to rewrite or cast the expression.
  Position: 43]

I have tried looking this up, but I can't even seem to find which string values I can use as an argument for pgObject.setType(). I am also unsure how I would go about casting the expression from jsonb to jsonb[] in any way other than setting the type with the setType() method.
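
For what it's worth, one untested direction I am considering (the table and column names below are made up) is to bind each interest as a plain JSON string and build the jsonb[] value on the database side, casting every element in SQL rather than going through PGobject:

import anorm._
import java.sql.Connection

// hypothetical sketch: each parameter is sent as text and cast to jsonb by PostgreSQL,
// so the array literal ends up with type jsonb[]
def insertPassions(guitarJson: String, soccerJson: String)(implicit c: Connection): Int =
  SQL"""
    insert into survey_answers (passions)
    values (array[$guitarJson::jsonb, $soccerJson::jsonb])
  """.executeUpdate()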

Any help would be greatly appreciated.

by kinghenry14 at August 03, 2015 01:40 AM

arXiv Networking and Internet Architecture

Tractable Resource Management with Uplink Decoupled Millimeter-Wave Overlay in Ultra-Dense Cellular Networks. (arXiv:1507.08979v1 [cs.IT])

The forthcoming 5G cellular network is expected to overlay millimeter-wave (mmW) transmissions with the incumbent micro-wave ($\mu$W) architecture. The overall mm-$\mu$W resource management should therefore harmonize with each other. This paper aims at maximizing the overall downlink (DL) rate with a minimum uplink (UL) rate constraint, and concludes: mmW tends to more focus on DL transmissions while $\mu$W has high priority for complementing UL, under time-division duplex (TDD) mmW operations. Such UL dedication of $\mu$W results from the limited use of mmW UL bandwidth due to high peak-to-average power ratio (PAPR) at mobile users. To further relieve this UL bottleneck, we propose mmW UL decoupling that allows a $\mu$W base station (BS) to receive mmW signals. Its impact on mm-$\mu$W resource management is provided in a tractable way by virtue of a novel closed-form mm-$\mu$W spectral efficiency (SE) derivation. In an ultra-dense cellular network (UDN), our derivation verifies mmW (or $\mu$W) SE is a logarithmic function of BS-to-user density ratio. This strikingly simple yet practically valid analysis is enabled by exploiting stochastic geometry in conjunction with real three dimensional (3D) building blockage statistics in Seoul, Korea.

by Jihong Park, Seong-Lyun Kim, Jens Zander at August 03, 2015 01:30 AM

Distributed Algorithms for Finding Local Clusters Using Heat Kernel Pagerank. (arXiv:1507.08967v1 [cs.DC])

A distributed algorithm performs local computations on pieces of input and communicates the results through given communication links. When processing a massive graph in a distributed algorithm, local outputs must be configured as a solution to a graph problem without shared memory and with few rounds of communication. In this paper we consider the problem of computing a local cluster in a massive graph in the distributed setting. Computing local clusters are of certain application-specific interests, such as detecting communities in social networks or groups of interacting proteins in biological networks. When the graph models the computer network itself, detecting local clusters can help to prevent communication bottlenecks. We give a distributed algorithm that computes a local cluster in time that depends only logarithmically on the size of the graph in the CONGEST model. In particular, when the value of the optimal local cluster is known, the algorithm runs in time entirely independent of the size of the graph and depends only on error bounds for approximation. We also show that the local cluster problem can be computed in the k-machine distributed model in sublinear time. The speedup of our local cluster algorithms is mainly due to the use of our distributed algorithm for heat kernel pagerank.

by Fan Chung, Olivia Simpson at August 03, 2015 01:30 AM

Achieving proportional fairness with a control theoretic approach in error-prone 802.11e WLANs. (arXiv:1507.08922v1 [cs.NI])

This letter proposes a control theoretic approach to achieve proportional fairness amongst access categories (ACs) in an error-prone EDCA WLAN for provision of distinct QoS requirements and priority parameters. The approach adaptively adjusts the minimum contention window of each AC to derive the station attempt probability to its optimum which leads to a proportional fair allocation of station throughputs. Evaluation results demonstrate that the proposed control approach has high accuracy performance and fast convergence speed for general network scenarios.

by Xiaomin Chen, Shuang-Hua Yang at August 03, 2015 01:30 AM

A control theoretic approach to achieve proportional fairness in 802.11e EDCA WLANs. (arXiv:1507.08920v1 [cs.NI])

This paper considers proportional fairness amongst ACs in an EDCA WLAN for provision of distinct QoS requirements and priority parameters. A detailed theoretical analysis is provided to derive the optimal station attempt probability which leads to a proportional fair allocation of station throughputs. The desirable fairness can be achieved using a centralised adaptive control approach. This approach is based on multivariable statespace control theory and uses the Linear Quadratic Integral (LQI) controller to periodically update CWmin till the optimal fair point of operation. Performance evaluation demonstrates that the control approach has high accuracy performance and fast convergence speed for general network scenarios. To our knowledge this might be the first time that a closed-loop control system is designed for EDCA WLANs to achieve proportional fairness.

by Xiaomin Chen, Ibukunoluwa Akinyemi, Shuang-Hua Yang at August 03, 2015 01:30 AM

Response-Time-Optimised Service Deployment: MILP Formulations of Piece-wise Linear Functions Approximating Non-linear Bivariate Mixed-integer Functions. (arXiv:1507.08834v1 [cs.NI])

A current trend in networking and cloud computing is to provide compute resources at widely dispersed places; this is exemplified by developments such as Network Function Virtualisation. This paves the way for wide-area service deployments with improved service quality: e.g., a nearby server can reduce the user-perceived response times. But always using the nearest server can be a bad decision if that server is already highly utilised. This paper formalises the two related problems of allocating resources at different locations and assigning users to them with the goal of minimising the response times for a given number of resources to use -- a non-linear capacitated facility location problem with integrated queuing systems. To efficiently handle the non-linearity, we introduce five linear problem approximations and adapt the currently best heuristic for a similar problem to our scenario. All six approaches are compared in experiments for solution quality and solving time. Surprisingly, our best optimisation formulation outperforms the heuristic in both time and quality. Additionally, we evaluate the influence of resource distributions in the network on the response time: cut by half for some configurations. The presented formulations are applicable to a broader optimisation domain.

by Matthias Keller, Holger Karl at August 03, 2015 01:30 AM

Signals on Graphs: Uncertainty Principle and Sampling. (arXiv:1507.08822v1 [cs.DM])

In many applications of current interest, the observations are represented as a signal defined over a graph. The analysis of such signals requires the extension of standard signal processing tools. Building on the recently introduced Graph Fourier Transform, the first contribution of this paper is to provide an uncertainty principle for signals on graph. As a by-product of this theory, we show how to build a dictionary of maximally concentrated signals on vertex/frequency domains. Then, we establish a direct relation between uncertainty principle and sampling, which forms the basis for a sampling theorem for graph signals. Since samples location plays a key role in the performance of signal recovery algorithms, we suggest and compare a few alternative sampling strategies. Finally, we provide the conditions for perfect recovery of a useful signal corrupted by sparse noise, showing that this problem is also intrinsically related to vertex-frequency localization properties.

by Mikhail Tsitsvero, Sergio Barbarossa, Paolo Di Lorenzo at August 03, 2015 01:30 AM

A System Architecture for Software-Defined Industrial Internet of Things. (arXiv:1507.08810v1 [cs.NI])

Wireless sensor networks have been a driving force of the Industrial Internet of Things (IIoT) advancement in the process control and manufacturing industry. The emergence of IIoT opens great potential for the ubiquitous field device connectivity and manageability with an integrated and standardized architecture from low-level device operations to high-level data-centric application interactions. This technological development requires software definability in the key architectural elements of IIoT, including wireless field devices, IIoT gateways, network infrastructure, and IIoT sensor cloud services. In this paper, a novel software-defined IIoT (SD-IIoT) is proposed in order to solve essential challenges in a holistic IIoT system, such as reliability, security, timeliness scalability, and quality of service (QoS). A new IIoT system architecture is proposed based on the latest networking technologies such as WirelessHART, WebSocket, IETF constrained application protocol (CoAP) and software-defined networking (SDN). A new scheme based on CoAP and SDN is proposed to solve the QoS issues. Computer experiments in a case study are implemented to show the effectiveness of the proposed system architecture.

by Peng Hu at August 03, 2015 01:30 AM

Interest-Rate Modeling in Collateralized Markets: Multiple curves and credit-liquidity effects. (arXiv:1507.08779v1 [q-fin.PR])

We present a detailed analysis of interest rate derivatives valuation under credit risk and collateral modeling. We show how the credit and collateral extended valuation framework in Pallavicini et al (2011), and the related collateralized valuation measure, can be helpful in defining the key market rates underlying the multiple interest rate curves that characterize current interest rate markets. A key point is that spot Libor rates are to be treated as market primitives rather than being defined by no-arbitrage relationships. We formulate a consistent realistic dynamics for the different rates emerging from our analysis and compare the resulting model performances to simpler models used in the industry. We include the often neglected margin period of risk, showing how this feature may increase the impact of different rates dynamics on valuation. We point out limitations of multiple curve models with deterministic basis considering valuation of particularly sensitive products such as basis swaps. We stress that a proper wrong way risk analysis for such products requires a model with a stochastic basis and we show numerical results confirming this fact.

by Giacomo Bormetti, Damiano Brigo, Marco Francischello, Andrea Pallavicini at August 03, 2015 01:30 AM

Multi-Tree Multicast Traffic Engineering for Software-Defined Networks. (arXiv:1507.08728v1 [cs.NI])

Although Software-Defined Networking (SDN) enables flexible network resource allocations for traffic engineering, current literature mostly focuses on unicast communications. Compared to traffic engineering for multiple unicast flows, multicast traffic engineering for multiple trees is very challenging not only because minimizing the bandwidth consumption of a single multicast tree by solving the Steiner tree problem is already NP-Hard, but the Steiner tree problem does not consider the link capacity constraint for multicast flows and the node capacity constraint to store the forwarding entries in the Group Table of OpenFlow. In this paper, therefore, we first study the hardness results of scalable multicast traffic engineering in SDN. We prove that scalable multicast traffic engineering with only the node capacity constraint is NP-Hard and not approximable within ?, which is the number of destinations in the largest multicast group. We then prove that scalable multicast traffic engineering with both the node and link capacity constraints is NP-Hard and not approximable within any ratio. To solve the problem, we design a ?-approximation algorithm, named Multi-Tree Routing and State Assignment Algorithm (MTRSA), for the first case and extend it to the general multicast traffic engineering problem. The simulation results demonstrate that the solutions obtained by the proposed algorithm are more bandwidth-efficient and scalable than the shortest-path trees and Steiner trees. Most importantly, MTRSA is computation-efficient and can be deployed in SDN since it can generate the solution on massive networks in a short time.

by Shan-Hsiang Shen, Liang-Hao Huang, Hsiang-Chun Hsu, De-Nian Yang, Wen-Tsuen Chen at August 03, 2015 01:30 AM

Mixing HOL and Coq in Dedukti (Extended Abstract). (arXiv:1507.08721v1 [cs.LO])

We use Dedukti as a logical framework for interoperability. We use automated tools to translate different developments made in HOL and in Coq to Dedukti, and we combine them to prove new results. We illustrate our approach with a concrete example where we instantiate a sorting algorithm written in Coq with the natural numbers of HOL.

by Ali Assaf (Inria, Ecole Polytechnique), Raphaël Cauderlier (Inria, CNAM) at August 03, 2015 01:30 AM

Translating HOL to Dedukti. (arXiv:1507.08720v1 [cs.LO])

Dedukti is a logical framework based on the lambda-Pi-calculus modulo rewriting, which extends the lambda-Pi-calculus with rewrite rules. In this paper, we show how to translate the proofs of a family of HOL proof assistants to Dedukti. The translation preserves binding, typing, and reduction. We implemented this translation in an automated tool and used it to successfully translate the OpenTheory standard library.

by Ali Assaf (Inria, Ecole Polytechnique), Guillaume Burel (ENSIIE/Cédric) at August 03, 2015 01:30 AM

Checking Zenon Modulo Proofs in Dedukti. (arXiv:1507.08719v1 [cs.LO])

Dedukti has been proposed as a universal proof checker. It is a logical framework based on the lambda Pi calculus modulo that is used as a backend to verify proofs coming from theorem provers, especially those implementing some form of rewriting. We present a shallow embedding into Dedukti of proofs produced by Zenon Modulo, an extension of the tableau-based first-order theorem prover Zenon to deduction modulo and typing. Zenon Modulo is applied to the verification of programs in both academic and industrial projects. The purpose of our embedding is to increase the confidence in automatically generated proofs by separating untrusted proof search from trusted proof verification.

by Raphaël Cauderlier (Inria), Pierre Halmagrand (Inria) at August 03, 2015 01:30 AM

The Common HOL Platform. (arXiv:1507.08718v1 [cs.LO])

The Common HOL project aims to facilitate porting source code and proofs between members of the HOL family of theorem provers. At the heart of the project is the Common HOL Platform, which defines a standard HOL theory and API that aims to be compatible with all HOL systems. So far, HOL Light and hol90 have been adapted for conformance, and HOL Zero was originally developed to conform. In this paper we provide motivation for a platform, give an overview of the Common HOL Platform's theory and API components, and show how to adapt legacy systems. We also report on the platform's successful application in the hand-translation of a few thousand lines of source code from HOL Light to HOL Zero.

by Mark Adams (Proof Technologies Ltd, UK and Radboud University, Nijmegen, The Netherlands) at August 03, 2015 01:30 AM

Systematic Verification of the Modal Logic Cube in Isabelle/HOL. (arXiv:1507.08717v1 [cs.LO])

We present an automated verification of the well-known modal logic cube in Isabelle/HOL, in which we prove the inclusion relations between the cube's logics using automated reasoning tools. Prior work addresses this problem but without restriction to the modal logic cube, and using encodings in first-order logic in combination with first-order automated theorem provers. In contrast, our solution is more elegant, transparent and effective. It employs an embedding of quantified modal logic in classical higher-order logic. Automated reasoning tools, such as Sledgehammer with LEO-II, Satallax and CVC4, Metis and Nitpick, are employed to achieve full automation. Though successful, the experiments also motivate some technical improvements in the Isabelle/HOL tool.

by Christoph Benzmüller (Freie Universität Berlin, Germany), Maximilian Claus (Freie Universität Berlin, Germany), Nik Sultana (Cambridge University, UK) at August 03, 2015 01:30 AM

A framework for proof certificates in finite state exploration. (arXiv:1507.08716v1 [cs.LO])

Model checkers use automated state exploration in order to prove various properties such as reachability, non-reachability, and bisimulation over state transition systems. While model checkers have proved valuable for locating errors in computer models and specifications, they can also be used to prove properties that might be consumed by other computational logic systems, such as theorem provers. In such a situation, a prover must be able to trust that the model checker is correct. Instead of attempting to prove the correctness of a model checker, we ask that it outputs its "proof evidence" as a formally defined document--a proof certificate--and that this document is checked by a trusted proof checker. We describe a framework for defining and checking proof certificates for a range of model checking problems. The core of this framework is a (focused) proof system that is augmented with premises that involve "clerk and expert" predicates. This framework is designed so that soundness can be guaranteed independently of any concerns for the correctness of the clerk and expert specifications. To illustrate the flexibility of this framework, we define and formally check proof certificates for reachability and non-reachability in graphs, as well as bisimulation and non-bisimulation for labeled transition systems. Finally, we describe briefly a reference checker that we have implemented for this framework.

by Quentin Heath (Inria & LIX), Dale Miller (Inria & LIX) at August 03, 2015 01:30 AM

Importing SMT and Connection proofs as expansion trees. (arXiv:1507.08715v1 [cs.LO])

Different automated theorem provers reason in various deductive systems and, thus, produce proof objects which are in general not compatible. To understand and analyze these objects, one needs to study the corresponding proof theory, and then study the language used to represent proofs, on a prover by prover basis. In this work we present an implementation that takes SMT and Connection proof objects from two different provers and imports them both as expansion trees. By representing the proofs in the same framework, all the algorithms and tools available for expansion trees (compression, visualization, sequent calculus proof construction, proof checking, etc.) can be employed uniformly. The expansion proofs can also be used as a validation tool for the proof objects produced.

by Giselle Reis (INRIA-Saclay) at August 03, 2015 01:30 AM

Setting Lower Bounds on Truthfulness. (arXiv:1507.08708v1 [cs.GT])

We present general techniques for proving inapproximability results for several paradigmatic truthful multidimensional mechanism design problems. In particular, we demonstrate the strength of our techniques by exhibiting a lower bound of 2-1/m for the scheduling problem with m unrelated machines (formulated as a mechanism design problem in the seminal paper of Nisan and Ronen on Algorithmic Mechanism Design). Our lower bound applies to truthful randomized mechanisms, regardless of any computational assumptions on the running time of these mechanisms. Moreover, it holds even for the wider class of truthfulness-in-expectation mechanisms. This lower bound nearly matches the known 1.58606 randomized truthful upper bound for the case of two machines (a non-truthful FPTAS exists).

Recently, Daskalakis and Weinberg show that there is a polynomial-time 2-approximately optimal Bayesian mechanism for makespan minimization for unrelated machines. We complement this result by showing an appropriate lower bound of 1.25 for deterministic incentive compatible Bayesian mechanisms.

We then show an application of our techniques to the workload-minimization problem in networks. We prove our lower bounds for this problem in the inter-domain routing setting presented by Feigenbaum, Papadimitriou, Sami, and Shenker. Finally, we discuss several notions of non-utilitarian fairness (Max-Min fairness, Min-Max fairness, and envy minimization) and show how our techniques can be used to prove lower bounds for these notions. No lower bounds for truthful mechanisms in multidimensional probabilistic settings were previously known.

by Ahuva Mu'alem, Michael Schapira at August 03, 2015 01:30 AM

Android Tapjacking Vulnerability. (arXiv:1507.08694v1 [cs.CR])

Android is an open source mobile operating system that is developed mainly by Google. It is used on a significant portion of mobile devices worldwide. In this paper, I will be looking at an attack commonly known as tapjacking. I will be taking the attack apart and walking through each individual step required to implement the attack. I will then explore the various payload options available to an attacker. Lastly, I will touch on the feasibility of the attack as well as mitigation strategies.

by Benjamin Lim at August 03, 2015 01:30 AM

QuantOverflow

Need historical prices of EUREX American and European style options

I am trying to get the historical price data on selected American and European style options at EUREX. I am not familiar with their system. Does anyone know whether they have something like Yahoo Finance where I can just download market data through R? Or at least Excel files with daily prices. Thank you

by GKED at August 03, 2015 01:25 AM

CompsciOverflow

For AVL Trees why is keeping a trit (left heavy, right heavy or balanced) sufficient?

I was listening to Eric Demaine's video lecture on AVL trees, and a claim comes up that keeping a trit on each node (to indicate whether the node is left-heavy, right-heavy or balanced) should be sufficient for all AVL tree operations, so essentially we don't need to store the height of every node. Can anyone prove to me why that would be the case?

by Abdul Rahman at August 03, 2015 01:20 AM

About a step in the analysis of Quicksort by Sedgewick and Wayne [duplicate]

In the book Algorithms, 4th Edition by Robert Sedgewick and Kevin Wayne, when they are analyzing quicksort (page 294), they present the sequence of transformations: $$\begin{gather*} C_N = N + 1 + (C_0 + C_1 + \dots + C_{N-2} + C_{N-1})/N + (C_{N-1} + C_{N-2} + \dots + C_0)/N\\ NC_N = N(N+1) + 2(C_0 + C_1 + \dots + C_{N-2} + C_{N-1})\\ NC_N - (N-1)C_{N-1} = 2N + 2C_{N-1}\\ C_N/(N+1) = C_{N-1}/N + 2/(N+1)\\ C_N\sim 2(N+1)(1/3 + 1/4 + \dots + 1/(N+1))\end{gather*}$$

How did they get the last transformation?

It is also written that the parenthesized quantity in the last expression is a discrete estimate of the area under the curve $2/x$ from $3$ to $N$. How is that related to quicksort?
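
For what it's worth, here is my own attempt at unrolling the fourth line, which suggests where the sum could come from. Writing $D_N = C_N/(N+1)$, that line says $D_N = D_{N-1} + 2/(N+1)$, and telescoping down to $N = 1$ gives

$$D_N = D_1 + 2\left(\frac{1}{3} + \frac{1}{4} + \dots + \frac{1}{N+1}\right), \qquad D_1 = \frac{C_1}{2},$$

so for large $N$ the constant $D_1$ is negligible and $C_N \sim 2(N+1)\left(\frac{1}{3} + \dots + \frac{1}{N+1}\right)$. The parenthesized sum looks like a Riemann-sum approximation of $\int \frac{dx}{x}$, which is presumably what the book means by the area under $2/x$; I would still like a precise statement of how this ties back to quicksort.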

by Виталий Витренко at August 03, 2015 01:13 AM

StackOverflow

Compilation error following Play! Framework NewApplication guide

I use Fedora 22 OS and I'm trying to get started with Play! Framework. Naturally, I'm trying to build the simplest application possible so I can have a starting point.

I tried to follow the guide in the documentation (https://www.playframework.com/documentation/2.4.x/NewApplication) as closely as I could, but I'm having no success here. It fails when I call the activator inside the base folder of the project and shows me some unresolved dependencies. I'll post the steps I've taken so I can show what I've done and the problem as clearly as possible, including some information about Java (I read somewhere I should use Oracle Java instead of OpenJDK, so I downloaded the latest Java SE JDK 7) and sbt, which I believe to be related to the process. My OS is in Portuguese, so a few lines will be in Portuguese, but as they refer to well-known commands (basically the alternatives command), I don't think it will harm comprehension.

su -

[root@localhost ~]# alternatives --config java

Há 3 programas que oferecem "java".

  Seleção    Comando
-----------------------------------------------
*+ 1           /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.45-36.b13.fc22.x86_64/jre/bin/java
   2           /usr/java/jdk1.7.0_79/bin/java
   3           /usr/java/default/jre/bin/java

Indique para manter a seleção atual[+] ou digite o número da seleção: 2

[root@localhost ~]# alternatives --config javac

Há 1 programa que oferece "javac".

  Seleção    Comando
-----------------------------------------------
*+ 1           /usr/java/default/bin/javac


[root@localhost ~]# alternatives --config javaws

Há 1 programa que oferece "javaws".

  Seleção    Comando
-----------------------------------------------
*+ 1           /usr/java/default/jre/bin/javaws

Indique para manter a seleção atual[+] ou digite o número da seleção: 1  

[root@localhost ~]# alternatives --config jar

Há 1 programa que oferece "jar".

  Seleção    Comando
-----------------------------------------------
*+ 1           /usr/java/default/bin/jar

Indique para manter a seleção atual[+] ou digite o número da seleção: 1

[root@localhost ~]# exit
logout

[gscofano@localhost new-project]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

[gscofano@localhost new-project]$ javac -version
javac 1.7.0_79


[gscofano@localhost new-project]$ pwd 
/home/gscofano/Documentos/Programacao/play/play-2.4.2


[gscofano@localhost new-project]$ ls
activator-1.3.5-minimal


[gscofano@localhost new-project]$ mkdir new-project


[gscofano@localhost new-project]$ cd ./new-project


[gscofano@localhost new-project]$ sbt about

[info] Set current project to new-project (in build file:/home/gscofano/Documentos/Programacao/play/play-2.4.2/new-project/)
[info] This is sbt 0.13.1
[info] The current project is {file:/home/gscofano/Documentos/Programacao/play/play-2.4.2/new-project/}new-project 0.1-SNAPSHOT
[info] The current project is built against Scala 2.10.4
[info] 
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.4


[gscofano@localhost new-project]$ ../activator-1.3.5-minimal/activator new first-app play-java

OK, application "first-app" is being created using the "play-java" template.

To run "first-app" from the command line, "cd first-app" then:
/home/gscofano/Documentos/Programacao/play/play-2.4.2/new-project/first-app/activator run

To run the test for "first-app" from the command line, "cd first-app" then:
/home/gscofano/Documentos/Programacao/play/play-2.4.2/new-project/first-app/activator test

To run the Activator UI for "first-app" from the command line, "cd first-app" then:
/home/gscofano/Documentos/Programacao/play/play-2.4.2/new-project/first-app/activator ui


[gscofano@localhost new-project]$ cd ./first-app


[gscofano@localhost first-app]$ ./activator
[info] Loading project definition from /home/gscofano/Documentos/Programacao/play/play-2.4.2/new-project/first-app/project
[info] Updating {file:/home/gscofano/Documentos/Programacao/play/play-2.4.2/new-project/first-app/project/}first-app-build...
[info] Resolving org.scala-sbt#precompiled-2_9_3;0.13.8 ...
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: org.scala-lang#scala-library;2.10.4: configuration not found in org.scala-lang#scala-library;2.10.4: 'master(compile)'. Missing configuration: 'compile'. It was required from com.typesafe#npm_2.10;1.1.1 compile
[warn]  :: org.scala-lang#scala-compiler;2.10.4: configuration not found in org.scala-lang#scala-compiler;2.10.4: 'master(compile)'. Missing configuration: 'compile'. It was required from com.typesafe.play#twirl-compiler_2.10;1.1.1 compile
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn] 
[warn]  Note: Unresolved dependencies path:
[warn]      org.scala-lang:scala-library:2.10.4 ((sbt.Classpaths) Defaults.scala#L1203)
[warn]        +- org.scala-sbt:control:0.13.8
[warn]        +- org.scala-sbt:io:0.13.8
[warn]        +- org.scala-sbt:classpath:0.13.8
[warn]        +- org.scala-sbt:incremental-compiler:0.13.8
[warn]        +- org.scala-sbt:persist:0.13.8
[warn]        +- org.scala-sbt:compiler-integration:0.13.8
[warn]        +- org.scala-sbt:actions:0.13.8
[warn]        +- org.scala-sbt:main:0.13.8
[warn]        +- org.scala-sbt:sbt:0.13.8
[warn]        +- default:first-app-build:0.1-SNAPSHOT (sbtVersion=0.13, scalaVersion=2.10)
[warn]      org.scala-lang:scala-compiler:2.10.4
[warn]        +- org.scala-sbt:classpath:0.13.8
[warn]        +- org.scala-sbt:incremental-compiler:0.13.8
[warn]        +- org.scala-sbt:persist:0.13.8
[warn]        +- org.scala-sbt:compiler-integration:0.13.8
[warn]        +- org.scala-sbt:actions:0.13.8
[warn]        +- org.scala-sbt:main:0.13.8
[warn]        +- org.scala-sbt:sbt:0.13.8
[warn]        +- default:first-app-build:0.1-SNAPSHOT (sbtVersion=0.13, scalaVersion=2.10)
sbt.ResolveException: unresolved dependency: org.scala-lang#scala-library;2.10.4: configuration not found in org.scala-lang#scala-library;2.10.4: 'master(compile)'. Missing configuration: 'compile'. It was required from com.typesafe#npm_2.10;1.1.1 compile
unresolved dependency: org.scala-lang#scala-compiler;2.10.4: configuration not found in org.scala-lang#scala-compiler;2.10.4: 'master(compile)'. Missing configuration: 'compile'. It was required from com.typesafe.play#twirl-compiler_2.10;1.1.1 compile
    at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:291)
    at sbt.IvyActions$$anonfun$updateEither$1.apply(IvyActions.scala:188)
    at sbt.IvyActions$$anonfun$updateEither$1.apply(IvyActions.scala:165)
    at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:155)
    at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:155)
    at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:132)
    at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:57)
    at sbt.IvySbt$$anon$4.call(Ivy.scala:65)
    at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93)
    at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78)
    at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97)
    at xsbt.boot.Using$.withResource(Using.scala:10)
    at xsbt.boot.Using$.apply(Using.scala:9)
    at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58)
    at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48)
    at xsbt.boot.Locks$.apply0(Locks.scala:31)
    at xsbt.boot.Locks$.apply(Locks.scala:28)
    at sbt.IvySbt.withDefaultLogger(Ivy.scala:65)
    at sbt.IvySbt.withIvy(Ivy.scala:127)
    at sbt.IvySbt.withIvy(Ivy.scala:124)
    at sbt.IvySbt$Module.withModule(Ivy.scala:155)
    at sbt.IvyActions$.updateEither(IvyActions.scala:165)
    at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1369)
    at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1365)
    at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$87.apply(Defaults.scala:1399)
    at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$87.apply(Defaults.scala:1397)
    at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:37)
    at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1402)
    at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1396)
    at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:60)
    at sbt.Classpaths$.cachedUpdate(Defaults.scala:1419)
    at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1348)
    at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1310)
    at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
    at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
    at sbt.std.Transform$$anon$4.work(System.scala:63)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
    at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
    at sbt.Execute.work(Execute.scala:235)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
    at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
    at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[error] (*:update) sbt.ResolveException: unresolved dependency: org.scala-lang#scala-library;2.10.4: configuration not found in org.scala-lang#scala-library;2.10.4: 'master(compile)'. Missing configuration: 'compile'. It was required from com.typesafe#npm_2.10;1.1.1 compile
[error] unresolved dependency: org.scala-lang#scala-compiler;2.10.4: configuration not found in org.scala-lang#scala-compiler;2.10.4: 'master(compile)'. Missing configuration: 'compile'. It was required from com.typesafe.play#twirl-compiler_2.10;1.1.1 compile
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?

Here is build.sbt

name := """first-app"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayJava)

scalaVersion := "2.11.6"

libraryDependencies ++= Seq(
  javaJdbc,
  cache,
  javaWs
)

// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator

And build.properties

#Activator-generated Properties
#Sat Aug 01 03:30:20 BRT 2015
template.uuid=4908845b-9453-410b-af0f-404c1440dff1
sbt.version=0.13.8

And plugins.sbt

// The Play plugin
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.2")

// Web plugins
addSbtPlugin("com.typesafe.sbt" % "sbt-coffeescript" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6")
addSbtPlugin("com.typesafe.sbt" % "sbt-jshint" % "1.0.3")
addSbtPlugin("com.typesafe.sbt" % "sbt-rjs" % "1.0.7")
addSbtPlugin("com.typesafe.sbt" % "sbt-digest" % "1.1.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-mocha" % "1.1.0")

// Play enhancer - this automatically generates getters/setters for public fields
// and rewrites accessors of these fields to use the getters/setters. Remove this
// plugin if you prefer not to have this feature, or disable on a per project
// basis using disablePlugins(PlayEnhancer) in your build.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-play-enhancer" % "1.1.0")

// Play Ebean support, to enable, uncomment this line, and enable in your build.sbt using
// enablePlugins(SbtEbean). Note, uncommenting this line will automatically bring in
// Play enhancer, regardless of whether the line above is commented out or not.
// addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "1.0.0")

I tried to separate the lines in this last file with blank lines. Nothing changes.

I tried to Google this, but I couldn't find anything like it. I'd very much appreciate it if somebody could help me with this issue. And, finally, I'd like to thank all the readers who dedicated their time to reading this question.

edit: I just realized I was using /usr/default/jre/bin/javac. I fixed that to /usr/jdk1.8.0_51/jre/bin/javac when I tried Java SDK 8.

by Guilherme Scofano at August 03, 2015 01:12 AM

QuantOverflow

Given cash flows, what is the interest rate if the period is in days (10 day period)

I am presented with an investment opportunity where I am given #481,000 on day 1. Thereafter, I am required to give back #50,000 every 10 days for 100 days (10 * 50,000 = 500,000).

How do I calculate the interest rate I am paying?

I am guessing I have to use the present value of annuity problem to find out the interest rate.

So, my present value is #481,000. My "annuity" is 50000 every 10 days. First payment is due on the 10th day. Last payment is due on day 100.

Plugging the above values into WolframAlpha, I get 0.7107% for the interest rate per 10-day period. I divide that by 10 to get the per-day interest rate and multiply by 365 to get 25.94%.

I am surprised to see the above answer. It is a lot more than 14% which is what the rate would be if I were to pay 500000 at the end of 100 days. Is my reasoning incorrect?
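
To sanity-check my numbers (this is just my own sketch, not a statement about which annualization convention is right), solving the annuity present-value equation 481,000 = 50,000 * (1 - (1+i)^(-10)) / i for the 10-day rate i by bisection gives roughly the same figures:

// present value of 10 payments of 50,000, one per period, at per-period rate i
def pvAnnuity(i: Double, payment: Double = 50000.0, n: Int = 10): Double =
  payment * (1 - math.pow(1 + i, -n)) / i

// bisection: PV decreases as i grows, so keep the side where PV is still above the target
def solveRate(target: Double): Double = {
  var lo = 1e-9
  var hi = 1.0
  for (_ <- 1 to 100) {
    val mid = (lo + hi) / 2
    if (pvAnnuity(mid) > target) lo = mid else hi = mid
  }
  (lo + hi) / 2
}

val i = solveRate(481000.0)  // about 0.0071 per 10-day period, i.e. roughly 0.71%
println(i, i / 10 * 365)     // the simple annualization above then gives roughly 0.26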

by newbie at August 03, 2015 12:58 AM

Planet Theory

L-Visibility Drawings of IC-planar Graphs

Authors: Giuseppe Liotta, Fabrizio Montecchiani
Download: PDF
Abstract: An IC-plane graph is a topological graph where every edge is crossed at most once and no two crossed edges share a vertex. We show that every IC-plane graph has a visibility drawing where every vertex is an L-shape, and every edge is either a horizontal or vertical segment. As a byproduct of our drawing technique, we prove that an IC-plane graph has a RAC drawing in quadratic area with at most two bends per edge.

August 03, 2015 12:44 AM

Auditable Versioned Data Storage Outsourcing

Authors: Ertem Esiner, Anwitaman Datta
Download: PDF
Abstract: Auditability is crucial for data outsourcing, facilitating accountability and identifying data loss or corruption incidents in a timely manner, reducing in turn the risks from such losses. In recent years, in synch with the growing trend of outsourcing, a lot of progress has been made in designing probabilistic (for efficiency) provable data possession (PDP) schemes. However, even the recent and advanced PDP solutions that do deal with dynamic data, do so in a limited manner, and for only the latest version of the data. A naive solution treating different versions in isolation would work, but leads to tremendous overheads, and is undesirable. In this paper, we present algorithms to achieve full persistence (all intermediate configurations are preserved and are modifiable) for an optimized skip list (known as FlexList) so that versioned data can be audited. The proposed scheme provides deduplication at the level of logical, variable sized blocks, such that only the altered parts of the different versions are kept, while the persistent data-structure facilitates access (read) of any arbitrary version with the same storage and process efficiency that state-of-the-art dynamic PDP solutions provide for only the current version, while commit (write) operations incur around 5% additional time. Furthermore, the time overhead for auditing arbitrary versions in addition to the latest version is imperceptible even on a low-end server...

August 03, 2015 12:43 AM

On the Displacement for Covering a Unit Interval with Randomly Placed Sensors

Authors: Rafał Kapelko, Evangelos Kranakis
Download: PDF
Abstract: Consider $n$ mobile sensors placed independently at random with the uniform distribution on a barrier represented as the unit line segment $[0,1]$. The sensors have identical sensing radius, say $r$. When a sensor is displaced on the line a distance equal to $d$ it consumes energy (in movement) which is proportional to some (fixed) power $a > 0$ of the distance $d$ traveled. The energy consumption of a system of $n$ sensors thus displaced is defined as the sum of the energy consumptions for the displacement of the individual sensors.

We focus on the problem of energy efficient displacement of the sensors so that in their final placement the sensor system ensures coverage of the barrier and the energy consumed for the displacement of the sensors to these final positions is minimized in expectation. In particular, we analyze the problem of displacing the sensors from their initial positions so as to attain coverage of the unit interval and derive trade-offs for this displacement as a function of the sensor range. We obtain several tight bounds in this setting thus generalizing several of the results of [8] to any power $a >0$.

August 03, 2015 12:41 AM

Parameterized lower bound and improved kernel for Diamond-free Edge Deletion

Authors: R. B. Sandeep, Naveen Sivadasan
Download: PDF
Abstract: A diamond is a graph obtained by removing an edge from a complete graph on four vertices. A graph is diamond-free if it does not contain an induced diamond. The Diamond-free Edge Deletion problem asks to find whether there exist at most $k$ edges in the input graph whose deletion results in a diamond-free graph. The problem is known to be fixed-parameter tractable and a polynomial kernel of $O(k^5)$ vertices is found by Y. Cai [MPhil Thesis, 2012].

In this paper, we give an improved kernel of $O(k^3)$ vertices for Diamond-free Edge Deletion. We complement the result by proving that the problem is NP-complete and cannot be solved in time $2^{o(k)}\cdot n^{O(1)}$, unless Exponential Time Hypothesis fails.

August 03, 2015 12:41 AM

Bidirectional PageRank Estimation: From Average-Case to Worst-Case

Authors: Peter Lofgren, Siddhartha Banerjee, Ashish Goel
Download: PDF
Abstract: We present a new algorithm for estimating the Personalized PageRank (PPR) between a source and target node on undirected graphs, with sublinear running-time guarantees over the worst-case choice of source and target nodes. Our work builds on a recent line of work on bidirectional estimators for PPR, which obtained sublinear running-time guarantees but in an average case sense, for a uniformly random choice of target node. Crucially, we show how the reversibility of random walks on undirected networks can be exploited to convert average-case to worst-case guarantees. While past bidirectional methods combine forward random walks with reverse local pushes, our algorithm combines forward local pushes with reverse random walks. We also modify our methods to estimate random-walk probabilities for any length distribution, thereby obtaining fast algorithms for estimating general graph diffusions, including the heat kernel, on undirected networks. Whether such worst-case running-time results extend to general graphs, or if PageRank computation is fundamentally easier on undirected graphs as opposed to directed graphs, remains an open question.

August 03, 2015 12:40 AM

The Complexity of Some Combinatorial Puzzles

Authors: Holger Petersen
Download: PDF
Abstract: We show that the decision versions of the puzzles Knossos and The Hour-Glass are complete for NP.

August 03, 2015 12:40 AM

TheoryOverflow

Data Mining of self-replicators

My current (very limited) understanding of the creative process that leads to the design of self-replicators is that any particular self-replicator, like Universal Constructor, Langton's loop or Evoloop, is designed by painstaking, lengthy, meticulous design/engineering, which often involves composing units designed by previous mathematicians/hobbyists into more advanced and better behaved self-replicators. In other words, I believe that the current process of designing a self-replicator is the following:

  1. Take a self-replicator somebody designed before.
  2. Get an idea of improving it, based on some theoretical research that somebody did, a gut feeling, an idea that's been floated around, a dream you had last night etc. The idea should improve the self-replicator with respect to some property that is commonly recognised to be worth improving (lower period, less fragility, sexual reproduction, etc.).
  3. Try to make it work by working through multiple versions of the design, continuously refining the idea, and doing a ton of micro-optimisation. If you succeed go to step 1, and try to improve your new replicator. If you fail, go to step 2 and pick a different idea.

This method obviously works very well, as shown by the progress in the field, and is not very different from how progress in mathematics works in general. However, I've been thinking lately of a different approach to the problem, and I couldn't find any prior art trying this approach.

The process, let's call it "Self-Replicator Data-Mining", would be the following:

  1. Take the largest grid you can quickly evolve on your hardware. Size would most likely be limited by the amount of RAM you can devote to a GPU shader, which, assuming it to be 12 GB, gives a grid of about 100,000^2 squares.
  2. Pick a "rich" set of rules, like Brian's Brain, to get a lot of chaos (you'll have to work on a torus).
  3. Run it for about 8 months, periodically saving the state to a hard drive for backups. A back-of-envelope calculation shows that this would give you about a billion steps of a simulation.
  4. In the meantime, design an algorithm which can recognise a self-replicator by looking at the grid after the nth iteration. The algorithm might be very slow, because it only needs to work once. It does sound like the biggest challenge -- and probably would involve some machine learning/pattern recognition. In principle a parent can replicate to a slightly mutated child, which would make data mining of self-replicators even more difficult.
  5. Publish your newly discovered self-replicator(s) if you manage to find any. Or go to step 2 and pick a different set of rules, or wait for a graphics card with 120 GB of RAM, or run your simulation for 8 years, or abandon the method entirely in favour of something more fruitful.

Has anything like this ever been tried? Is it a path worth pursuing or are there any obvious flaws?

by picrin at August 03, 2015 12:34 AM

AWS

CloudWatch Logs Subscription Consumer + Elasticsearch + Kibana Dashboards

Many of the things that I blog about lately seem to involve interesting combinations of two or more AWS services and today’s post is no exception. Before I dig in, I’d like to briefly introduce all of the services that I plan to name-drop later in this post. Some of this will be review material, but I do like to make sure that every one of my posts makes sense to someone who knows little or nothing about AWS.

The last three items above have an important attribute in common — they can each create voluminous streams of event data that must be efficiently stored, indexed, and visualized in order to be of value.

Visualize Event Data
Today I would like to show you how you can use Kinesis and a new CloudWatch Logs Subscription Consumer to do just that. The subscription consumer is a specialized Kinesis stream reader. It comes with built-in connectors for Elasticsearch and S3, and can be extended to support other destinations.

We have created a CloudFormation template that will launch an Elasticsearch cluster on EC2 (inside of a VPC created by the template), set up a log subscription consumer to route the event data into Elasticsearch, and provide a nice set of dashboards powered by the Kibana exploration and visualization tool. We have set up default dashboards for VPC Flow Logs, Lambda, and CloudTrail; you can customize them as needed or create other new ones for your own CloudWatch Logs log groups.

The stack takes about 10 minutes to create all of the needed resources. When it is ready, the Output tab in the CloudFormation Console will show you the URLs for the dashboards and administrative tools:

The stack includes versions 3 and 4 of Kibana, along with sample dashboards for the older version (if you want to use Kibana 4, you’ll need to do a little bit of manual configuration). The first sample dashboard shows the VPC Flow Logs. As you can see, it includes a considerable amount of information:

The next sample displays information about Lambda function invocations, augmented by data generated by the function itself:

The final three columns were produced by the following code in the Lambda function. The function is processing a Kinesis stream, and logs some information about each invocation:

exports.handler = function(event, context) {
    var start = new Date().getTime();
    var bytesRead = 0;

    event.Records.forEach(function(record) {
        // Kinesis data is base64 encoded so decode here
        payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
        bytesRead += payload.length;

        // log each record
        console.log(JSON.stringify(record, null, 2));
    });

    // collect statistics on the function's activity and performance
    console.log(JSON.stringify({ 
        "recordsProcessed": event.Records.length,
        "processTime": new Date().getTime() - start,
        "bytesRead": bytesRead,
    }, null, 2));

    context.succeed("Successfully processed " + event.Records.length + " records.");
};

There’s a little bit of magic happening behind the scenes here! The subscription consumer noticed that the log entry was a valid JSON object and instructed Elasticsearch to index each of the values. This is cool, simple, and powerful; I’d advise you to take some time to study this design pattern and see if there are ways to use it in your own systems.

For more information on configuring and using this neat template, visit the CloudWatch Logs Subscription Consumer home page.

Consume the Consumer
You can use the CloudWatch Logs Subscription Consumer in your own applications. You can extend it to add support for other destinations by adding another connector (use the Elasticsearch and S3 connectors as examples and starting points).

— Jeff;

by Jeff Barr at August 03, 2015 12:25 AM

CompsciOverflow

Constructing a PDA for the language $\{a^m b^n : m < 2n < 3m \}$

I'm having a lot of trouble constructing a PDA for the language: \begin{equation*} \{a^m b^n : m < 2n < 3m \} \end{equation*}

I know if I push a symbol for each $a$ I see, then pop 2 symbols for each $b$ I see, then I should run out to satisfy the $m < 2n$ part of the inequality. But I really don't understand how to include the requirement $2n < 3m$. I'm assuming it has something to do with clever branching based on non-determinism, but I can't wrap my head around it. Any help would be really appreciated.
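
A small rearrangement may help to see exactly what has to be checked (this is only a restatement of the condition, not a construction): $$ m < 2n < 3m \iff \frac{m}{2} < n < \frac{3m}{2}, $$ so the number of $b$'s must be strictly more than half, and strictly less than one and a half times, the number of $a$'s; that is, the automaton has to enforce a lower bound and an upper bound on $n$ at the same time.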

by Kevin G at August 03, 2015 12:21 AM

StackOverflow

Why does ZeroMQ need two ports to establish a PUB/SUB channel? [on hold]

While Redis or any other broker uses only a single port to connect to and provides a PUB/SUB mechanism, why does ZeroMQ need two distinct ports (one for PUB, one for SUB)?

Here is a relevant StackOverflow question/answer

by ceremcem at August 03, 2015 12:08 AM

HN Daily

August 02, 2015

/r/scala

StackOverflow

Initialize companion object without accessing in Scala

If you run this simple code, you will see the following:

object A {
  println("from A")
  var x = 0
}

object B {
  println("from B")
  A.x = 1
}

object Test extends App {
  println(A.x)
}

// Result:
// from A
// 0

As you can guess, Scala initializes objects lazily. Object B is never initialized, so this does not work as expected. My question: what tricks can I use to initialize object B without accessing it? The first trick I can think of is to extend the object with some trait and use reflection to initialize every object that extends that specific trait. I think a more elegant way is to annotate the object with a macro annotation:

@init
object B {
  println("from B")
  A.x = 1
}

class init extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro init.impl
}

object init {
  def impl(c: whitebox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._

    // what should i do here ?
  }
}

But I'm a little bit confused. How do I invoke methods (in order to initialize) on the annotated object from the macro impl method?
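
For reference, a minimal sketch of the reflection-based trick mentioned above: force a top-level object to initialize by touching its module instance at runtime. The helper name is mine, not part of any library.

import scala.reflect.runtime.{universe => ru}

object ForceInit {
  // Touching `instance` runs the object's body if it has not run yet.
  def init(fullName: String): Unit = {
    val mirror = ru.runtimeMirror(getClass.getClassLoader)
    mirror.reflectModule(mirror.staticModule(fullName)).instance
  }
}

// ForceInit.init("B")  // prints "from B" and sets A.x = 1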

by Daryl at August 02, 2015 11:47 PM

How to execute on start code in scala Play! framework application?

I need to execute code that launches scheduled jobs when the application starts. How can I do this? Thanks.
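
In Play 2.x (before 2.5) the usual hook for this is GlobalSettings.onStart; here is a minimal sketch, with the scheduling itself left as a comment since it depends on the job library you use:

import play.api.{Application, GlobalSettings, Logger}

object Global extends GlobalSettings {
  override def onStart(app: Application): Unit = {
    Logger.info("Application has started, launching scheduled jobs")
    // e.g. schedule the jobs here, using Akka's scheduler or your scheduling library of choice
  }
}

By default the Global object has to live in the root package (no package declaration), or be pointed to via the application.global setting.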

by Peter at August 02, 2015 11:40 PM

NoSuchMethodError writing Avro object to HDFS using Builder

I'm getting this exception when writing an object to HDFS:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.avro.Schema$Parser.parse(Ljava/lang/String;[Ljava/lang/String;)Lorg/apache/avro/Schema;
        at com.blah.SomeType.<clinit>(SomeType.java:10)

The line it is referencing in the generated code is this:

public class SomeType extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
  public static final org.apache.avro.Schema SCHEMA$ = new org.apache.avro.Schema.Parser().parse ..........

And the call in my code is this:

val someTypeBuilder = SomeType.newBuilder()
      .setSomeField("some value")
      // set a whole load of other fields

Calling newBuilder() in test code causes no issues at all.

The jar is being run on an HDFS node using the hadoop jar command.

Any ideas what might be going wrong here?
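
One thing worth checking (this is a guess at the cause, not a confirmed fix): a NoSuchMethodError at that call site usually means an older Avro jar, for example one bundled with the Hadoop distribution, is shadowing the version the code was compiled against. A quick way to see which jar the Schema class is actually loaded from when running on the cluster:

// Prints the location of the jar that provided org.apache.avro.Schema at runtime
println(classOf[org.apache.avro.Schema].getProtectionDomain.getCodeSource.getLocation)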

by jimmy_terra at August 02, 2015 11:38 PM

/r/netsec

StackOverflow

Ansible - How to keep appending new keys to a dictionary when using set_fact module with with_items?

I want to add keys to a dictionary when using set_fact with with_items. This is a small POC which will help me complete some other work. I have tried to generalize the POC so as to remove all the irrelevant details from it.

When I execute the following code, it shows a dictionary with only one key, which corresponds to the last item of with_items. It seems that it is re-creating a new dictionary, or maybe overwriting the existing dictionary, for every item in with_items. I want a single dictionary with all the keys.

Code:

---
- hosts: localhost
  connection: local
  vars:
      some_value: 12345
      dict: {}
  tasks:
     - set_fact: {
          dict: "{
             {{ item }}: {{ some_value }}
             }"
            }
       with_items:
          - 1
          - 2
          - 3
     - debug: msg="{{ dict }}"

by Anand Patel at August 02, 2015 10:46 PM

How to run Multi threaded jobs in apache spark using scala or python?

I am facing a problem related to concurrency in Spark which is stopping me from using it in production, but I know there is a way out of it. I am trying to run Spark ALS on 7 million users for a billion products using order history. First I am taking a list of distinct users and then running a loop over these users to get recommendations, which is a pretty slow process and will take days to get recommendations for all users. I tried taking the cartesian product of users and products to get recommendations for all at once, but to feed this to Elasticsearch I have to filter and sort records for each user, and only then can I feed it to Elasticsearch to be consumed by other APIs.

So please suggest a solution which is scalable for such a use case and can be used in production with real-time recommendations.

Here is my code snippet in scala which will give you an idea how I am currently approaching to solve the problem:

  //    buy_values -> RDD with Rating(<int user_id>, <int product_id>, <double rating>)
  def recommend_for_user(user: Int): Unit = {
      println("Recommendations for User ID: " + user);
      // Product IDs which are not bought by user 
      val candidates = buys_values
        .filter(x => x("customer_id").toString.toInt != user)
        .map(x => x("product_id").toString.toInt)
        .distinct().map((user, _))
      // find 30 products with top rating
      val recommendations = bestModel.get
        .predict(candidates)
        .takeOrdered(30)(Ordering[Double].reverse.on(x => x.rating))

      var i = 1
      var ESMap = Map[String, String]()
      recommendations.foreach { r =>
        ESMap += r.product.toString -> bitem_ids.value(r.product)
      }
      //  push to elasticsearch with user as id
      client.execute {
        index into "recommendation" / "items" id user fields ESMap
      }.await
      // remove candidate RDD from memory
      candidates.unpersist()
  }
  // iterate on each user to get recommendations for the user [slow process]
  user_ids.foreach(recommend_for_user)
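
For what it's worth, newer MLlib versions expose a batch recommendation helper that avoids the driver-side loop entirely; a minimal sketch, assuming Spark 1.4+ (where recommendProductsForUsers exists):

import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.recommendation.{MatrixFactorizationModel, Rating}

// One distributed pass that produces the top 30 products for every user at once,
// instead of calling predict per user from the driver.
def topRecommendations(model: MatrixFactorizationModel): RDD[(Int, Array[Rating])] =
  model.recommendProductsForUsers(30)

Filtering out already-bought products and pushing each user's list to Elasticsearch can then happen inside a mapPartitions over that RDD, rather than one user at a time from the driver.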

by Suraj at August 02, 2015 10:44 PM

Dave Winer

Braintrust: Updates on EC2 for Poets

Over the last few days I've posted a couple of pieces for the braintrust, asking questions about what desktop people like for Ubuntu, and soliciting advice for launching a Node app so it persists and stays out of the way. I've got some progress to report.

  1. I punted on installing a desktop at least for the first release. I never managed to get it working. Too complicated. Need to approach this again. Maybe the second time will be gold.

  2. I am running two processes on the poets machine, one that runs River4, accessible on port 80, though it's running on port 1337; and the other a background app that watches for updates from a fixed location, giving me the ability to update the River4 software while the servers are running. This was very important for the previous generation of EC2 for Poets.

  3. I am using the software suggested by Dan MacTough to daemonize a Node app. It was simple and worked the first time. I also had good luck with a suggestion from Andrew Shell for node-foreman. I'm likely to use this on my own server first, so I can get comfortable. It looks solid, well-designed and simple.

  4. On the other hand there still seems to be a role for forever. I am both deploying and developing on my production servers. Always making changes. It has to be easy for me to get a process to restart. Forever does that well, and it appears perhaps node-foreman doesn't? Not sure.

  5. I thought maybe if I learned how to run shell commands from my Node code that I could have one thing that runs at startup which launches lots of other things, but I am being reminded of how important the concept of a terminal session is on Unix. Anything you launch in your app only has an existence within your app. Just had a thought that maybe using fork instead of exec might get me the performance I want. Hmmm.

Anyway, I'll let you know how it progresses.

August 02, 2015 10:35 PM

/r/netsec

StackOverflow

How do I get a list of asset files when Play application runs from a dist?

I've been developing a simple Play! 2.3 application (using Activator) that loads its data from a set of directories on the filesystem (it's a prototype app and I don't want to fiddle with DBs right now).

Everything is working fine in development (i.e. activator run) and I can simply do this:

private val setsDir: File = new File("app/assets/m53sets")
require(setsDir.isDirectory, "Not a directory: " + setsDir)
def allSets: Seq[String] = setsDir.listFiles filter { _.isDirectory  } map { _.getName } sorted

This is however not working as soon as I deploy the app using Dokku (on a Digital Ocean droplet), because the files are all packaged in a JAR that obviously can't be accessed using the File API.

I know that one can use something like Play.resourceAsStream("foo/bar/baz.txt") when one needs to access a single file, but I need to enumerate all the files first, and I'd rather not hardcode the set of directories and the files within them that I will load at runtime.

How can I solve this problem?
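
One direction, as a sketch rather than a definitive fix: when the app runs from the packaged jar, enumerate the jar's entries under a known prefix instead of using the File API. The prefix and the assumption that the code source is a single jar are mine:

import java.io.File
import java.util.jar.JarFile
import scala.collection.JavaConverters._

// Lists resource names under `prefix` inside the jar this class was loaded from.
def listJarResources(prefix: String): Seq[String] = {
  val location = getClass.getProtectionDomain.getCodeSource.getLocation
  val jar = new JarFile(new File(location.toURI))
  try jar.entries.asScala.map(_.getName).filter(_.startsWith(prefix)).toList
  finally jar.close()
}

// e.g. listJarResources("m53sets/") -- the actual prefix depends on how the assets are packaged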

by user180940 at August 02, 2015 10:22 PM

QuantOverflow

Short put option in merton model

Can someone give me an intuitive understanding of why the Merton model models the value of the debt from the lender's point of view as a short put with a risk free bond? I'm not well versed in this so I'd appreciate answers that are not heavy on math; I'm just looking for an intuitive understanding. Thanks.
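
Not an answer, just the standard identity the intuition usually hangs on: at maturity $T$ the lender receives $\min(V_T, K)$, where $V_T$ is the firm's asset value and $K$ the face value of the debt. Since $$ \min(V_T, K) = K - \max(K - V_T, 0), $$ that payoff is exactly a risk-free bond paying $K$ minus the payoff of a put on the firm's assets struck at $K$, so today $D_0 = Ke^{-rT} - P(V_0, K, T)$ (with $r$ the risk-free rate): lending to the firm is like holding the riskless bond and having written (shorted) that put.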

by AfterWorkGuinness at August 02, 2015 10:17 PM

StackOverflow

Swift Functional Programming - is there a better way to translate a nested for loop than two map calls

I've transformed a nested for loop into a nested map call. I was wondering if there is a more elegant way to implement it.

Here is a function that takes in an Array of Items and an Array of functions (Item -> Item) and returns an array with all of the functions applied to each item:

typealias Item = Dictionary<String, String>    

func updatedItems(items: Array<Item>, fns: Array<Item -> Item>) -> Array<Item> {
    return items.map {
        item in

        var itemCopy = item
        fns.map { fn in itemCopy = fn(itemCopy) }

        return itemCopy
    }
}

by Harlan Kellaway at August 02, 2015 10:14 PM

/r/freebsd

StackOverflow

Running tests multiple times with different fixtures

If I pass use-fixtures multiple fixtures, it will close over them one after another:

(def ^:dynamic *path* nil)

(defn sun [f]
  (println "sun setup" *path*)
  (binding [*path* "sun"]
    (f))
  (println "sun cleanup"))

(defn rain [f]
  (println "rain setup" *path*)
  (binding [*path* "rain"]
    (f))
  (println "rain cleanup"))

(use-fixtures :once sun rain)

(deftest sometest1
         (println "sometest1" *path*))

(deftest sometest2
         (println "sometest2" *path*))

(run-tests)

Testing scratchpad.core
sun setup nil
rain setup sun
sometest1 rain
sometest2 rain
rain cleanup
sun cleanup

This is useful, but how can I instead get something like this:

Testing scratchpad.core
sun setup nil
sometest1 sun
sometest2 sun
sun cleanup
rain setup nil
sometest1 rain
sometest2 rain
rain cleanup

by Pol at August 02, 2015 10:09 PM

Planet Emacsen

Grant Rettke: Easily Go To and Return From Headlines in Org-Mode

Quit using goto-line and isearch to navigate in your Org-Mode document. I didn’t want to use Helm or Imenu to do it and Org-Mode has a built in solution with org-goto. Be sure to bind the “pop” key very close-by to make it symmetrical and fast.

(define-key org-mode-map (kbd "s-u") #'org-goto)
(define-key org-mode-map (kbd "s-U") #'org-mark-ring-goto)

by Grant at August 02, 2015 10:08 PM

/r/netsec

QuantOverflow

Covariance structure of call option surface

Assume the observed call option prices $C(K_i,T_i)$ for $i = 1,\dots,N$ are disturbed by some unknown measurement noise $\epsilon$. What would an appropriate covariance structure be for $\epsilon$?

In literature I often see authors making the simplified assumption that $\epsilon_i$ are independent and identically distributed Gaussian with some scaling variance that could depend on for example the bid-ask spread. This seems very unreasonable since if one considers two options $C(K,T)$ and $C(K + \delta,T)$ then in the limit $\delta \to 0$ they should be perfectly correlated.

Has anyone read any literature that discusses these types of modelling choices in more detail?
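
For what it's worth, one illustrative parametrisation (my own example, not taken from a specific paper) that respects the limit described above is a kernel-style correlation in strike and maturity, $$ \operatorname{Cov}(\epsilon_i, \epsilon_j) = \sigma_i \sigma_j \exp\!\bigl(-\lambda_K |K_i - K_j| - \lambda_T |T_i - T_j|\bigr), $$ with $\sigma_i$ scaled by, say, the bid-ask spread; as $|K_i - K_j| \to 0$ and $|T_i - T_j| \to 0$ the correlation tends to 1, recovering the perfectly-correlated limit, while i.i.d. noise is the special case $\lambda_K, \lambda_T \to \infty$.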

by Lotus3000 at August 02, 2015 09:52 PM

StackOverflow

scopt: Is it possible to refactor single Config into smaller Configs?

I have been using scopt with a single case class.

case class ArgsConfig( arg1: type1 = value1, arg2: type2 = value2, ... )

I now have a large number of possible arguments which makes the class hard to read. The arguments can be grouped logically into smaller groups, for example, arguments dealing with using Spark, etc.

Is it possible to refactor the single Config into smaller Configs while still handling a single command line in a manner equivalent to using a single large Config?

Thanks.
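
One way this is often structured (a sketch with hypothetical field names, assuming scopt 3.x's copy-based actions): nest the logically related arguments in their own case classes and have each option copy into the relevant sub-config, so the command line is still parsed into a single value.

case class SparkConfig(master: String = "local[*]", executorMemory: String = "2g")
case class ArgsConfig(input: String = "", spark: SparkConfig = SparkConfig())

val parser = new scopt.OptionParser[ArgsConfig]("myapp") {
  opt[String]("input").action((x, c) => c.copy(input = x))
  opt[String]("spark-master").action((x, c) => c.copy(spark = c.spark.copy(master = x)))
  opt[String]("executor-memory").action((x, c) => c.copy(spark = c.spark.copy(executorMemory = x)))
}

// parser.parse(args, ArgsConfig()) still returns a single Option[ArgsConfig]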

by user2947133 at August 02, 2015 09:52 PM

Planet Theory

Sanjeev Arora on rethinking the graduate algorithms course

[Below is a guest post from Sanjeev Arora on his redesign of the traditional graduate algorithms course to be a better match for today’s students. –Boaz]

For the last two years I have tried new ideas in teaching algorithms at the graduate level. The course is directed at first year CS grads, but is also taken by grads from related disciplines, and many advanced undergrads. (Links to course homepage, and single file with all course materials.)

The course may be interesting to you if, like me, you are rethinking the traditional choice of topics. The following were my thoughts behind the redesign:

  • The environment for algorithms design and use has greatly changed since the 1980s. Problems tend to be less cleanly stated (as opposed to “bipartite matching” or “maximum flow”) and often involve high-dimensional and/or noisy inputs. Continuous optimization is increasingly important.
  • As the last theory course my students (grad or undergrad) might take for the rest of their lives, it should somewhat fill in holes in their undergraduate CS education: information/coding theory, economic utility and game theory, decision-making under uncertainty, cryptography (anything beyond the RSA cryptosystem), etc.
  • Programming assignments need to be brought back! CS students like hands-on learning: an algorithm becomes real only once they see it run on real data. Also, computer scientists today —whether in industry or academia—rely on subroutine libraries and scripting languages. A few lines in Matlab and Scipy can be written in minutes and run on datasets of millions or billions of numbers. No JAVA or C++ needed! Algorithms education should weave in such powerful tools. It is beneficial even for theory students to play with them.

Sample programming assignments: (a) (compression via SVD) Given a 512 x 512 grayscale image, treat it as a matrix and take its rank-k approximation via SVD, for k = 15, 30, 45, 60. Use mat2gray in Matlab to render this new matrix as a grayscale image and see what k suffices for realistic recovery. (b) You are given S&P stock price data for 10 years. Run online gradient descent to manage a portfolio (Lecture 16), and report what returns you get with various parameter settings.
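
As an aside, the SVD assignment translates almost line for line into other environments; here is a sketch in Scala with Breeze (the library choice is mine, the course itself suggests Matlab/Scipy):

import breeze.linalg._

// Rank-k approximation of a matrix via the SVD: keep only the k largest singular values.
def rankKApprox(a: DenseMatrix[Double], k: Int): DenseMatrix[Double] = {
  val svd.SVD(u, s, vt) = svd(a)
  u(::, 0 until k) * diag(s(0 until k)) * vt(0 until k, ::)
}

// e.g. rankKApprox(image, 30) for a 512 x 512 grayscale image loaded as a matrix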

Students are allowed to do a final project in lieu of a final, and many choose to apply algorithms to some real world problem they are interested in. Sample projects are also listed on the course page.

I welcome your comments, suggestions, and links to other relevant course materials on the web!


by Boaz Barak at August 02, 2015 09:35 PM

Planet Theory

17 candidates, only 10 in the debate- what to do?


On Thursday Aug 6 there will be a Republican debate among 10 of the 17 (yes, 17) candidates for the Republican nomination.

1) There are 17 candidates. Here is how I remember them: I think of the map of the US and go down the east coast, then over to Texas, then up. That only works for the candidates that are or were Senators or Governors. I THEN list the outsiders. Hence my order is (listing their last job in government): George Pataki (Gov-NY), Chris Christie (Gov-NJ), Rick Santorum (Sen-PA), Rand Paul (Sen-KY), Jim Gilmore (Gov-VA), Lindsey Graham (Sen-SC), Jeb Bush (Gov-FL), Marco Rubio (Sen-FL), Bobby Jindal (Gov-LA), Ted Cruz (Sen-TX), Rick Perry (Gov-TX), Mike Huckabee (Gov-AR), Scott Walker (Gov-WI), John Kasich (Gov-OH), Donald Trump (businessman), Ben Carson (neurosurgeon), Carly Fiorina (businesswoman).
9 Govs, 5 Sens, 3 outsiders.

2) Having a debate with 17 candidates would be insane. Hence they decided a while back to have the main debate with the top 10 in the average of 5 polls, and also have a debate with everyone else. There are several problems with this: (a) candidates hovering around slots 9, 10, 11, 12 are closer together than the margin of error, (b) the polls are supposed to measure what the public wants, not dictate things, (c) the polls were likely supposed to determine who the serious candidates are, but note that Trump is leading the polls, so that's not quite right. QUESTION: Let's say that Chris Christie is at 2% with a margin of + or - 3%. Could he really be at -1%?

3) A better idea might be to RANDOMLY partition the field into two groups, one of size 8 and one of size 9, and have two debates that way. What randomizer would they use? This is small enough that they really could just put slips of paper in a hat and draw them. If they had many more candidates we might use Nisan-Wigderson.

4) How did they end up with 17 candidates?

a) Being a candidate is not a well-defined notion. What are the criteria to be a candidate? Could Lance Fortnow declare that he is a candidate for the Republican Nomination (or, for that matter, the Democratic nomination)? YES. He's even written some papers on Economics, so I'd vote for him over... actually, any of the 17. RUN LANCE RUN! So ANYONE who wants to run can! And they do! I'm not sure what they can do about this---it would be hard to define ``serious candidate'' rigorously.

b) The debate is in August but the Iowa Caucus isn't until Feb 1. So why have the debate now? I speculate that they wanted to thin out the field early, but this has the opposite effect--- LOTS of candidates now want to get into the debates.

c) (I've heard this) Campaign finance laws have been gutted by the Supreme Court, so if you just have ONE mega-wealthy donor you have enough money to run. Or you can fund yourself (NOTE: while Trump could fund himself, so far he hasn't had to, as the media is covering him so much).

d) Because there are so many, and no dominating front runner, they all think they have a shot at it. So nobody is dropping out. Having a lot of people running makes more people want to run. (All the cool kids are doing it!)


by GASARCH (noreply@blogger.com) at August 02, 2015 09:14 PM

CompsciOverflow

How can I convert a list with duplicates into a set for a reduction to the set cover problem?

I'm trying to come up with a reduction for a problem whose description is more or less identical to the first problem given here.

Here's a condensed version of the problem:
You're given a collection of refrigerator magnet letters that may contain duplicates. You have a limited vocabulary of words made of combinations of those letters (again, a word may contain the same letter twice). Can you choose words from the vocabulary to build with the magnets such that all of the magnets are used and each magnet is used in only one word?

It's suspiciously similar to the set cover problem, so I've been trying to come up with a way to convert (in polynomial time) a collection of symbols that potentially contains duplicates into a set. This would allow the collection of symbols to be used as U and the child's vocabulary to be used as S for set cover.

I've had no luck so far, so a hint or even a suggestion for a more appropriate NP-complete problem to reduce to would be greatly appreciated.

by wug at August 02, 2015 09:13 PM

Difference between $O(n^2)$ and $O(m)$ for algorithms on graphs

Given a graph $G$ with $n$ nodes and $m$ edges, if one algorithm solves a problem $X$ on $G$ with complexity $O(n^2)$, while another algorithm solves the same problem $X$ on $G$ but with complexity $O(m)$, which one is more efficient?

by Tanuzzo88 at August 02, 2015 08:59 PM

StackOverflow

How to compare two functions for equivalence, as in (λx.2*x) == (λx.x+x)?

Is there a way to compare two functions for equality? For example, (λx.2*x) == (λx.x+x) should return true, because those are obviously equivalent.

by Viclib at August 02, 2015 08:58 PM

Christian Neukirchen

02aug2015

by Christian Neukirchen (chneukirchen@gmail.com) at August 02, 2015 08:27 PM

Planet Clojure

Liberator: As easy as it gets

When I first started using liberator I had a hard time figuring out what was what….and at the same time learning clojure and trying to change my OOP brain into understanding functional code. I want to talk about liberator from that perspective and maybe it will help others too. :)

Liberator is a Clojure library for creating web APIs. You create your endpoints by defining resources. Liberator needs a routing library to direct traffic to Ring (Ring is something like Rack in Ruby). Compojure is popular, and there are others – liberator's docs mention bidi, which looks interesting – but I am going to use Compojure today. Let's start with a plain Compojure app; type:

lein new compojure simpleway

It creates a boilerplate web application with ring as the server. Lets try it out:

lein ring server

It (usually, unless you have something else running) starts on port 3000, but watch the screen when it starts up. It will say where it started. Then hit http://localhost:3000 to see a hello world message. Ok, it works! :)

Now let's add liberator; you can find out the current version on the project homepage, which at this time is 0.13.

add

[liberator "0.13"]

To project.clj. Be sure to use double quotes (I have to break this habit of single quoted strings I inherited from years of ruby) and put it anywhere in the list of dependencies. So it looks like this:

(defproject simpleway "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :min-lein-version "2.0.0"
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [compojure "1.3.1"]
                 [liberator "0.13"]
                 [ring/ring-defaults "0.1.2"]]
  :plugins [[lein-ring "0.8.13"]]
  :ring {:handler simpleway.handler/app}
  :profiles
    {:dev {:dependencies [[javax.servlet/servlet-api "2.5"]
                          [ring-mock "0.1.5"]]}})

Ok, that's all we need to change there. Open src/simpleway/handler.clj and add the following to the ns at the top:

[liberator.core :refer [defresource]]

I add it as the second to last thing in the :require statement. It looks like this:

(ns simpleway.handler
  (:require [compojure.core :refer :all]
            [compojure.route :as route]
            [liberator.core :refer [defresource]]
            [ring.middleware.defaults :refer [wrap-defaults site-defaults]]))

Ok, let's make our first resource:

add this to your file

(defresource hello-world-resource
  :handle-ok "Hello Clojure World!")

Add it before defroutes app-routes

then change that to read like this:

(defroutes app-routes
  (GET "/" [] hello-world-resource)
  (route/not-found "Not Found"))

changing the GET function to use your resource instead of just spitting out a string. Uhoh .. when you visit http://localhost:3000 you’ll see

No acceptable resource available.

Let’s tell the hello-world-resource what media types are available

(defresource hello-world-resource
  :available-media-types ["text/plain"]
  :handle-ok "Hello Clojure World!")

This is something I didn’t realize at first.. in a defresource each thing (or pairs of things, always in a pair) falls into one of four categories as best I can figure out:

  • representation (either :available-media-types (a vector of strings) or :as-response (a fn))
  • decision points (ends in ? or starts with accept, if, is, method, post, or put)
  • handler (starts with handle; a redirect also defines :location [url])
  • actions (:get!, :post!, :delete!, :patch!)

So now you have the simplest liberator resource which just returns text.

My thanks to Dar for his talk at AustinClojure which helped solidify some of these things :)

by Nola Stowe at August 02, 2015 08:07 PM

StackOverflow

org.slf4j.helpers.SubstituteLogger cannot be cast to ch.qos.logback.classic.Logger

I've seen some questions very similar to this one (like this one), but none of them received a good answer, or at least not one that explained or solved this problem.

I was able to create a very small project (basically 2 Scala classes - each with a logger - and 2 test classes) with a similar structure to my real project where I found the problem. The example is available here: project example

The problem happens when I run sbt test, resulting in the following error:

{14:43:41} (#47) ~/Desktop/logger-exp/log-exp@pedrorijo(master) $ sbt test
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=384m; support was removed in 8.0
[info] Loading global plugins from /Users/pedrorijo/.sbt/0.13/plugins
[info] Loading project definition from /Users/pedrorijo/Desktop/git/scala/logger-exp/log-exp/project
[info] Set current project to log-exp (in build file:/Users/pedrorijo/Desktop/git/scala/logger-exp/log-exp/)
[info] HelloTest:
[info] Exception encountered when attempting to run a suite with class name: HelloTest *** ABORTED ***
[info]   java.lang.ExceptionInInitializerError:
[info]   at HelloTest$$anonfun$1.apply$mcV$sp(HelloTest.scala:8)
[info]   at HelloTest$$anonfun$1.apply(HelloTest.scala:8)
[info]   at HelloTest$$anonfun$1.apply(HelloTest.scala:8)
[info]   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
[info]   at org.scalatest.Suite$class.withFixture(Suite.scala:1122)
[info]   ...
[info]   Cause: java.lang.ClassCastException: org.slf4j.helpers.SubstituteLogger cannot be cast to ch.qos.logback.classic.Logger
[info]   at com.example.Hello$.<init>(Hello.scala:8)
[info]   at com.example.Hello$.<clinit>(Hello.scala)
[info]   at HelloTest$$anonfun$1.apply$mcV$sp(HelloTest.scala:8)
[info]   at HelloTest$$anonfun$1.apply(HelloTest.scala:8)
[info]   at HelloTest$$anonfun$1.apply(HelloTest.scala:8)
[info]   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   ...
SLF4J: The following set of substitute loggers may have been accessed
SLF4J: during the initialization phase. Logging calls during this
SLF4J: phase were not honored. However, subsequent logging calls to these
SLF4J: loggers will work as normally expected.
SLF4J: See also http://www.slf4j.org/codes.html#substituteLogger
SLF4J: com.example.Hello$
14:43:52.846 [pool-6-thread-4-ScalaTest-running-WorldTest] INFO  com.example.World$ - LOGGING
[info] WorldTest:
[info] - test
[info] Run completed in 456 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 1, aborted 1
[info] Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
[info] *** 1 SUITE ABORTED ***
[error] Error during tests:
[error]     HelloTest
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 1 s, completed Aug 2, 2015 2:43:52 PM

but if I run each of the tests individually through:

sbt "testOnly HelloTest" and sbt "testOnly WorlTest" I get no error.

As far as I've seen this is not just a Scala issue; it seems it can also happen in Java. Furthermore, I've read that during initialization the loggers are replaced by a SubstituteLogger.

But I can't understand:

  1. Why does it only happen in tests, and only when running both tests?
  2. Why does it happen?
  3. How can I debug/solve this problem?

As I said before, I've created a repository with a working example (working, meaning with the error happening) here: project example

by pedrorijo91 at August 02, 2015 07:49 PM

/r/scala

Lobsters

/r/clojure

/r/freebsd

Is there a way to make a package use a newer version of a dependency?

I'm a Linux refugee (sort of) and I'm setting up a Vagrant FreeBSD box. So far I love the OS but I've run into a bump with pkg.

I'm using PostgreSQL and PHP so I do

$ pkg install postgresql94-server

$ pkg install php56-pgsql

The problem is that php56-pgsql depends on postgresql93-client while postgresql94-server depends on postgresql94-client. They conflict. So when php56-pgsql installs, it removes postgresql94-*.

As I would much prefer PostgreSQL 9.4, is there a way to make php56-pgsql use postgresql94-client? I realize I could solve this by going with PostgreSQL 9.3 but I don't necessarily want to do that.

submitted by Pyridin
[link] [6 comments]

August 02, 2015 07:12 PM

StackOverflow

How to set environmental variables using Ansible

I need to set variables like JAVA_HOME and update PATH. There are a number of ways of doing this. One way is to update the /etc/environment file and include a line for JAVA_HOME using the lineinfile module, and then run the command source /etc/environment directly on the guest OS (CentOS in my case).

Another way is to execute the export command e.g.

export JAVA_HOME=/usr/java/jre1.8.0_51
export PATH=$PATH:$JAVA_HOME

Is there a cleaner way to do this, given that all of these approaches require manipulating files and running commands directly on the OS to update the environment variables?

by Deepak Shenoy at August 02, 2015 07:10 PM

/r/compsci

[CS1] Understanding Rebalancing AVL Trees

Not sure if this is the right subreddit, but I just need help figuring something out in my first comp sci class. If you're given a set of values, say,

86, 25, 98, 83, 27, 90, 71, 94

how would you put those into an initially empty AVL tree? Then, when I go to rebalance the tree, if I have multiple nodes with a balance factor n{i} where |n| > 1 (with i some integer), how do I go about choosing an initial n{i} to begin the rebalancing?

Any help just understanding this on a conceptual level would be much appreciated. Thank you!

submitted by mathsorcerer
[link] [5 comments]

August 02, 2015 07:08 PM

StackOverflow

Ansible execute command locally and then on remote server

I am trying to start a server using the Ansible shell module with ipmitool and then make configuration changes on that server once it's up.

The server with Ansible installed also has ipmitool.

On the server with Ansible I need to execute ipmitool to start the target server and then execute playbooks on it.

Is there a way to execute local ipmi commands on the server running Ansible to start the target server through Ansible, and then execute all playbooks over SSH on the target server?

by Kevin Parker at August 02, 2015 07:06 PM

/r/netsec

CompsciOverflow

What randomness really is

I'm a Computer Science student and am currently enrolled in a System Simulation & Modelling course. It involves dealing with everyday systems around us and simulating them in different scenarios by generating random numbers from different distributions (IID uniform, Gaussian, etc.). I've been working on the boids project and a question just struck me: what exactly is "random"? I mean, for instance, every random number that we generate, even in our programming languages, say via the Math.random() method in Java, is essentially generated by following an "algorithm".

How do we really know that a sequence of numbers we produce is in fact random, and would that help us simulate a certain model as accurately as possible?

by user1892655 at August 02, 2015 06:57 PM

Chomsky hierarchy type determined by language

I have some modified automata, and the task is to assign to each one its type in the Chomsky hierarchy. All tasks are between type 3 and type 0, non-inclusive. For regular languages there are lots of tools and I can check them without problems; Turing Machine equivalence is also an easy task, and there will be no such examples.

Now the question: is it sufficient to show that the automaton can accept a specified language of the given type? From what I checked it would be sufficient to show equivalence to, for example, an NPDA, so I assume that if the machine handles a language that at least an NPDA accepts, it would be sufficient.

For example, if the machine can accept $a^nb^n$, it is type 2. If the machine can accept $a^nb^nc^nd^n$, is it type 1? If not, are there better examples of such languages, or what steps should I follow?

by EvilJS at August 02, 2015 06:57 PM

n log n = c. What are some good approximations of this?

I am currently looking into Big O notation and computational complexity.

Problem 1.1 in CLRS asks what seems a basic question, which is to get an intuition about how different algorithmic complexities grow with the size of the input.

The question asks:

For each function $f(n)$ and time $t$ in the following table, determine the largest size $n$ of a problem that can be solved in time $t$, assuming that the algorithm to solve the problem takes $f(n)$ microseconds.

The time periods are 1 second, 1 minute, 1 hour, 1 day, 1 month, 1 year, 1 century.

The functions $f(n)$ are seemingly common time complexities that arise in algorithms frequently, the list being:

$$ \log_2n, \quad \sqrt{n}, \quad n, \quad n \log_2 n, \quad n^2, \quad n^3, \quad 2^n \quad \text{and} \quad n!$$

Most are fairly straightforward algorithmic manipulations. I am struggling with two of these, and both for the same reason:

If $c$ is the time in microseconds, the two I am struggling with are $$ n \log_2 n = c$$ $$ n! \sim \sqrt{2\pi n} \left(\frac{n}{e}\right)^n = c$$

For $n!$ I thought of using Stirling's Approximation.

These both require the ability to solve $n \log_2 n = c$, with Stirling require a little more manipulation.

Questions

  1. As $n \log_2 n$ is not solvable using elementary functions (only Lambert W), what are some good ways to approximate the solution of $n \log_2 n = c$? Or how do we implement Lambert W? (One simple fixed-point sketch is given right after this list.)
  2. How do we solve $n! = c$, necessarily approximately as $n$ grows large? Is Stirling the right way to go, and if so, how do we solve $\sqrt{2\pi n} \left(\frac{n}{e}\right)^n = c$?
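
One simple approach that avoids Lambert W entirely (a sketch, not the only way): treat $n \log_2 n = c$ as a fixed-point problem and iterate $$ n_{k+1} = \frac{c}{\log_2 n_k}, \qquad n_0 = c, $$ which settles down to the answer within a handful of iterations for the magnitudes in this table. For $n! = c$, taking logarithms of Stirling's formula gives $\ln c \approx n \ln n - n + \tfrac{1}{2}\ln(2\pi n)$, which can be solved for $n$ with the same kind of iteration, or with the binary search already used in the code below.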

Here is some python code I put together to complete the table with my current output:

EDIT: Based on a couple of answers, I have used a binary search method (except for lg n). I have edited the code below to reflect this:

+---------+-------------+-------------+-------------+-------------+-------------+-------------+-------------+
| f(n)    |    1 sec    |    1 min    |    1 Hour   |    1 Day    |   1 Month   |    1 Year   |  1 Century  |
+---------+-------------+-------------+-------------+-------------+-------------+-------------+-------------+
| lg n    | 2^(1.0E+06) | 2^(6.0E+07) | 2^(3.6E+09) | 2^(8.6E+10) | 2^(2.6E+12) | 2^(3.2E+13) | 2^(3.2E+15) |
| sqrt(n) |   1.0E+12   |   3.6E+15   |   1.3E+19   |   7.5E+21   |   6.7E+24   |   9.9E+26   |   9.9E+30   |
| n       |   1.0E+06   |   6.0E+07   |   3.6E+09   |   8.6E+10   |   2.6E+12   |   3.2E+13   |   3.2E+15   |
| n log n |    62746    |   2.8E+06   |   1.3E+08   |   2.8E+09   |   7.2E+10   |   8.0E+11   |   6.9E+13   |
| n^2     |     1000    |     7745    |    60000    |    293938   |   1.6E+06   |   5.6E+06   |   5.6E+07   |
| n^3     |     100     |     391     |     1532    |     4420    |    13736    |    31593    |    146645   |
| 2^n     |      19     |      25     |      31     |      36     |      41     |      44     |      51     |
| n!      |      9      |      11     |      12     |      13     |      15     |      16     |      17     |
+---------+-------------+-------------+-------------+-------------+-------------+-------------+-------------+

Python code:

import math
import decimal
from prettytable import PrettyTable

def binary_search_guess(f, t, last=1000):
    for i in range(0, last):
        guess = pow(2,i)
        if f(guess) > t:
            return binary_search_function(f, pow(2,i-1), guess, t)

    return -1 

def binary_search_function(f, first, last, target):
    found = False

    while first<=last and not found:
        midpoint = (first + last)//2
        if f(midpoint) <= target and f(midpoint+1) > target:
            found = True
        else:
            if target < f(midpoint):
                last = midpoint-1
            else:
                first = midpoint+1
    best_guess = midpoint

    return best_guess

def int_or_sci(x):
    if x >= math.pow(10,6):
        x = '%.1E' % decimal.Decimal(x)
    else:
        x = int(x)

    return x

def input_size_calc():
    #Create Pretty Table Header
    tbl = PrettyTable(["f(n)", "1 sec", "1 min", "1 Hour", "1 Day", "1 Month", "1 Year", "1 Century"])
    tbl.align["f(n)"] = "l" # Left align city names
    tbl.padding_width = 1 # One space between column edges and contents (default)

    #Each Time Interval in Microseconds
    tsec = pow(10,6)
    tmin = 60 * tsec
    thour = 3600 * tsec
    tday = 86400 * tsec
    tmonth = 30 * tday
    tyear = 365 * tday
    tcentury = 100 * tyear

    tlist = [tsec,tmin,thour,tday,tmonth,tyear,tcentury]
    #print tlist

    #Add rows   
    #lg n
    f = lambda x : math.log(x,2)
    fn_list = []
    for t in tlist:
        #This would take too long for binary search method
        ans = int_or_sci(t)
        fn_list.append("2^(%s)" % ans)
    tbl.add_row(["lg n",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    #sqrt(n)
    f = lambda x : math.pow(x,1/2.0)
    fn_list = []
    for t in tlist:
        fn_list.append(int_or_sci(binary_search_guess(f, t)))
    tbl.add_row(["sqrt(n)",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    #n
    f = lambda x : x
    fn_list = []
    for t in tlist:
        fn_list.append(int_or_sci(binary_search_guess(f, t)))
    tbl.add_row(["n",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    #n log n
    f = lambda x : x * math.log(x,2)
    fn_list = []
    for t in tlist:
        fn_list.append(int_or_sci(binary_search_guess(f, t)))
    tbl.add_row(["n log n",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    #n^2
    f = lambda x : math.pow(x,2)
    fn_list = []
    for t in tlist:
        fn_list.append(int_or_sci(binary_search_guess(f, t)))
    tbl.add_row(["n^2",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    #n^3
    f = lambda x : math.pow(x,3)
    fn_list = []
    for t in tlist:
        fn_list.append(int_or_sci(binary_search_guess(f, t)))
    tbl.add_row(["n^3",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    #2^n
    f = lambda x : math.pow(2,x)
    fn_list = []
    for t in tlist:
        fn_list.append(int_or_sci(binary_search_guess(f, t)))
    tbl.add_row(["2^n",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    #n!
    f = lambda x : math.factorial(x)
    fn_list = []
    for t in tlist:
        fn_list.append(int_or_sci(binary_search_guess(f, t)))
    tbl.add_row(["n!",fn_list[0], fn_list[1], fn_list[2], fn_list[3], fn_list[4], fn_list[5], fn_list[6]])

    print tbl

#PROGRAM BEGIN
input_size_calc()

by stats_novice_123 at August 02, 2015 06:56 PM

Planet Clojure

Bro, Do You Even FizzBuzz?!?

"Do you bite your thumb at us, sir?"
-- Shakespeare, Romeo and Juliet
Act I, Scene I, Line 43

FizzBuzz as an interview kata has been getting a lot of bad press lately.  Part of the bad press has been around having to use modulus in the solution.  I am not sure how people have been explaining FizzBuzz in interviews, but you do not have to know anything about the mod operator in order to solve the problem.

Here is a solution in Clojure which does not use modulus.



This solution is using cycles and zip.  The odds are low of someone knowing how to use cycle and zip but not knowing modulus.  The point is that you do not need to know about modulus to do the kata.
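
The embedded gist does not seem to have survived here, so as a stand-in, here is a minimal sketch of the cycle/zip idea (in Scala rather than the author's original Clojure):

// Two cycled label streams, zipped against the numbers: no modulus anywhere.
val fizz = Iterator.continually(List("", "", "Fizz")).flatMap(identity)
val buzz = Iterator.continually(List("", "", "", "", "Buzz")).flatMap(identity)

val fizzbuzz = Iterator.from(1).zip(fizz).zip(buzz).map {
  case ((n, f), b) => if (f.isEmpty && b.isEmpty) n.toString else f + b
}

fizzbuzz.take(15).foreach(println)   // 1, 2, Fizz, 4, Buzz, ..., FizzBuzz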

In interviews I do, I'll tell the person about modulus if they get stuck.  The point of using a kata in an interview is to make sure that person who claims to be a programmer can actually program and that you can work with the person.

Still if the whole modulus thing has you worried, try the Roman Numeral kata.

Here is a solution in C# I did a few minutes before an interview on Thursday (pro-tip: always make sure you can do the kata in a short amount of time before you ask someone else to do it in an interview).



Again, the goal of the interview kata should be to see if the person can actually program and to see what they are like to work with.

"No, sir, I do not bite my thumb at you, sir. But
I bite my thumb, sir.
"
-- Shakespeare, Romeo and JulietAct I, Scene I, Lines 49-50


by Mike Harris (noreply@blogger.com) at August 02, 2015 06:53 PM

/r/clojure

[CLJS] Flag to Tell Closure to Eliminate Dev Code?

The situation:

I would like to have a command that outputs to console for dev reasons, but isn't exposed in a production environment. Is there a way to construct a function that is automatically removed by the Closure compiler when optimizations is set to :advanced only?

Thanks!

submitted by Schtauffen
[link] [1 comment]

August 02, 2015 06:53 PM

Lobsters

StackOverflow

removing duplicates from list without storing list in memory

I am trying to find an efficient way to remove duplicate lines from a file without reading the entire contents of the file into memory. The file is randomly ordered. I am trying not to read it into memory because the file is too big (20GB+). Can anyone suggest a way to fix my code so it doesn't read the entire file into memory?

val oldFile="steam_out_scala.txt"
val noDupFile="nodup_steam_out.txt"

import scala.io.Source
import java.io.{FileReader, FileNotFoundException, IOException}
import java.io.FileWriter;
import scala.collection.mutable.ListBuffer

var numbers = new ListBuffer[String]()
val fw = new FileWriter(noDupFile, true) 

for (line <- Source.fromFile(oldFile).getLines()) {
    numbers+=line

}

numbers.distinct.foreach((x)=>{
    //println(x)
    fw.write(x)
})
fw.close()    

What I know about the data:

  • each line is a Long ex: 76561193756669631
  • it is not ordered, and the final result does not need to be ordered in any way
  • the list was generated using another program; a number could be repeated any number of times in the range (0, 4 million]
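
Given those properties, one memory-bounded direction (a sketch, assuming temporary files on disk are acceptable; the file names and bucket count are illustrative) is to hash-partition the lines into smaller files and de-duplicate each bucket separately:

import java.io.{File, PrintWriter}
import scala.io.Source

val numBuckets = 256
val buckets = (0 until numBuckets).map(i => new PrintWriter(new File(s"bucket_$i.txt")))

// Pass 1: every duplicate of a line lands in the same bucket, so the buckets can be
// de-duplicated independently of one another.
for (line <- Source.fromFile("steam_out_scala.txt").getLines())
  buckets((line.hashCode & 0x7fffffff) % numBuckets).println(line)
buckets.foreach(_.close())

// Pass 2: each bucket is roughly 1/256 of the input, small enough to fit in a Set.
val out = new PrintWriter(new File("nodup_steam_out.txt"))
for (i <- 0 until numBuckets)
  Source.fromFile(s"bucket_$i.txt").getLines().toSet.foreach(line => out.println(line))
out.close()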

by Rilcon42 at August 02, 2015 06:17 PM

    CompsciOverflow

    Fastest nth root algorithm to a lot of digits?

    What is the fastest algorithm that can calculate a lot of digits of a decimal root? For example: 10,000 digits of the 3.56th root of 60.1?

    by 4everPixelated at August 02, 2015 06:01 PM

    Fefe

    A few days ago there was a stabbing attack in Israel. ...

    A few days ago there was a stabbing attack in Israel. At a Gay Pride parade in Jerusalem, an ultra-Orthodox Jew drew a knife and stabbed wildly at participants. He hit six people; one of the victims died of the wounds.

    Now there has been a demonstration with thousands of participants in Jerusalem against "incitement and violence". The attacker, by the way, was a repeat offender and had already served 10 years for something like this. Then he was released three months ago and immediately committed the next act of violence. A hard blow to the idea of rehabilitating offenders.

    The march was not directed only against this attacker, but also against another incident from a few days ago, when some Jewish settlers set fire to the house of a Palestinian family in the West Bank. An 18-month-old toddler died in the flames.

    August 02, 2015 06:01 PM

    Lobsters

    /r/compsci

    Is a pair of include regex and exclude regex turing complete by emulating a tag system?

    A single regex is not Turing complete, but can some combination of 2 regexes (one that the input must match, and one that the input must not match) match exactly the state changes of a cyclic tag system, which is Turing complete?

    Rule 110 is a 1D cellular automaton that's Turing complete, as proven by its ability to compute a cyclic tag system.

    I'm not sure what kinds of complexity classes are involved in regexes, since there are various kinds, some of which allow referring to complex structures matched earlier recursively. What's the simplest of these for which two of them (an include and an exclude) are Turing complete?

    submitted by BenRayfield
    [link] [7 comments]

    August 02, 2015 05:12 PM

    Lobsters

    QuantOverflow

    compute technical indicators from candle data

    I have a rookie question but can't find the answer anywhere, so: what is the right way to compute a simple moving average when you have an array of (open, close, low, high) tuples? From what I have seen so far it is the closing price that should be used, but I'm not sure; I guess it could just as well be the open, or (open + close)/2, or anything else. Also, can this be generalized to other indicators?
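
    Not authoritative, but the common convention is indeed the close; here is a tiny sketch of what that looks like (the Candle case class and its field names are purely illustrative):

    case class Candle(open: Double, high: Double, low: Double, close: Double)

    // Simple moving average of the closing prices over a sliding window of `period` candles.
    def sma(candles: Seq[Candle], period: Int): Seq[Double] =
      candles.map(_.close).sliding(period).map(w => w.sum / period).toSeq

    Most other indicators are likewise defined on a chosen price series (usually the close, sometimes a typical price such as (high + low + close)/3), so the same pattern applies.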

    thank you very much.

    by Traddy at August 02, 2015 04:56 PM

    CompsciOverflow

    Are Approximation Algorithms Analyzed in the Worst Case?

    From wikipedia:

    For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a $ρ$-approximation algorithm $A$ is defined to be an algorithm for which it has been proven that the value/cost, $f(x)$, of the approximate solution $A(x)$ to an instance $x$ will not be more (or less, depending on the situation) than a factor $ρ$ times the value, OPT, of an optimum solution.

    $$\begin{cases}\mathrm{OPT} \leq f(x) \leq \rho \mathrm{OPT},\qquad\mbox{if } \rho > 1; \\ \rho \mathrm{OPT} \leq f(x) \leq \mathrm{OPT},\qquad\mbox{if } \rho < 1.\end{cases}$$

    For a maximization problem, a $\rho$-approximation algorithm means that, for an instance $x$, $\rho \mathrm{OPT} \leq f(x) \leq \mathrm{OPT}$, where $\rho<1$. Does this mean that in the worst case the algorithm produces a solution that is at least $\rho \mathrm{OPT}$? Or is this in the average case? Because there are some instances where the algorithm finds the optimal solution.

    I mean, if the algorithm is executed, say $10000$ times (with randomly generated instances) and produces $\mathrm{OPT}$, say $9995$ times and $5$ times it produces a value that is greater than $\rho\mathrm{OPT}$. What can we say about the algorithm then?

    by Jika at August 02, 2015 04:40 PM

    StackOverflow

    extremely confused about how this "oop under-the-hood" example of a counter works

    Here's the make-counter procedure and calls to it:

    (define make-counter
        (let ((glob 0))
            (lambda ()
                (let ((loc 0))
                    (lambda ()
                        (set! loc (+ loc 1))
                        (set! glob (+ glob 1))
                        (list loc glob))))))
    
    > (define counter1 (make-counter))
     counter1
    
    > (define counter2 (make-counter))
     counter2
    
    > (counter1)
    (1 1)
    
    > (counter1)
    (2 2)
    
    > (counter2)
    (1 3)
    
    > (counter1)
    (3 4)
    

    I can't understand why glob behaves as a class variable, while loc behaves as an instance variable.

    by qaz at August 02, 2015 04:32 PM

    Dave Winer

    They forgot the readers

    Peter Baker, a reporter at the NYT posted a dismissive tweet about the mess with the Hillary Clinton emails. Technology companies do this too. It's easier to narrow your world to "players" and forget who's paying your salary. And that you're talking over their heads, and missing that the real damage isn't with the Clinton organization, but with your readers.

    Let's review where we're at...

    1. Their source was anonymous.

    2. Therefore when we read the article we have no way of judging the trustworthiness of the source; we can only depend on the trust we have in the person who chose the source, the reporter.

    3. We only trust the reporter because the New York Times chose them to be the reporter of an important story.

    4. We know the Times has in the past chosen reporters who made up their own anonymous quotes, or used anonymous quotes from the people they were covering. Each of those times the apology was insufficient to erase the damage done to our trust of them.

    5. You can blame Hillary Clinton all you want, that just makes you look like you're covering something up. We have to suspect you as much as you're supposed to question the integrity of your anonymous sources, assuming the source actually exists.

    6. Every time the Times passes the buck when they are used by an anonymous source, saying they're only as good as their sources, in the future we have to assume every anonymous source is either Dick Cheney or a figment of the reporter's imagination.

    7. So it appears that the Times is bullshit top to bottom, and when pushed on that question, they don't deny it. They say everything but Mea culpa.

    8. Too bad, they used to have some self-respect, or so it seemed.

    PS: Thanks to Jay Rosen for the pointer.

    PPS: The Times really needs an editor to rep the interests of readers. They don't have one. Margaret Sullivan is repping the NYT internal line. Misses the point. It's not the mistake that the Times made; they will make mistakes. It's the weasel-like way they dealt with it. The Times readers are uniquely intelligent people, and aren't so easily fooled or dismissed. The issue is with readers, not the Clintons.

    August 02, 2015 04:28 PM

    /r/emacs

    What gets put into a Cask file by `cask init`?

    When beginning to use Cask, the first thing to do after installing it is cask init. This creates a Cask file in your .emacs.d folder, with some dependencies.

    But how does it choose the specific dependencies it puts in this Cask file? Can I safely remove them?

    submitted by zck
    [link] [5 comments]

    August 02, 2015 04:27 PM

    Lobsters

    StackOverflow

    Play JSON: reading optional nested properties

    I have the following case classes and JSON combinators:

    case class Commit(
        sha: String,
        username: String,
        message: String
    )
    
    object Commit {
        implicit val format = Json.format[Commit]
    }
    
    case class Build(
        projectName: String,
        parentNumber: String,
        commits: List[Commit]
    )
    
    val buildReads: Reads[Build] =
        for {
            projectName <- (__ \ "buildType" \ "projectName").read[String]
            name <- (__ \ "buildType" \ "name").read[String]
            parentNumber <- ((__ \ "artifact-dependencies" \ "build")(0) \ "number").read[String]
            changes <- (__ \ "changes" \ "change").read[List[Map[String, String]]]
        } yield {
            val commits = for {
                change <- changes
                sha <- change.get("version")
                username <- change.get("username")
                comment <- change.get("comment")
            } yield Commit(sha, username, comment)
            Build(s"$projectName::$name", parentNumber, commits)
        }
    

    My JSON reads combinator for Build will handle incoming JSON such as:

    {
        "buildType": {
            "projectName": "foo",
            "name": "bar"
        },
        "artifact-dependencies": {
            "build": [{
                "number": "1"
            }]
        },
        "changes": {
            "change": [{
                "verison": "1",
                "username": "bob",
                "comment": "foo"
            }]
        }
    }
    

    However, if artifact-dependencies is missing, it will fall over. I would like this to be optional.

    Should I use readNullable? I have tried to do so, but this fails because it is a nested property.

    Does this look pragmatic, or am I abusing JSON combinators to parse my JSON into a case class?
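
    One possible direction, shown only for the problematic path (a sketch: whether readNullable copes with a missing intermediate node depends on the Play version, so this uses an explicit fallback instead, and the "none" default is purely illustrative):

    import play.api.libs.json._

    val parentNumberReads: Reads[String] =
      ((__ \ "artifact-dependencies" \ "build")(0) \ "number").read[String]
        .orElse(Reads.pure("none"))

    // Json.obj().validate(parentNumberReads) yields JsSuccess("none") instead of an error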

    by Oliver Joseph Ash at August 02, 2015 04:05 PM

    QuantOverflow

    Ledoit-Wolf portfolio strategy calculation

    I am trying to implement the Ledoit-Wolf portfolio strategy on a real-world stock dataset.

    library(quadprog)
    library(Rsolnp)
    
    #first I read in the data and the corresponding market rates:
    
    data<-read.table("https://dl.dropboxusercontent.com/u/22681355/data.txt")
    market.rate <-read.table("https://dl.dropboxusercontent.com/u/22681355/market.rate.txt")
    
    
    
    sample.data<-data[1:120, ]
    sample.market.rate<-market.rate[1:120]
    
    # I calculate the Ledoit-Wolf portfolio strategy:
    
    Te <- nrow(sample.data)
    s2<-var(sample.market.rate)
    estimates<-sapply(1:N, function(x) lm(sample.data[,x]~sample.market.rate ))
    slopes<-sapply(1:N,function(x) estimates[,x]$coefficients[2])
        residuals<-sapply(1:N, function(x) estimates[,x]$residuals)
    B<-as.vector(slopes)
    D<-diag(N)
    diag(D)<-diag(cov(residuals))
    
    Fe<- s2 * B %*% t(B) + D
    S.hat<-cov_shrink(Fe)
    cov.Rt<-S.hat
    inv.cov<-solve(cov.Rt)  
    one.vec<-rep(1,N)   
    weights<-as.vector(inv.cov%*%one.vec)/( t(one.vec) %*% inv.cov %*% one.vec)
    

    However, I get the following error:

    Error in solve.default(cov.Rt) : 
      system is computationally singular: reciprocal condition number = 2.08164e-18
    

    I must be doing something wrong because the result should not be computationally singular.

    Any ideas?

    by rty at August 02, 2015 03:37 PM

    CompsciOverflow

    How do binary trees use memory to store its data?

    So I know that arrays use a block of contiguous memory addresses to store data in memory, and lists (not linked lists) make use of static arrays; when data is appended to the list, if there is no room, a new, larger static array is created elsewhere in memory so more data can be stored.

    My question is, how do binary trees use memory to store data? Is each node a memory location which points to two other memory locations elsewhere...not necessarily contiguous locations? Or are they stored in contiguous blocks of memory too like a static array or dynamic list?
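
    For a concrete picture, here is a minimal sketch (in Scala; the names are illustrative): each node is an independently heap-allocated object holding references to its children, so nothing forces the nodes to sit next to each other in memory.

    // A node owns a value plus references (pointers) to at most two child nodes.
    case class Node(value: Int, left: Option[Node] = None, right: Option[Node] = None)

    // Four separately allocated nodes linked by references, not one contiguous block:
    val tree = Node(8, left = Some(Node(3)), right = Some(Node(10, right = Some(Node(14)))))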

    by sw123456 at August 02, 2015 03:26 PM

    Planet Clojure

    NI Open Data – Mining Prescription Data – #opendata #spark #clojure

    Moving On From The NI Assembly

    There was plenty of scope from the NI Assembly blog posts I did last time (you can read part 1 and part 2 for the background). While I received a lot of messages with “why don’t you do this” and “can you find xxxxxx subject out” it’s not something I wish to do. Kicking hornets nests isn’t really part of my job description.

    Saying that when there’s open data for the taking then it’s worth looking at. Recently the Detail Data project opened up a number of datasets to be used. Buried within is the prescriptions that GP’s or Nurse within the practice has prescribed.

    Acquiring the Data

    The details of the prescription data are here: http://data.nicva.org/dataset/prescribing-statistics (though the data would suggest it's not really statistics, just raw CSV data). The files are large, but nothing I'm worrying about in the "BIG DATA" scheme of things; this is small in relative terms. I've downloaded October 2014 to March 2015; that's a good six months worth of data.

    Creating a Small Sample Test File

    When developing these kinds of jobs, before jumping into any code it's worth having a look at the data itself. See how many lines of data there are; this time, as it's a CSV file, I know it'll be one record per line.

    Jason-Bells-MacBook-Pro:niprescriptions Jason$ wc -l 2014-10.csv 
     459188 2014-10.csv

    Just under half a million lines for one month; that's okay, but too much for testing. I want to knock it down to a handful of lines for testing. The UNIX head command will sort us out nicely.

    head -n20 2014-10.csv > sample.csv

    So for the time being I’ll be using my sample.csv file for development.

    Loading Data In To Spark

    The first thing I need to do is define the header row of the CSV as a set of map keys. When Spark loads the data in, I’ll use zipmap to pair the values with the keys for each row of the data.

    (def fields [:practice :year :month :vtm_nm :vmp_nm :amp_nm :presentation :strength :total-items :total-quantity :gross-cost :actual-cost :bnfcode :bnfchapter :bnfsection :bnfparagraph :bnfsub-paragraph :noname1 :noname2])

    You might have noticed the final two keys, noname1 and noname2. The reason for this is simple: there are trailing commas on the header row with no names, so I’ve given them names to keep the importing simple.

    PRACTICE,Year,Month,VTM_NM,VMP_NM,AMP_NM,Presentation,Strength,Total Items,Total Quantity,Gross Cost (£),Actual Cost (£),BNF Code,BNF Chapter,BNF Section,BNF Paragraph,BNF Sub-Paragraph,,
    1,2015,3,-,-,-,-,-,19,0,755.00,737.28,-,99,0,0,0,,

    With that I can now create the function that loads in the data.

    (defn load-prescription-data [sc filepath] 
     (->> (spark/text-file sc filepath)
          (spark/map #(->> (csv/read-csv %) first))
          (spark/filter #(not= (first %) "PRACTICE"))
          (spark/map #(zipmap fields %))
          (spark/map-to-pair (fn [rec]
             (let [practicekey (:practice rec)]
               (spark/tuple practicekey rec))))
          (spark/group-by-key)))

    Whereas the NI Assembly data was in JSON format, so the keys were already defined, this time I need to use the zipmap function to pair the values with the header keys. This gives us a handy map to reference instead of relying on the element position in the CSV line. As you can see, I’m grouping all the prescriptions by their GP key.

     

    Counting The Prescription Frequencies

    This function is very similar to the frequency function I used in the NI Assembly project, by mapping each prescription record and retaining the item prescribed I can then use the frequencies function to get counts for each distinct type.

    (defn def-practice-prescription-freq [prescriptiondata]
     (->> prescriptiondata
       (spark/map-to-pair (s-de/key-value-fn (fn [k v] 
         (let [freqmap (map (fn [rec] (:vmp_nm rec)) v)]
           (spark/tuple k (frequencies freqmap))))))))

    Getting The Top 10 Prescribed Items For Each GP

    Suppose I want to find out the top ten prescribed items for each GP location. Since the function I’ve got already produces the frequencies, a little tweaking returns what I need. First I use sort-by on the values, which sorts smallest to largest; the reverse function then flips it on its head and gives me largest to smallest. As I only want ten items, I then use the take function to return the first ten items in the sequence.

    (defn def-practice-prescription-freq [prescriptiondata]
     (->> prescriptiondata
        (spark/map-to-pair (s-de/key-value-fn (fn [k v] 
           (let [freqmap (map (fn [rec] (:vmp_nm rec)) v)]
               (spark/tuple k 
                            (take 10 (reverse (sort-by val (frequencies freqmap)))))))))))

    Creating The Output File Process

    So with two simple functions we have the workings of a complete Spark job. I’m going to create a function to do all the hard work for us and save us repeating lines in the REPL. This function will take in the Spark context, the file path of the raw data files (or file if I want) and an output directory path where the results will be written.

    (defn process-data [sc filepath outputpath] 
     (let [prescription-rdd (load-prescription-data sc filepath)]
           (->> prescription-rdd
                (def-practice-prescription-freq)
                (spark/coalesce 1)
                (spark/save-as-text-file outputpath))))

    What’s going on here then? First we load the raw data into a Spark pair RDD, then, using the thread-last macro, we calculate the item frequencies and collapse everything down to a single partition with the coalesce function. Finally we write the results to our output path. To start with I’ll test it from the REPL with the sample data I created earlier.

    nipresciptions.core> (process-data sc "/Users/Jason/work/data/niprescriptions/sample.csv" "/Users/Jason/Desktop/testoutput/")
    nil

    Looking at the file part-00000 in the output directory you can see the output.

    (1,(["Blood glucose biosensor testing strips" 4] ["Ostomy skin protectives" 4] ["Macrogol compound oral powder sachets NPF sugar free" 3] ["Clotrimazole 1% cream" 2] ["Generic Dermol 200 shower emollient" 2] ["Chlorhexidine gluconate 0.2% mouthwash" 2] ["Clarithromycin 500mg modified-release tablets" 2] ["Betamethasone valerate 0.1% cream" 2] ["Alendronic acid 70mg tablets" 2] ["Two piece ostomy systems" 2]))

    So we know it’s working okay…. now for the big test, let’s do it against all the data.

    Running Against All The Data

    First things first, don’t forget to remove sample.csv file if it’s in your data directory or it will get processed with the other raw files.

    $rm sample.csv

    Back to the REPL; this time my input path will be the data directory and not a single file, so all the files will be processed (Oct 14 -> Mar 15).

    nipresciptions.core> (process-data sc "/Users/Jason/work/data/niprescriptions/" "/Users/Jason/Desktop/output/")

    This will take a lot longer as there’s much more data to process. When it does finish have a look at the part-00000 file again.

    (610,(["Gluten free bread" 91] ["Blood glucose biosensor testing strips" 62] ["Isopropyl myristate 15% / Liquid paraffin 15% gel" 29] ["Lymphoedema garments" 27] ["Macrogol compound oral powder sachets NPF sugar free" 25] ["Ostomy skin protectives" 24] ["Gluten free mix" 21] ["Ethinylestradiol 30microgram / Levonorgestrel 150microgram tablets" 20] ["Gluten free pasta" 20] ["Carbomer '980' 0.2% eye drops" 19]))
    (625,(["Blood glucose biosensor testing strips" 62] ["Gluten free bread" 38] ["Gluten free pasta" 27] ["Ispaghula husk 3.5g effervescent granules sachets gluten free sugar free" 24] ["Macrogol compound oral powder sachets NPF sugar free" 20] ["Isopropyl myristate 15% / Liquid paraffin 15% gel" 20] ["Isosorbide mononitrate 25mg modified-release capsules" 18] ["Alginate raft-forming oral suspension sugar free" 18] ["Isosorbide mononitrate 50mg modified-release capsules" 18] ["Oxycodone 40mg modified-release tablets" 16]))
    (661,(["Blood glucose biosensor testing strips" 55] ["Gluten free bread" 55] ["Macrogol compound oral powder sachets NPF sugar free" 24] ["Salbutamol 100micrograms/dose inhaler CFC free" 20] ["Colecalciferol 400unit / Calcium carbonate 1.5g chewable tablets" 19] ["Venlafaxine 75mg modified-release capsules" 18] ["Isosorbide mononitrate 25mg modified-release capsules" 18] ["Isosorbide mononitrate 60mg modified-release tablets" 18] ["Alginate raft-forming oral suspension sugar free" 18] ["Venlafaxine 150mg modified-release capsules" 18]))
    (17,(["Blood glucose biosensor testing strips" 55] ["Gluten free bread" 35] ["Macrogol compound oral powder sachets NPF sugar free" 29] ["Colecalciferol 400unit / Calcium carbonate 1.5g chewable tablets" 24] ["Gluten free biscuits" 22] ["Ispaghula husk 3.5g effervescent granules sachets gluten free sugar free" 21] ["Diclofenac 1.16% gel" 19] ["Sterile leg bags" 19] ["Glyceryl trinitrate 400micrograms/dose pump sublingual spray" 19] ["Ostomy skin protectives" 18]))

    There we are: GP practice 610 prescribed gluten free bread 91 times over the six month period. The blood glucose testing strips are also high on the agenda, but that would come as no surprise to anyone who is diabetic.

    So Which GP’s Are Prescribing What?

    The first number in the raw data is the GP id. In the DetailData notes for the prescription data I read:

    “Practices are identified by a Practice Number. These can be cross-referenced with the GP Practices lists.”

    As with the NI Assembly data I can load in the GP listing data and join the two by their key. Sadly on this occasion though I can’t, the data just isn’t there on the page. I’m not sure if it’s broken or removed on purpose. Shame but I’m not going to create a scene.

    Resources

    DetailData Prescription Data – http://data.nicva.org/dataset/prescribing-statistics

    Github Repo for this project – https://github.com/jasebell/niprescriptiondata

    *** Note: I spelt “prescriptions” wrong in the Clojure project but as this is a throwaway kinda thing I won’t be altering it…. ***


    by Jason Bell at August 02, 2015 03:09 PM

    Dave Winer

    Google, open web are natural allies

    In 1998, when Google started, the blogging world was just getting started too, and the two fed on each other. We put knowledge on our blogs, they indexed them. So now we could find each other, and they had stuff worth finding.

    In the last corner-turn, as our ideas have moved into silos, Twitter and Facebook, it's harder to create and find value. On Twitter because it's hard to pack much value into a 140-char soundbite, and on Facebook, the focus has been more social than knowledge-building.

    But there are still unsolved problems in the open web, areas where Google has built new expertise, that has not yet been applied to the open web.

    For example...

    Categorization

    Google has technology that can tell you who is in a picture and what they're doing. They can organize the photos by all kinds of attributes, time, location, other meaning.

    We've been trying to add good categorization to blogs, over the years, and it's impossible to get most people to actually do it, even though we can make good, easy tools. I myself won't use them. I'm too scattered, my time is spread too thin to be able to do all the organizing that I feel I should.

    Sometimes I think that's all that lies between me and a good book. The kind of thing that a robot could do for me, I think. And Google I'm pretty sure has that software.

    Give me, a writer, tools to study my own writing, and to create new meta-documents that help others find the value. I think we're not very far from having some amazing tools here. There are more writing revolutions to come.

    A social graph for the open web

    Why should the only social graph belong to Facebook? Why aren't there dozens or hundreds of graphs on the open web, owned by no one? Facebook is just one possible graph. When we're done, there will be lots of graphs.

    1. I have a braintrust, the people I do development with. I'm always looking for people to add to the group, but I'm very selective about it. This will never be a large group.

    2. I have a list of favorite movies. If just ten of my friends had equivalent lists, I would be able to find new ideas for things to watch. Same with podcasts and TV series for binge-watching.

    3. I love to talk NBA. I have lots of friends who are into it, but no place to connect the dots.

    4. Cross-pollination. I often think it would be great to get all my friends in a room so I could introduce them to each other. A structure that just facilitated these intros would make a great UI for a new graph.

    These are the kinds of problems made for individual creative people, the kinds of people we enabled with blogging software. But because the leadership turned to silos, we never got to really explore them. This is why it would be a good idea for Google to realize that our interests are aligned, and that they could show some leadership. Would be easy for them.

    We have the tools, we know how to structure the data, none of it is far from being realized, we just need leadership. If Google would embrace OPML, for example, or a format like it (attributed hierarchies with inclusion), a lot of new stuff would happen very quickly. We already have lots of tools ready.

    August 02, 2015 03:04 PM

    StackOverflow

    Getting an error message while trying to build Play Scala project in Intellij Idea Ultimate version 14.0.4

    While creating a Scala Play project via IntelliJ IDEA Ultimate version 14.0.4 I'm facing the error below. I have tried all the alternatives I could find to solve this problem, like updating the plugin, building the project in the command prompt and importing it into IDEA, etc., but nothing helped. Some advice from Stack Overflow also did not help. Is there any other way that might solve this problem? I'd appreciate it.
    Error:Error while importing SBT project: ... at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42) at sbt.std.Transform$$anon$4.work(System.scala:64) at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237) at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237) at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18) at sbt.Execute.work(Execute.scala:244) at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237) at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237) at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160) at sbt.CompletionService$$anon$2.call(CompletionService.scala:30) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [error] (*:update) sbt.ResolveException: unresolved dependency: sbt-run-support-210#sbt-run-support-210_2.10;0.1-SNAPSHOT: not found Invalid response. Invalid response. Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=384M; support was removed in 8.0

    See complete log in C:\Users\admins.IntelliJIdea14\system\log\sbt.last.log

    by user1364513 at August 02, 2015 03:03 PM

    TheoryOverflow

    SVM - running time for detecting if data is linearly separable?

    If my understanding is correct, one way to check if a set of $m$ data points is linearly separable is to use support vector machines to find a maximum margin hyperplane for separating the data; the data is linearly separable if and only if such a hyperplane exists.

    If the data consists of $m$ pairs $(x_1, y_1), \ldots, (x_m,y_m)$, with $x_i \in \{0,1\}^n$ and $y_i \in \{0,1\}$, do we know what the worst-case running time is for finding whether or not a maximum margin hyperplane exists?

    I can't seem to find a straightforward answer to this. Section 1.4.2 of this says something about complexity and the number of support vectors, but the section also begins by mentioning a lower bound.

    Also, this seems to cite a running time of $O(\max(m, n) \times \min(m, n)^2)$, at least for the optimization part of the problem.

    These texts are not clear to me. Can someone break down the running time for this algorithm? Is the worst case running time known? Can SVMs detect if the $m$ points are linearly separable in polynomial time?

    p.s. I am aware that linear programming can solve this in polynomial time ($\text{poly}(mn)$ I think?), but I would like to know how support vector machines behave here.

    Edit: The Bottou, Lin chapter seems to come from this book.

    by Fequish at August 02, 2015 02:45 PM

    /r/emacs

    $EMACSPATH in Emacs 24.5?

    after upgrading, the EMACSPATH environment variable is not being set when I open Emacs (on OSX: from the dock, or from a terminal).

    Context: my setup is that I have multiple apps (work.app, notes.app, and more), which are all just Emacs.app copied and renamed. the initialization loads different settings depending on the EMACSPATH.

    e.g.

    (getenv "EMACSPATH") ;; "/Applications/Work.app/Contents/MacOS/bin" (string-match "Work\\.app" (getenv "EMACSPATH")) 

    my version is:

     emacs-version GNU Emacs 24.5.1 (x86_64-apple-darwin13.4.0, NS apple-appkit-1265.21) of 2015-04-10 on builder10-9.porkrind.org 

    (download from http://emacsformacosx.com/ if relevant)

    with export | grep EMACS, I don't see any other environment variables being set.

    to avoid the "XY problem": any way of finding out the executable path of the currently running Emacs from within Emacs would work.

    P.S. I checked the sidebar, but should I post this on stack exchange instead?

    submitted by sambocyn

    August 02, 2015 02:37 PM

    StackOverflow

    Ambiguous reference to overloaded function in Scala

    I have written a lambda code that tracks login to AWS console, and send notification to user email about it.

    The initial code I have written was in Java, and worked. After converting the code to Scala, I came to the following code:

    class SNSHandler {
    
      private val creds: AWSCredentials = new BasicAWSCredentials("xxx", "999/xyz12345")
      private val eventType: String = "ConsoleLogin"
      private val topicArn: String = "arn:aws:sns:us-east-1:1111111111:CTInterestingEvents"
    
      def processLoginRecord(loginRecord: String, lambdaLogger: LambdaLogger) = {
        val userName = JsonPath.read(loginRecord.asInstanceOf[Object], "$.userIdentity.type").asInstanceOf[String] match {
          case "Root" => "Root"
          case _ => JsonPath.read(loginRecord.asInstanceOf[Object], "$.userIdentity.userName")
        }
        val accountId = JsonPath.read(loginRecord.asInstanceOf[Object], "$.userIdentity.accountId")
    
        new AmazonSNSClient(creds).publish(topicArn, "This is an auto notification message.\nUser " + userName +
          " has logged in to AWS account id " + accountId + ".\n You are receiving this email because someone has subscribed your" +
          " email address to this event.")
      }
    
      def processCloudTrailBulk(event: String, logger: LambdaLogger) = {
        JsonPath.read(event.asInstanceOf[Object], "$.Records[?(@.eventName == '" + eventType + "' && @.responseElements.ConsoleLogin == 'Success')]").
          asInstanceOf[java.util.List[String]].asScala.map(loginRecord => processLoginRecord(loginRecord, logger))
      }
    
      def processS3File(bucketName: String, file: String, logger: LambdaLogger) = {
        Source.fromInputStream(new GZIPInputStream(new AmazonS3Client(creds).
          getObject(new GetObjectRequest(bucketName, file)).getObjectContent),"UTF-8").getLines().
          foreach(line => processCloudTrailBulk(line,logger))
      }
    
      def processSNSRecord(notification: SNSRecord, logger: LambdaLogger) =  {
        val bucketName: String = JsonPath.read(notification.getSNS.getMessage.asInstanceOf[Object], "$.s3Bucket")
        logger.log("Notifications arrived.\nBucket: " + bucketName)
        JsonPath.read(notification.getSNS.getMessage.asInstanceOf[Object], "$.s3ObjectKey[*]").asInstanceOf[java.util.List[String]].
          asScala.map(file => processS3File(bucketName,file,logger))
      }
    
      def handleCloudTrailIncoming(event: SNSEvent, context: Context) = {
        event.getRecords.asScala.map(record => processSNSRecord(record,context.getLogger))
      }
    }
    

    Now, the addition of .asInstanceOf[Object] to the first param of every 'read' call wasn't there initially. However, I had the famous compiler error of an ambiguous reference to an overloaded function, and after taking a look at ambiguous reference to overloaded definition, from a Java library, I added it, and indeed my code now compiles.

    The problem is however that in runtime, the read now fails to detect the fields, and I get the following error:

    Property ['s3Bucket'] not found in path $: com.jayway.jsonpath.PathNotFoundException com.jayway.jsonpath.PathNotFoundException: Property ['s3Bucket'] not found in path $ at com.jayway.jsonpath.internal.token.PropertyPathToken.evaluate(PropertyPathToken.java:41) ........
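
    One possible way to sidestep the overload ambiguity (a sketch, assuming a json-path version that provides JsonPath.parse and DocumentContext) is to parse the string once and then read from the resulting context, so no cast of the input is needed:

    import com.jayway.jsonpath.JsonPath

    // Parse the JSON String once; subsequent reads go through the DocumentContext,
    // which has a single read(path) overload to resolve, so no asInstanceOf[Object].
    val ctx = JsonPath.parse(loginRecord)
    val accountId = ctx.read[String]("$.userIdentity.accountId")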
    

    by Yaniv Donenfeld at August 02, 2015 02:08 PM

    Play Framework - Get content from other site

    I am wondering whether it is possible to fetch content from other sites using Play Framework.

    For example, in PHP it is possible by using curl.

    For example, I have the route:

    GET /test   controller.Aplicatin.getContent
    

    and when I open localhost:9000/test in the browser, it should show the content from, for example, http://google.pl

    Is it also possible to send POST data to other sites and get the content back?

    For example, by visiting localhost:9000/test I want to send a POST with username and password to Gmail and return the content of its login page.
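
    This is possible with Play's WS client. Below is a sketch only, assuming Play 2.3/2.4 with the ws dependency enabled in build.sbt; the controller name and URLs are illustrative, and a real Gmail login would additionally need cookies, redirects and CSRF handling.

    import play.api.Play.current
    import play.api.libs.ws.WS
    import play.api.mvc._
    import scala.concurrent.ExecutionContext.Implicits.global

    object Application extends Controller {

      // GET /test -> fetch another site's HTML and return it as the response body
      def getContent = Action.async {
        WS.url("http://google.pl").get().map(response => Ok(response.body))
      }

      // POST form data to another site and return whatever comes back
      def login = Action.async {
        WS.url("http://example.com/login")
          .post(Map("username" -> Seq("user"), "password" -> Seq("pass")))
          .map(response => Ok(response.body))
      }
    }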

    by user3123906 at August 02, 2015 01:52 PM

    Fred Wilson

    Family Time

    We’ve had my entire family at our beach house this weekend. That includes my 87 year old father, my 19 month old nephew, and a dozen other Wilsons in between.

    So I’ve got a house full of folks and not a lot of time to post this morning. So if there’s any action here today, it will be in the comments.

    I hope you are enjoying your summer weekend. I am.

    by Fred Wilson at August 02, 2015 01:50 PM

    CompsciOverflow

    How to find a symmetric predecessor / successor

    Let's say we have a binary tree $T$ and we want to insert key $k$. Now, assuming $k\not\in T$, how do we find the symmetric predecessor/successor of $k$?

    Does this relate to (pre/in/post)-order traversals in some way?
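
    For reference, "symmetric order" is another name for in-order. One standard way (a sketch of my own, assuming a binary search tree and $k\not\in T$) is to walk from the root toward where $k$ would be inserted, remembering the last node where we turned right (the predecessor candidate) and the last node where we turned left (the successor candidate):

    case class Node(key: Int, left: Option[Node] = None, right: Option[Node] = None)

    // Returns (in-order predecessor, in-order successor) of a key k not present in the tree.
    def neighbours(root: Option[Node], k: Int): (Option[Int], Option[Int]) = {
      var pred: Option[Int] = None
      var succ: Option[Int] = None
      var cur = root
      while (cur.isDefined) {
        val n = cur.get
        if (k < n.key) { succ = Some(n.key); cur = n.left }   // turned left: successor candidate
        else           { pred = Some(n.key); cur = n.right }  // turned right: predecessor candidate
      }
      (pred, succ)
    }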

    by ndrizza at August 02, 2015 01:49 PM

    QuantOverflow

    Derivation of Stochastic Vol PDE

    A couple of questions regarding the stochastic vol PDE derivation. Following Gatheral, a general stochastic vol model is given by \begin{align*} dS(t) & = \mu(t) S(t) dt + \sqrt{v(t)}S(t) dZ_1, \\ dv(t) & = \alpha(S,v,t) dt + \eta \beta(S,v,t)\sqrt{v(t)} dZ_2, \\ dZ_1 dZ_2 & = \rho dt \end{align*} To price an option on a stock whose price process follows $S$, we construct a portfolio consisting of the option whose price is $V(S,v,t)$, short $\Delta$ shares of the stock and short $\Delta_1$ units of some other asset whose value $V_1$ depends on volatility.

    First Question: Is this "other asset" absolutely anything such that $V_1 = V_1(v)$? E.g., another option on $S$, or some other option on another stock, or another stock, or...?

    The value $\Pi$ of this portfolio is $$ \Pi = V - \Delta S - \Delta_1 V_1. $$

    We then derive the SDE satisfied by $\Pi$, select $\Delta$ and $\Delta_1$ to make the portfolio riskless, argue that $d\Pi = r\Pi dt$ else arbitrage, and finally get $$ \frac{\frac{\partial V}{\partial t} + \frac{1}{2}vS^2\frac{\partial^2 V}{\partial S^2} + \rho \eta v \beta S \frac{\partial^2 V}{\partial v \partial S} + \frac{1}{2}\eta^2v\beta^2\frac{\partial^2 V}{\partial v^2} + rS\frac{\partial V}{\partial S} - rV}{\frac{\partial V}{\partial v}} \\ = \frac{\frac{\partial V_1}{\partial t} + \frac{1}{2}vS^2\frac{\partial^2 V_1}{\partial S^2} + \rho \eta v \beta S \frac{\partial^2 V_1}{\partial v \partial S} + \frac{1}{2}\eta^2v\beta^2\frac{\partial^2 V_1}{\partial v^2} + rS\frac{\partial V_1}{\partial S} - rV_1}{\frac{\partial V_1}{\partial v}} $$ Since the LHS only depends explicitly on $t,v,S,V$ and the RHS only on $t,v,S,V_1$, they must each be a function of only $t,v,S$, say $f(t,v,S)$. In particular, the price $V$ of the option must satisfy $$ \frac{\partial V}{\partial t} + \frac{1}{2}vS^2\frac{\partial^2 V}{\partial S^2} + \rho \eta v \beta S \frac{\partial^2 V}{\partial v \partial S} + \frac{1}{2}\eta^2v\beta^2\frac{\partial^2 V}{\partial v^2} + rS\frac{\partial V}{\partial S} - rV = \frac{\partial V}{\partial v}f(t,v,S). $$

    Then, for whatever reason, we choose $f = -(\alpha - \varphi \beta)$, and as Gatheral states following eqn (3), "...$\varphi(S,v,t)$ is called the market price of volatility risk because it tells us how much of the expected return of $V$ is explained by the risk (i.e. standard deviation) of $b$ in the CAPM framework."

    Second question: How does this market price of risk ($\varphi$) relate to the market price of risk I'm familiar with in the Black-Scholes model, $\frac{\mu - r}{\sigma}$? More importantly, how did they (Heston?) settle on $f = -(\alpha - \varphi \beta)$?

    by bcf at August 02, 2015 01:45 PM

    StackOverflow

    Program with many functions?

    I have a 700-line program in Python that has 9 functions and more may be added. Is this enough to make a new file with those functions and just import them in the main body where the functions are called?

    Or should I just make two separate classes, one with all the functions and a separate main class?

    Basically, at what point should you make a module or split things into different classes? I am quite new to writing lengthy and complex programs, and right now my program is just a bunch of functions, calculations and executions.

    As an aside, this program will be used for scientific data analysis.

    by Syd at August 02, 2015 01:40 PM

    TheoryOverflow

    Integer multiplication when one integer is fixed

    Let $A$ be a fixed positive integer of size $n$ bits. One is allowed to pre-process this integer as appropriate. Given another positive integer $B$ of size $n$ bits, what is the complexity of multiplication $AB$?

    (Note: There are results where matrix-vector multiplication can be reduced from $O(n^{2})$ to $O(\frac{n^{2}}{\log n})$ if the matrix is fixed. Are there any results analogous to this for the case of integer multiplication?)

    Update

    $\mathsf{FFT}$ techniques already provide an $O(n^{1+\epsilon})$ solution to this problem. None of the proposed solutions below in the answer section consider this; they provide only sub-quadratic solutions that are inferior to what is known to be just above linear.

    $\mathsf{Conjecture}$:

    Given fixed $c\in\Bbb N$ and fixed positive $A\in\Bbb Z$ of $2cn$ bits, if $B$ is the input, then the following four operations can be done in $\Theta(cn)$ complexity (fully linear in both space and time):

    $1.$ Computing $AB$ where $B$ of $2cn$ bits.

    $2.$ Computing $\frac{A}B$ where $B$ of $cn$ bits, with promise $B|A$.

    $3.$ Computing ${A}\bmod B$ where $B$ of $cn$ bits.

    $4.$ Computing $\mathsf{GCD}({A},B)$ where $B$ of $2cn$ bits.

    by Turbo at August 02, 2015 01:37 PM

    CompsciOverflow

    0-1 (zero-one) linear programming solver (open source) [on hold]

    Is there any zero-one linear programming solver available as open source software? I know of open source integer linear programming projects (e.g. lpsolve, glpk), but it seems to me that they are not targeted at the zero-one case. Of course, zero-one programs are a special case of general integer linear programs, but one could expect optimizations for the zero-one case. Thanks in advance!

    by TomR at August 02, 2015 01:29 PM

    Turing machine that accepts when a string of x's is followed by the same number of y's

    I need to draw the states of a machine that accepts (writes a 1) when it reads a string of x's that is followed by the same number of y's, and rejects (writes a 0) for anything else. It has to work for all possible input tapes. For example, it would accept xxxyyy but would reject xxyyy and xx. I cannot figure out how to keep track of the number of x's that are read in, and then count the y's that follow, without creating an infinite chain. This is what I have come up with for the Turing Machine:

    start: State 0, read: x, next: State 1, write: 0, move: right;
    start: State 1, read: y, next: halt, write: 1, move: neutral

    by CS student at August 02, 2015 01:25 PM

    Planet Theory

    Four Weddings And A Puzzle


    An unusual voting problem?

    “Four Weddings” is a reality-based TV show that appears in America on the cable channel TLC. Yes, a TV show: not a researcher, not someone who has recently solved a long-standing open problem. Just a TV show.

    Today I want to discuss a curious math puzzle that underlies this show.

    The show raises an interesting puzzle about voting schemes:

    How can we have a fair mechanism when all the voters have a direct stake in the outcome?

    So let’s take a look at the show, since I assume not all of you are familiar with it. I do admit to watching it regularly—it’s fun. Besides the American version there are many others including a Finnish version known as “Neljät Häät” (Four Weddings), a German version called “4 Hochzeiten und eine Traumreise” (4 Weddings and one dream journey), and a French version called “4 Mariages pour 1 Lune de Miel” (4 Weddings for 1 Honeymoon). The last two remind me most of the 1994 British movie “Four Weddings and a Funeral” but there is no real connection.

    There is keen interest worldwide, it seems, in weddings as they are a major life event. And of course, they are filled with lots of beautifully dressed people, lots of great displays of food and music, and lots of fun.

    The Show

    Like many shows, “Four Weddings” is based on a British show—do all good shows originate in the UK? Four brides, initially strangers, meet and then attend each other’s weddings. Each then scores the others’ weddings on various aspects: bridal gown, venue, food, and so on. Then the bride with the highest score wins a dream honeymoon. Of course there is the small unstated issue that the honeymoon, no matter how exotic, happens well after the actual wedding. Oh well.

    The scoring method varies from season to season and also from country to country. But higher scores are better, and the brides get a chance on camera to explain why they scored how they did. A typical comment might be: I loved the venue, the food, but the music was terrible.

    You get to see four different weddings, which is the main attraction in watching the show. Usually each wedding is a bit out there: you see weddings with unusual themes, with unusual venues, and other unusual features. If you are not ready to have an interesting wedding, to spend some extra time in making it special, then you have little chance of winning.

    The Puzzle

    The puzzle to me is really simple: why would the brides rate each other fairly? They all want to win the honeymoon, the prize, so why ever give high ratings? Indeed.

    There have been some discussions on the web on what makes the scoring work. Some have noticed that the most expensive weddings usually win.

    Clearly the game-theoretic optimal move seems to be to give all the other brides low scores and hope the others act fairly. The trouble with this method is that you look bad—and who wants to look bad on a TV show that millions might see? Can we make a model that accounts for this? It does not have to embrace possible psychological factors at all—it just has to do well at predicting the observed ratings on the show.

    A Solution Idea

    I have thought about this somewhat and have a small idea. Perhaps some of you who are better at mechanism design could work out a scoring method that actually works. My idea is to penalize a score that is much lower than the others. A simple version could be something like this: Suppose the brides are Alice, Betty, Carol, and Dawn. If Alice’s wedding gets scores like this:

    Betty: 7
    Carol: 6
    Dawn: 3,

    then perhaps we deduct a point from Dawn’s total. She clearly is too low on Alice. Can we make some system like this really work?

    A Related Game?

    I have been discussing with Ken and his student Tamal Biswas some further applications of their work on chess to decision making. Their latest paper opens with a discussion of “level-{k} thinking” and the “11-20 money game” introduced in a recent paper by Ayala Arad and Ariel Rubinstein.

    In the game each player independently selects a number {n} between 11 and 20 and instantly receives {\$n}. In addition, if one player chose a number exactly $1 below the other’s number, that person receives a bonus of $20 more. Thus if one player chooses the naively maximizing value $20, the other can profit by choosing $19 instead. The first player however could sniff out that strategy by secretly choosing $18 instead of $20. If the second player thinks and suspects that, the $19 can be revised down to $17. And so on in what sometimes becomes a race for the bottom, although the Nash equilibrium assigns non-zero probability only to the values $15 through $20.
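
    As a concrete restatement, here is a minimal sketch (my own) of the two-player payoff rule just described:

    // Each player picks n in 11..20 and receives $n; picking exactly 1 below
    // the opponent's number earns the $20 bonus on top.
    def payoff(mine: Int, theirs: Int): Int = {
      require(mine >= 11 && mine <= 20 && theirs >= 11 && theirs <= 20)
      val bonus = if (mine == theirs - 1) 20 else 0
      mine + bonus
    }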

    In this game the level {k} of thinking what the opponent might do is simply represented by the value {k = 20 - n}. There is now a rich literature of studies of how real human players deviate from the Nash equilibrium, though they come closer to it under conditions of severe time pressure. The connection sought by Ken and Tamal relates {k} to search depth in chess—that is, to how many moves a player looks ahead.

    Ken does not know whether anyone has intensively treated the extension to {m > 2} players. The following seems to be the most relevant way to define this:

    1. If the lowest player is $1 below the second-lowest, then the lowest player gets the $20 bonus, else nobody does.
    2. If the lowest is lower by 2 or more then that person gets nothing—not even the original {\$n}.

    It would be interesting to study this with {m=4} and compare the results to the observed behavior in the show “Four Weddings.” Could something like this be going on? Or are the brides simply being true to their own standards and gushing with admiration where merited? It could be interesting either way, whether they match or deviate from the projections of a simple game-theoretic model with scoring like this.

    Open Problems

    What is the right scoring method here? Is it possible to find one?


    by rjlipton at August 02, 2015 01:19 PM

    StackOverflow

    Combine two key-value collections with Spark efficiently

    I have the following key-value pairs lists (like a hashmap, but not exactly inside the spark context):

    val m1 = sc.parallelize(List(1 -> "a", 2 -> "b", 3 -> "c", 4 -> "d"))
    val m2 = sc.parallelize(List(1 -> "A", 2 -> "B", 3 -> "C", 5 -> "E"))
    

    I want to get something like this and do it efficiently in parallel (I don't even know if it's possible):

    List(1 -> (Some("a"), Some("A")), 2 -> (Some("b"), Some("B")), 3 -> (Some("c"), Some("C")), 4 -> (Some("d"), None), 5 -> (None, Some("E")))
    

    Or at least

    List(1 -> ("a","A"), 2 -> ("b","B"), 3 -> ("c","C"))
    

    How can I achieve this? As I understand it, I don't have an efficient way to get values from the "maps" by key - these are not really hashmaps.
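
    One possible approach (a sketch, assuming Spark 1.2 or later): pair RDDs already provide join variants that produce exactly these shapes.

    // Full outer join keeps keys from either side, wrapping values in Option:
    val full = m1.fullOuterJoin(m2)
    // RDD[(Int, (Option[String], Option[String]))]
    // e.g. (4, (Some("d"), None)), (5, (None, Some("E")))

    // Inner join keeps only keys present in both, matching the second desired output:
    val both = m1.join(m2)
    // RDD[(Int, (String, String))], e.g. (1, ("a", "A")), (2, ("b", "B")), (3, ("c", "C"))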

    by pkozlov at August 02, 2015 01:08 PM

    DragonFly BSD Digest

    Lazy Reading for 2015/08/02

    Be ready for the latent craziness in some of the links for this Lazy Reading episode.

    Your off-topic movie link of the week: The Fabulous World of Jules Verne.  (via an internet cult.)  Originally titled Invention For Destruction and released by a Czech director, then subtitled to English.  Looks like a strange mix of steampunk content and Monty Python-style animation.  That may seem only mildly interesting until you notice it was filmed in 1958.

    by Justin Sherrill at August 02, 2015 01:00 PM

    QuantOverflow

    Regression model when samples are small and not correlated

    I received this question during an onsite interview for a quant job and I'm still scratching my head on how to solve this problem. Any help would be appreciated.


    Mr Quant thinks that there is a linear relationship between past and future intraday returns, so he would like to test this idea. For convenience, he decided to parameterize returns in his data set using a regular time grid $(d,t)$ where $d=0, \ldots, D-1$ labels the date and $t=0, \ldots, T-1$ the intraday time period. For example, if we split the day into 10-minute intervals then $T = 1440 / 10$. His model written on this time grid has the following form:

    $y_{d,t} = \beta_t \, x_{d,t} + \epsilon_{d,t}$

    where $y_{d,t}$ is a return over the time interval $(t,t+1)$ and $x_{d,t}$ is a return over the previous time interval, $(t–1,t)$ at a given day $d$. In other words, he thinks that previous 10-minute return predicts future 10-minute return, but the coefficient between them might change intraday.

    Of course, to fit $\beta_t$ he can use $T$ ordinary least squares regressions, one for each “$t$”, but:

    (a) his data set is fairly small $D$=300, $T$=100;

    (b) he thinks that signal is very small, at best it has correlation with the target of 5%.

    He hopes that some machine learning method that can combine regressions from nearby intraday times can help.

    How would you solve this problem? Data provided is an $x$ matrix of predictors of size $300\times100$ and a $y$ matrix of targets of size $300\times100$.

    by cogolesgas at August 02, 2015 12:50 PM

    infra-talk

    Time for Tea – 10 Ways to Test a Kettle

    As a true Englishman, tea is of vital importance to my day, and having the right tools to make it is a serious undertaking. With one kettle having gone to the Great Kitchen in the Sky (it refused to reach boiling point, and one cannot have lukewarm tea) it was time for a new one.

    Whilst looking for a new one I realized it would make a good question for a tester: How would they test a kettle?

    Here are some ideas:

    1. Water Quality: How does the kettle cope with different water quality? Is it being used in a suburban environment with treated water, or will it be used in places that might be using well water with no water softener?
    2. Noise: Does it make a noise when boiling, and if so, is it too loud, too irritating? Or maybe noise is a good thing to alert the user that it’s time to get up from their desk and make the tea.
    3. Looks: Maybe the kettle is hardly ever used, so how it looks in the kitchen is more important than how it works. Maybe you’re a realtor doing staging for open houses and only want something that looks great.
    4. Temperature: The water should be boiling to ensure maximum tea flavor. Do you assume that just because the kettle switches off that this means the water is at 100C?
    5. Safety: If there is no water in the kettle, does it still heat up?
    6. Performance: How quickly does it boil?
    7. Cost: How much are you allowed to spend on the kettle?
    8. Load: Is it going to be used in an office where it could be in constant use from 9-5, or will it be at home where its main use will be 7am on weekdays and 9am at the weekend?
    9. Maintenance: Can the filter be easily removed for cleaning?
    10. Usability

    Looking at everyday items and wondering how they are tested can give you ideas about designing and testing your product.

    The post Time for Tea – 10 Ways to Test a Kettle appeared first on Atomic Spin.

    by Phil Kirkham at August 02, 2015 12:00 PM

    StackOverflow

    Functional programming in javascript - add(a)(b)(c)

    I'm trying to wrap my head around functional programming in JS.

    I understand add(3)(5) would be:

    function add(x) {
        return function(y) {
            return x + y;
        };
    }
    

    How would I change this function so add(3)(5)(7)(8) returns 23 or add(1)(2)(3) returns 6?

    by wwwuser at August 02, 2015 11:40 AM

    StackOverflow

    scala override java class method

    I have a java class with following method:

    class JavaContextObject {
    ...
    public String[] getContext(int index, String[] tokens, String[] preds, Object[] additionalContext)
    ...
    
    }
    

    and I want to override this method in my Scala class. I have tried the following signatures:

    override def getContext(index: Int, tokens: Array[String], preds: Array[String], additionalContext: Array[AnyRef]): Array[String] = ???
    

    or

    override def getContext(index: Int, tokens: Array[String], preds: Array[String], additionalContext: Array[Object]): Array[String] = ???
    

    and receive the following compilation error that I don't understand:

    class ScalaContextObject needs to be abstract, since method getContext in trait BeamSearchContextGenerator of type (x$1: Int, x$2: Array[String], x$3: Array[String], x$4: Array[Object])Array[String] is not defined
    (Note that Array[T with Object] does not match Array[String]: their type parameters differ)
      class ScalaContextObject {
    

    where BeamSearchContextGenerator is the trait (Java interface) that JavaContextObject already implements:

    public interface BeamSearchContextGenerator<T> {
      public String[] getContext(int index, T[] sequence, String[] priorDecisions, Object[] additionalContext);
    }
    

    My question is: what does this error mean and how do I avoid it?

    by Oleg Golovanov at August 02, 2015 10:46 AM

    QuantOverflow

    Deriving the definition of stochastic integrals with respect to Ito processes from first principles

    When I first encountered the definition of integrals with respect to Ito processes (Shreve's Stochastic Calculus for Finance Vol II), I didn't think twice. However, I wanted to see if the definition could be derived.

    In the rest of this post $\bar{f}$ is such that $\bar{f}'=f$ and $t_{j}^{f}$ is such that $t_{j}\leq t_{j}^{f}\leq t_{j+1}$ according to the mean-value theorem, for some process $f$.

    Consider the process $$X(t)=X(0)+\int_{0}^{t}\Delta(u)\;dW(u)+\int_{0}^{t}\Theta(u)\;du$$ where the processes $\Theta_{t}(\omega)$ and $\Delta_{t}(\omega)$ are adapted to the Brownian motion $W_{t}(\omega)$. Then if $\Gamma_{t}(\omega)$ is adapted to $X_{t}(\omega)$, we define $$\int_{0}^{t}\Gamma(u)\;dX(u)=\int_{0}^{t}\Gamma(u)\Delta(u)\;dW(u)+\int_{0}^{t}\Gamma(u)\Theta(u)\;du,$$ which is of course what we would expect by writing (formally)

    $$\int_{0}^{t}\Gamma(u)(\Delta(u)\;dW(u)+\Theta(u)\;du)=\int_{0}^{t}\Gamma(u)\Delta(u)\;dW(u)+\int_{0}^{t}\Gamma(u)\Theta(u)\;du.$$

    It seems this should be derivable (rather easily) from the original definition of the Ito stochastic integral. However, when I tried to do this, I failed:

    $$\begin{align*} \int_{0}^{t}\Gamma(u)\;dX(u)&=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\int_{t_{j}}^{t_{j+1}}\Theta(u)\;du\right)\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Theta(u)\;du\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})(\bar{\Theta}(t_{j+1})-\bar{\Theta}(t_{j}))\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Theta(t_{j}^{\Theta})(t_{j+1}-t_{j})\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\int_{0}^{t}\Gamma(u)\Theta(u)\;du. \end{align*}$$ where $t_{j}\leq t_{j}^{\Theta}\leq t_{j+1}$ and $\bar{\Theta}'=\Theta.$

    At this point it would appear nothing can be done with the first sum since we don't have a mean-value theorem for Ito stochastic integrals (as quantified by Ito's lemma); i.e. there does not in general exist a $t_{j}^{\Delta}\in[t_{j},t_{j+1}]$ such that $$\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)=\Delta(t_{j}^{\Delta})(W(t_{j+1}-W(t_{j})).$$

    And in any event it doesn't really matter, since even if we had this result, it would not be known a priori whether the resulting sum converges (or whether it converges to the correct stochastic integral) due to the sensitivity of the limiting sums with respect to the sampling point used.

    To get a more concrete sense of the difficulty, consider the special case $\Delta(u)=f(W(u))$.

    Then $$\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)=\bar{f}(W(t_{j+1}))-\bar{f}(W(t_{j}))-\frac{1}{2}\int_{t_{j}}^{t_{j+1}}f'(W(u))\;du,$$

    and the first sum becomes $$ \begin{array}{l} \lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\bar{f}(W(t_{j+1}))-\bar{f}(W(t_{j}))-\frac{1}{2}\int_{t_{j}}^{t_{j+1}}f'(W(u))\;du\right)\\ \;\;\;\;=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\bar{f}(W(t_{j+1}))-\bar{f}(W(t_{j}))\right)-\lim_{n\to\infty}\frac{1}{2}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\int_{t_{j}}^{t_{j+1}}f'(W(u))\;du\right)\\ \;\;\;\;=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})f(W(t_{j}^{f}))(W(t_{j+1})-W(t_{j}))-\lim_{n\to\infty}\frac{1}{2}\sum_{j\in\Pi_{n}}\Gamma(t_{j})f'(W(t_{j}^{f'}))(t_{j+1}-t_{j})\\ \;\;\;\;=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Delta(t_{j}^{f})(W(t_{j+1})-W(t_{j}))-\frac{1}{2}\int_{0}^{t}\Gamma(u)f'(W(u))\;du.\end{array} $$

    Putting this together yields

    $$\int_{0}^{t}\Gamma(u)\;dX(u)=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Delta(t_{j}^{f})(W(t_{j+1})-W(t_{j}))-\frac{1}{2}\int_{0}^{t}\Gamma(u)f'(W(u))\;du+\int_{0}^{t}\Gamma(u)\Theta(u)\;du,$$ and it's hard to see how this ends up being equal to the original definition.

    One would need to somehow show that the mean-value sampling $\Delta(t_{j}^{f})$ in the first sum results in

    $$\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Delta(t_{j}^{f})(W(t_{j+1})-W(t_{j}))=\int_{0}^{t}\Gamma(u)\Delta(u)\;dW(u)+\frac{1}{2}\int_{0}^{t}\Gamma(u)f'(W(u))\;du.$$

    And in any event, this is just a special case of the process $\Delta_{t}(\omega)$.

    I have no doubt the issue can be resolved analytically; however, while the difficulties remain unresolved, it tends to make (in my mind) the definition somewhat artificial.

    by Taylor Martin at August 02, 2015 10:45 AM

    StackOverflow

    What is the mutual exclusion and visibility of change in the context of concurrency and how they are related

    What are mutual exclusion and visibility of change in the context of concurrency, and how are they related? If possible, please give intuitive yet comprehensive and detailed explanations and examples.
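
    As a rough illustration only (a sketch of my own, not an authoritative definition): mutual exclusion is about at most one thread executing a critical section at a time, while visibility is about one thread's writes being guaranteed to be seen by other threads.

    class Counter {
      // Visibility: @volatile guarantees a write by one thread is seen by readers
      // in other threads (no stale cached value).
      @volatile private var running: Boolean = true

      // Mutual exclusion: synchronized ensures only one thread at a time performs
      // the read-modify-write below, so no increments are lost.
      private var count: Int = 0

      def increment(): Unit = this.synchronized { count += 1 }
      def value: Int = this.synchronized { count }

      def stop(): Unit = running = false
      def isRunning: Boolean = running
    }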

    by Humoyun at August 02, 2015 10:40 AM

    Using scala higher-kinded types for Executor and Commands

    I am not very experienced with Scala, but I was trying to overcome a limitation of Java generics. The idea is to have a Command trait which is very generic: def exec:T. There are Commands which should only be compatible with specific devices.

    trait Command[T] {
        def exec:T
    }
    
    trait Device1Command[T] extends Command[T]{}
    trait Device2Command[T] extends Command[T]{}
    

    In AbstractCommandExecutor I use higher-kinded types to provide an generic implementation for any commands:

    abstract class AbstractCommandExecutor[C[T] <: Command[T]] {
      def exec[T](cmd: C[T]):T={
           cmd.exec
      }
    }
    

    Then I have concrete executors which are device-specific and only allow to execute compatible Commands:

    class Device1Executor extends AbstractCommandExecutor[Device1Command]{}
    class Device2Executor extends AbstractCommandExecutor[Device2Command]{}
    

    Some commands:

    class Device1Command1 extends Device1Command[String]{
      override def exec() = "testcmd1"
    }
    
    class Device1Command2 extends Device1Command[Int]{
      override def exec() = 2345
    }
    
    class Device2Command2 extends Device2Command[String]{
      override def exec() = "device2"
    }
    

    Then I have a test object:

    object Test extends App{
    
      val exec1 = new Device1Executor
      val exec2 = new Device2Executor
    
      println(exec1.exec(new Device1Command1))   // OK 
      println(exec1.exec(new Device1Command2))   // OK
      //println(exec1.exec(new Device2Command2)) // does not compile, as expected
    
      println(exec2.exec(new Device2Command2))   // OK
    }
    

    The program seems to work, it outputs

    testcmd1
    2345
    device2
    

    Does this make any sense? Is it a hack that abuses this kind of generics and should be done another way, or is this approach OK?

    by user140547 at August 02, 2015 10:09 AM

    Scala forward reference of nested recursive function

    I have this really simple method definition with a nested recursive function:

    def bar(arr : Array[Int]) : Int = {
      val foo : Int => Int = (i: Int) => if(i == 0) 0 else i + foo(i-1)
      foo(3)
    }
    

    But I get this error:

    <console>:36: error: forward reference extends over definition of value foo
         val foo : Int => Int = (i: Int) => if(i == 0) 0 else i + foo(i-1)
                                                                  ^
    

    If I just put the val foo: ... = ... line by itself, and not nested within a def, everything works.
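
    One possible workaround (a sketch): the forward-reference restriction on local vals does not apply to lazy vals or to defs, so either of the following compiles.

    def bar(arr: Array[Int]): Int = {
      // Option 1: make the recursive function value lazy
      lazy val foo: Int => Int = (i: Int) => if (i == 0) 0 else i + foo(i - 1)

      // Option 2 (alternative): use an inner def instead of a function value
      // def foo(i: Int): Int = if (i == 0) 0 else i + foo(i - 1)

      foo(3)
    }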

    by CJ Cobb at August 02, 2015 10:02 AM

    Scala - Safely get element from multidimensional array

    val arr = Array.fill[String](6, 6)("dark")
    

    Unsafe get:

     arr(9)(9)
    >java.lang.ArrayIndexOutOfBoundsException: 9
    

    I use something like this (but it's ugly):

    arr.lift(2).flatMap(_.lift(2))
    
    >res0: Option[String] = Some(dark)
    

    Is there a better way?
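
    One alternative (a sketch; safeGet is a hypothetical helper name) is to wrap the bounds checks once:

    def safeGet[A](arr: Array[Array[A]])(i: Int, j: Int): Option[A] =
      if (arr.isDefinedAt(i) && arr(i).isDefinedAt(j)) Some(arr(i)(j)) else None

    safeGet(arr)(2, 2) // Some("dark")
    safeGet(arr)(9, 9) // None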

    by EnverOsmanov at August 02, 2015 09:46 AM

    TheoryOverflow

    what can be said about complexity of "typical" supercomputing programs/ applications? any NP hard?

    Supercomputers have risen dramatically in their computational power over the last few decades, due to Moore's law and also increasing parallelism in hardware and software. Many different types of analysis algorithms are now run for many diverse areas of science.

    What can be said about the complexity of typical supercomputing jobs/applications? Are they "mostly" in P? Are there examples of major supercomputing projects that tackle NP-hard problems or harder? Is there some published study/survey of the complexity of supercomputing problems?

    My (rough) understanding is that maybe "most" are in P. For example, "grid" calculations for 3d volumes are typically something like O(n^3) where n is the grid distance/length. Molecular dynamics simulations have O(n^2) calculations where n is the number of particles. Many other calculations are done on matrices, which are typically O(n^2), etc. (Not sure about fluid dynamics simulations.) PageRank might be O(n) or at least Ptime.

    (This question is partly motivated by discussion on Aaronson's blog post "introducing some British people to P vs NP", where there are questions about using supercomputers for theorem proving in the comments, etc.)

    by vzn at August 02, 2015 09:35 AM

    StackOverflow

    What's the relation between "generics" and "higher-order types"?

    From this question: What is a higher kinded type in Scala?, I understand what higher-order types are (and also first-order types and proper types).

    But there is still a question: What's the relation between generics and "higher-order types"?

    I know Java supports generics, which is like the first-order type in Scala.

    Which of the following are correct?

    1. In Scala, only first-order types count as generics
    2. In Scala, first-order and higher-order types both belong to generics
    3. In Java, generics just mean first-order types; it's not complete
    4. Generics is a common term meaning we can "abstract" over types, whether first-order or higher-order
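
    For what it's worth, a minimal sketch of the distinction the question is circling around: a first-order generic abstracts over a proper type, while a higher-kinded one abstracts over a type constructor, which Java's generics cannot express directly.

    import scala.language.higherKinds

    // First-order: abstracts over a proper type A (Java generics can do this too)
    class Box[A](val value: A)

    // Higher-kinded: abstracts over a type constructor F[_]
    trait Functor[F[_]] {
      def map[A, B](fa: F[A])(f: A => B): F[B]
    }

    object ListFunctor extends Functor[List] {
      def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
    }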

    by Freewind at August 02, 2015 09:16 AM

    How to sum/combine each value in List of List in scala

    Given the following Scala List:

    val l = List(List("a1", "b1", "c1"), List("a2", "b2", "c2"), List("a3", "b3", "c3"))
    

    How can I get:

    List("a1a2a3","b1b2b3","c1c2c3")
    

    Is it possible to use zipped.map(_ + _) on a list that contains more than two lists? Or is there another way to solve this?
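
    One possible way (a sketch) is to transpose first, which regroups the elements by position regardless of how many inner lists there are, and then concatenate each group:

    val l = List(List("a1", "b1", "c1"), List("a2", "b2", "c2"), List("a3", "b3", "c3"))

    l.transpose.map(_.mkString)
    // List("a1a2a3", "b1b2b3", "c1c2c3")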

    by zunzelf at August 02, 2015 09:12 AM

    UnixOverflow

    lost FreeBSD efi loader after upgrading windows 10

    How can I restore the option to boot FreeBSD after it was lost following the upgrade to Windows 10? I use UEFI FreeBSD 10 and did not create a separate ESP.

    by kAldown at August 02, 2015 09:09 AM

    StackOverflow

    Can compile Scala programs but can't run them

    I am able to compile Scala programs using scalac in the terminal, but when I try to run them I get the following error.

    Charless-Macintosh:src Charles$ scalac hello.scala
    Charless-Macintosh:src Charles$ scala HelloWorld
    No such file or class on classpath: HelloWorld
    

    Is this to do with the .profile settings for Scala? I'm pretty confused as to what is happening. Many thanks.

    by Mark at August 02, 2015 08:53 AM

    StackOverflow

    How to parse > character in Clojure Instaparse?

    I am trying to parse the > character in Clojure Instaparse. I have tried |> and |\> but the parser doesn't seem to recognize any of these. Does anyone know the correct syntax?

    by Zubair at August 02, 2015 08:49 AM

    Play framework and scala

    I am new to web development and I am trying to use Play Framework; however, when I run the command activator new to create a new project I receive the following error:

        $ activator new
    java.lang.ExceptionInInitializerError
            at activator.ActivatorCli$$anonfun$apply$1.apply$mcI$sp(ActivatorCli.sca
    la:21)
            at activator.ActivatorCli$$anonfun$apply$1.apply(ActivatorCli.scala:19)
            at activator.ActivatorCli$$anonfun$apply$1.apply(ActivatorCli.scala:19)
            at activator.ActivatorCli$.withContextClassloader(ActivatorCli.scala:179
    )
            at activator.ActivatorCli$.apply(ActivatorCli.scala:19)
            at activator.ActivatorLauncher.run(ActivatorLauncher.scala:28)
            at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
            at xsbt.boot.Launch$.withContextLoader(Launch.scala:129)
            at xsbt.boot.Launch$.run(Launch.scala:109)
            at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:36)
            at xsbt.boot.Launch$.launch(Launch.scala:117)
            at xsbt.boot.Launch$.apply(Launch.scala:19)
            at xsbt.boot.Boot$.runImpl(Boot.scala:44)
            at xsbt.boot.Boot$.main(Boot.scala:20)
            at xsbt.boot.Boot.main(Boot.scala)
    Caused by: java.lang.RuntimeException: BAD URI: file://c:/Users/Euaggelos/Docume
    nts/Weather-fi2/Weather-fi2
    
            at activator.properties.ActivatorProperties.uriToFilename(ActivatorPrope
    rties.java:106)
            at activator.properties.ActivatorProperties.ACTIVATOR_HOME_FILENAME(Acti
    vatorProperties.java:113)
            at activator.properties.ActivatorProperties.ACTIVATOR_TEMPLATE_LOCAL_REP
    O(ActivatorProperties.java:179)
            at activator.UICacheHelper$.<init>(UICacheHelper.scala:31)
            at activator.UICacheHelper$.<clinit>(UICacheHelper.scala)
            ... 15 more
    Caused by: java.lang.IllegalArgumentException: URI has an authority component
            at java.io.File.<init>(File.java:423)
            at activator.properties.ActivatorProperties.uriToFilename(ActivatorPrope
    rties.java:101)
            ... 19 more
    Error during sbt execution: java.lang.ExceptionInInitializerError
    

    I have Windows 8.1 64-bit and I want to be able to use IntelliJ. I have installed Java, sbt and Scala:

    java version "1.7.0_71"
    Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
    Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
    
    Scala code runner version 2.11.4 -- Copyright 2002-2013, LAMP/EPFL
    

    Any suggestions are welcome.

    by Angelo Uknown at August 02, 2015 08:13 AM

    Order of parameters to foldright and foldleft in scala

    Why does the foldLeft take

    f: (B, A) => B
    

    and foldRight take

    f: (A, B) => B
    

    foldLeft could have been written to take f: (A, B) => B. I am trying to understand the reasoning for the difference in the order of parameters.
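
    A small worked example (a sketch) of why the accumulator sits where it does: foldLeft nests from the left, so the accumulator is the left argument; foldRight nests from the right, so it is the right argument.

    // List(1, 2, 3).foldLeft(z)(f)  == f(f(f(z, 1), 2), 3)
    // List(1, 2, 3).foldRight(z)(f) == f(1, f(2, f(3, z)))

    List(1, 2, 3).foldLeft(0)(_ - _)   // ((0 - 1) - 2) - 3 == -6
    List(1, 2, 3).foldRight(0)(_ - _)  // 1 - (2 - (3 - 0)) ==  2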

    by rufus16 at August 02, 2015 07:18 AM

    disable parallelExecution in a scala gradle project

    I am learning Scala in the Eclipse Scala IDE and the project is built using Gradle. Just wondering, how do I disable parallel execution of Scala tests?

    The solutions described in the Stack Overflow questions below use an sbt build:

    How to run subprojects tests (including setup methods) sequentially when testing

    How to turn off parallel execution of tests for multi-project builds?

    Why does scalatest mix up the output?

    Note: I am running my tests in the Scala IDE using IDE support (Run as Scala Test).

    by rits at August 02, 2015 07:17 AM

    What is the difference between s"foo $bar" and "foo %s".format(bar) in scala

    I am new to Scala. I used to write formatted strings in Scala this way: "foo %s".format(bar). But recently I have found code that writes formatted strings this way: s"foo $bar". I was just wondering, is there a major difference between the two? Thanks in advance.

    by eddard.stark at August 02, 2015 06:58 AM

    how to install spec2 with sbt

    I am very new to sbt, and I'm trying to add specs2 as a dependency.

    I've added the following code to build.sbt, as the specs2 official site says:

    libraryDependencies += "org.specs2" %% "specs2-core" % "3.6.4" % "test"
    
    scalacOptions in Test ++= Seq("-Yrangepos")
    

    But it didn't include specs2 as a dependency for my project after running 'sbt compile'.
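
    A minimal build.sbt sketch that should pull specs2 in (the project name and Scala version below are placeholders); note that the "test" qualifier puts specs2 on the test classpath only, so it shows up for sbt test:compile and sbt test rather than the main compile configuration, and an sbt reload/update is needed after editing the build:

    name := "my-project"

    scalaVersion := "2.11.7"

    libraryDependencies += "org.specs2" %% "specs2-core" % "3.6.4" % "test"

    scalacOptions in Test ++= Seq("-Yrangepos")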

    by Angle Tom at August 02, 2015 06:41 AM

    CompsciOverflow

    Why is pure literal elimination absent in DPLL-based algorithms like Chaff?

    I'm looking into various SAT-solvers and trying to understand how they work and why they are designed in certain ways. (But I'm not in a university at the moment and I do not know anyone who is a professor. So I'm posting here hoping that someone can help me out. I'd really appreciate.)

    In Chaff, BCP (Boolean Constraint Propagation) is implemented differently from the original DPLL: it does it by watching two literals at a time (a technique slightly different from one initially suggested in SATO: An Efficient Propositional Prover) according to the 2001 paper, Chaff: Engineering an Efficient SAT Solver. There is, however, no mention of pure literal elimination in this paper.

    In The Complexity of Pure Literal Elimination, Jan Johannsen wrote

    The current best implementations of DLL-type SAT solvers, like Chaff or BerkMin sacrifice this heuristic in order to gain efficiency in unit propagation.

    where "this heuristic" is referring to pure literal elimination. My understanding of what pure literal elimination does is that it

    1. searches for all single-polar (or pure) literals
    2. assigns a boolean value to them such that each yields True
    3. in which case we can now delete all the clauses containing them

    Here is my question:

    How is the sacrifice necessary? Is there a good reason why pure literal elimination is absent in DPLL-based algorithms like Chaff? Can't we just do pure literal elimination in each decision level (or at least do it at the start before branching)?

    by Arch Wilhes at August 02, 2015 05:55 AM

    StackOverflow

    returning dynamic typed values in applydynamic scala

    I'm trying to use apply dynamic for getters and setters.

    I need to store the values along with their types and return the same typed values (instead of returning Any) at runtime.

    This is my code where I'm using applydynamic.

      val data = new scala.collection.mutable.LinkedHashMap[String, Any]()
    
      def applyDynamic(methodName: String)(args: Any*): Any = {
    
        var (callType, fieldName) = methodName.toLowerCase().splitAt(3)
        callType match {
          case "set" => return set(fieldName, args.head)
          case "get" => return get(fieldName)
          case "has" => return has(fieldName)
          case _     => throw new RuntimeException("Unknown Call Type")
        }
      }
    

    I've read this article. I was thinking of storing the TypeTags, but I'm not sure how to return the appropriate type even if I have the corresponding TypeTag in applyDynamic.

    Can anyone help me with the same?
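
    A sketch of one direction this could take (illustrative only, not a full answer): the runtime map can keep a TypeTag next to each value, but the static return type still has to be supplied at the call site, for example through a type parameter; otherwise the best applyDynamic can do is return Any.

    import scala.reflect.runtime.universe._

    class Record {
      private val data = scala.collection.mutable.LinkedHashMap[String, (Any, TypeTag[_])]()

      def set[T: TypeTag](field: String, value: T): Unit =
        data(field) = (value, typeTag[T])

      // the expected type is stated by the caller and checked against the stored tag
      def get[T: TypeTag](field: String): Option[T] =
        data.get(field).collect {
          case (value, tag) if tag.tpe <:< typeOf[T] => value.asInstanceOf[T]
        }
    }

    val r = new Record
    r.set("age", 42)
    val age: Option[Int] = r.get[Int]("age")           // Some(42)
    val wrong: Option[String] = r.get[String]("age")   // None, stored tag does not match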

    by Yoda at August 02, 2015 05:46 AM

    Scala under MSys2 - failure to initialize terminal

    My environment is Windows 10 x64/Scala 2.11.7/Msys2 latest.

    When running Scala from MSys2 console, I see the following:

    $ scala
    [ERROR] Terminal initialization failed; falling back to unsupported
    java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32
        at org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
        at jline.WindowsTerminal.getConsoleMode(WindowsTerminal.java:204)
        at jline.WindowsTerminal.init(WindowsTerminal.java:82)
        at jline.TerminalFactory.create(TerminalFactory.java:101)
        at jline.TerminalFactory.get(TerminalFactory.java:158)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
        at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
        at scala.tools.nsc.interpreter.jline.JLineConsoleReader.<init>(JLineReader.scala:61)
        at scala.tools.nsc.interpreter.jline.InteractiveReader.<init>(JLineReader.scala:33)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiate$1$1.apply(ILoop.scala:865)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiate$1$1.apply(ILoop.scala:862)
        at scala.tools.nsc.interpreter.ILoop.scala$tools$nsc$interpreter$ILoop$$mkReader$1(ILoop.scala:871)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$15$$anonfun$apply$8.apply(ILoop.scala:875)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$15$$anonfun$apply$8.apply(ILoop.scala:875)
        at scala.util.Try$.apply(Try.scala:192)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$15.apply(ILoop.scala:875)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$15.apply(ILoop.scala:875)
        at scala.collection.immutable.Stream.map(Stream.scala:418)
        at scala.tools.nsc.interpreter.ILoop.chooseReader(ILoop.scala:875)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$2.apply(ILoop.scala:916)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:916)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
        at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
        at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:911)
        at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74)
        at scala.tools.nsc.MainGenericRunner.run$1(MainGenericRunner.scala:87)
        at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:98)
        at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:103)
        at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
    
    Welcome to Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79).
    Type in expressions to have them evaluated.
    Type :help for more information.
    
    scala>
    

    When running it from cmd.exe, it works as expected. To debug the issue, I tried the following Scala program:

    object Test extends App {
        println(org.fusesource.jansi.internal.WindowsSupport.getConsoleMode)
    }
    

    When run from Msys2, it produces the following error:

    java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no jansi in java.library.path]
        at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
        at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
        at org.fusesource.jansi.internal.Kernel32.<clinit>(Kernel32.java:37)
        at org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
        at Test$.delayedEndpoint$Test$1(Test.scala:5)
        at Test$delayedInit$body.apply(Test.scala:1)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
        at scala.App$class.main(App.scala:76)
        at Test$.main(Test.scala:1)
        at Test.main(Test.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at scala.reflect.internal.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:70)
        at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
        at scala.reflect.internal.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:101)
        at scala.reflect.internal.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:70)
        at scala.reflect.internal.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:101)
        at scala.tools.nsc.CommonRunner$class.run(ObjectRunner.scala:22)
        at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:39)
        at scala.tools.nsc.CommonRunner$class.runAndCatch(ObjectRunner.scala:29)
        at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:39)
        at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:65)
        at scala.tools.nsc.MainGenericRunner.run$1(MainGenericRunner.scala:87)
        at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:98)
        at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:103)
        at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
    

    What I tried and it didn't work:

    • Some threads on the internet mention that it might be caused by missing VC2008 runtime, so I made sure that I have it both for x64 and x86.
    • Extracting jansi.dll from scala/lib/jline-2.12.1.jar and putting it in my working directory (with "." included in java.library.path).
    • Starting with a fresh MSys2 home directory.
    • running bash --login -i from cmd.exe (same error when running scala)
    • trying 32-bit and 64-bit JREs

    One more thing: the issue doesn't affect sbt; for example, running "sbt console" gives me a working Scala command line, albeit of version 2.10.4.

    by kirillkh at August 02, 2015 05:31 AM

    /r/emacs

    An emacs package for general utility functions

    Community maintained libraries, that provide general utility functions like https://github.com/magnars/dash.el, s.el and f.el for emacs-lisp programming have become quite successful with the advent of package.el and melpa.

    There does not seem to be an equivalent for more general, high-level utility functions in melpa or anywhere else. To clarify the distinction I am making, here are a few names of functions (check out my half baked implementations in my dotfiles) that might belong in such a package:

    • (defun add-string-to-kill-ring (string) ...)
    • (defun copy-buffer-file-name () ...)
    • (defun copy-buffer-file-path () ...)
    • (defmacro suppress-messages ...)
    • (defun flatten-imenu-index (index) ...)
    • (defun imenu-prefix-flattened ...)
    • (defun fill-or-unfill-paragraph (&optional unfill region) ...)
    • (defun shell-command-on-region-replace (start end command) ...)
    • (defun eval-and-replace () ...)

    I imagine that many of you have implemented functions that do things that are very similar (whether in spirit or in function) to the ones that I have listed above. Maybe you've even lamented, like I often do as I write these functions, that someone has almost certainly implemented the exact function that you are about to write, and has probably done a better job than you can/have time to. Is there any good reason that such a package does not exist? It seems that the community could benefit from having a way to share simple solutions to these problems. It seems like the way these types of code snippets are currently shared is through the (exceptionally disorganized, but nonetheless useful) emacs-wiki and through stack overflow. I think that having a community maintained/somewhat centralized place for these types of functions would be really useful, although I suppose there is always the risk that it would turn into an unwieldy monolith.

    If any of you have any interest in this type of thing, please let me know. I'd be happy to start such a package, but I'm not sure if I have enough clout in the emacs/elisp community to get something like this moving.

    submitted by IvanMalison
    [link] [16 comments]

    August 02, 2015 04:46 AM

    CompsciOverflow

    The meaning of discount factor on reinforcement learning

    After reading about the Google DeepMind achievements on Atari's games, I am trying to understand Q-learning and Q-networks, but I am a little bit confused. The confusion arises around the concept of the discount factor. Brief summary of what I understand: a deep convolutional neural network is used to estimate the optimal expected value of an action. The network has to minimize the loss function $$ L_i=\mathbb{E}_{s,a,r}\left[(\mathbb{E}_{s'}\left[y|s,a\right]-Q(s,a;\theta_i))^2\right] $$ where $\mathbb{E}_{s'}\left[y|s,a\right]$ is $$ \mathbb{E}\left[r+\gamma \max_{a'} Q(s',a';\theta^-_i)\,\middle|\,s,a\right] $$ where $Q$ is a cumulative score value and $r$ is the score value for the action chosen. $s,a$ and $s',a'$ are respectively the state and the action chosen at time $t$ and the state and the action at time $t'$. The $\theta^-_i$ are the weights of the network at the previous iteration. The $\gamma$ is a discount factor that takes into account the temporal difference of the score values. The $i$ subscript is the temporal step. The problem here is to understand why $\gamma$ does not depend on $\theta$.

    From the mathematical point of view $\gamma$ is the discount factor and represents the likelihood to reach the state $s'$ from the state $s$.

    I guess that the network actually learns to rescale the $Q$ according to the true value of $\gamma$, so why not let $\gamma=1$?

    by emanuele at August 02, 2015 04:37 AM

    StackOverflow

    Ansible: Perform Cleanup on Task Failure

    I'm currently writing an Ansible play that follows this general format and is run via a cron job:

    pre_tasks:
      -Configuration / package installation
    
    tasks:
      -Work with installed packages
    
    post_tasks:
      -Cleanup / uninstall packages
    

    The problem with the above is that sometimes a command in the tasks section fails, and when it does the post_tasks section doesn't run, leaving the system in a messy state. Is it possible to force the commands in post_tasks to run even if a failure or fatal error occurs?

    My current approach is to apply ignore_errors: yes to everything under the tasks section, and then apply a when: conditional to each task to individually check if the prior command succeeded.

    This solution seems like a hack, but it gets worse: even with ignore_errors: yes set, if a fatal error is encountered for a task the entire play will still immediately fail, so I also have to run a cron'd bash script to manually check on things after each play execution.

    All I want is a guarantee that even if tasks fails, post_tasks will still run. I'm sure there is a way to do this without resorting to bash script wrappers.

    by Mark at August 02, 2015 04:34 AM

    Lobsters

    StackOverflow

    Cannot access variable value of Parameterised constructor variable value

    I have a parameterized constructor like below.

    public abc(string c)
    {
       a=c;
    }
    

    Then i have Button Event Handler like below.

    private void btnConnect_Click(object sender, EventArgs e)
    {
       MessageBox.Show(c);
    }
    

    So when I do this, the message box appears but shows nothing; it is blank. What is the error? I have debugged the code and found that the constructor has the value, but the message box is not getting the value; it is null.

    by Jahanzaib Niazi at August 02, 2015 04:22 AM

    Why can't I run all Java JUnit Tests from Package Explorer? - Scala Plugin Issue

    Edit 3

    I installed a new version of eclipse (Mars - 4.5.0), and everything works. However, when I reinstall Scala via the Eclipse Marketplace, the issue reappeared. So perhaps it's something with the scala plugin?


    EDIT 2

    I was playing around with it more and found that if I delete certain packages, the functionality returns. Specifically, if I delete packages functional and io, the ability to run the whole project's testing returns. Renaming the packages does not help, only deleting them. Furthermore, if I add JUnit tests in those packages, I am still unable to run that test via the package explorer by running the whole package.


    I'm having an issue with a particular Java project in Eclipse. When I attempt to run all JUnit tests from the project explorer (via [right click on project folder] --> run as --> JUnit Test), I get the error message a lot of people seem to be seeing:

    Problem Launching JUnit Tests: No tests found with test runner 'JUnit 4'

    Clicking OK on the message brings up the Run configurations dialogue.

    What's strange is that the problem seems very isolated to this project at full project scope. I am able to do the following without trouble:

    • Run any single test within this project by opening it and clicking the green run button at top.
    • Run any single test within this project by right clicking on the class within the project explorer and selecting run as JUnit Test
    • Run all tests within any package within this project by the same method.
    • Run all tests within any other project by the same method.

    I've tried the standard stuff mentioned in similar posts, nothing seems to work. Specifically, I've tried:

    • Restarting eclipse
    • Restarting my computer
    • Cleaning and rebuilding the project
    • Deleting the project and recloning from git, then re-adding to eclipse
    • Adding the @RunWith annotation to my test cases
    • Making sure all of my test cases start with "test"
    • Using JUnit3 instead
    • Deleting all JUnit run configurations and recreating this run configuration

    Additionally, I distinctly remember this functionality working a little while ago for this project, but don't remember exactly when. So I must have added/changed/deleted something that has caused the error to appear.


    EDIT @ Durron597's suggestions: None of the suggestions worked, unfortunately. I also tried deleting every JUnit run configuration and trying the create configuration process again, still no luck.

    My eclipse version is: Luna Service Release 2 (4.4.2)

    My JUnit version is: 4.11

    JUnit preferences screenshot: JUnit preferences screenshot


    Here's the code from one test:

    package common;
    
    import static org.junit.Assert.*;
    
    import java.util.ArrayList;
    import java.util.HashSet;
    
    import org.junit.Test;
    
    public class UtilTest {
    
        @Test
        public void testWrappers(){
            short[] s = {1,2,3,4};
            Short[] s2 = Util.boxArr(s);
            short[] s3 = Util.unboxArr(s2);
    
            assertEquals(s.length, s2.length);
            assertEquals(s.length, s3.length);
            for(int i = 0; i < s.length; i++){
                assertEquals(s[i], s2[i].shortValue());
                assertEquals(s[i], s3[i]);
            }
    
            int[] i = {1,2,3,4, Integer.MAX_VALUE, Integer.MIN_VALUE};
            Integer[] i2 = Util.boxArr(i);
            int[] i3 = Util.unboxArr(i2);
    
            assertEquals(i.length, i2.length);
            assertEquals(i.length, i3.length);
            for(int x = 0; x < s.length; x++){
                assertEquals(i[x], i2[x].intValue());
                assertEquals(i[x], i3[x]);
            }
    
            long[] l = {1,2,3,4, Integer.MAX_VALUE, Integer.MIN_VALUE, Long.MAX_VALUE, Long.MIN_VALUE};
            Long[] l2 = Util.boxArr(l);
            long[] l3 = Util.unboxArr(l2);
    
            assertEquals(l.length, l2.length);
            assertEquals(l.length, l3.length);
            for(int x = 0; x < s.length; x++){
                assertEquals(l[x], l2[x].longValue());
                assertEquals(l[x], l3[x]);
            }
    
            float[] f = {1,2,3,4, 0.4f, 0.1f, Float.MAX_VALUE, Float.MIN_NORMAL};
            Float[] f2 = Util.boxArr(f);
            float[] f3 = Util.unboxArr(f2);
    
            assertEquals(f.length, f2.length);
            assertEquals(f.length, f3.length);
            for(int x = 0; x < s.length; x++){
                assertEquals(f[x], f2[x].floatValue(), 0.00001);
                assertEquals(f[x], f3[x], 0.00001);
            }
    
    
            double[] d = {1,2,3,4, 0.4, 0.1, Float.MAX_VALUE, Float.MIN_VALUE, Double.MAX_VALUE, Double.MIN_NORMAL};
            Double[] d2 = Util.boxArr(d);
            double[] d3 = Util.unboxArr(d2);
    
            assertEquals(d.length, d2.length);
            assertEquals(d.length, d3.length);
            for(int x = 0; x < s.length; x++){
                assertEquals(d[x], d2[x].doubleValue(), 0.00001);
                assertEquals(d[x], d3[x], 0.00001);
            }
    
            char[] c = {1,2,3,4, 'a', 'b', '.', Character.MAX_VALUE, Character.MIN_VALUE};
            Character[] c2 = Util.boxArr(c);
            char[] c3 = Util.unboxArr(c2);
    
            assertEquals(c.length, c2.length);
            assertEquals(c.length, c3.length);
            for(int x = 0; x < s.length; x++){
                assertEquals(c[x], c2[x].charValue());
                assertEquals(c[x], c3[x]);
            }
        }
    
        @Test
        public void testRandElement(){
            assertTrue(null==Util.randomElement(null));
    
            HashSet<Integer> s = new HashSet<>();
            assertTrue(null==Util.randomElement(s));
    
            for(int i = 0; i < 10; i++){
                s.add(i);
            }
    
            HashSet<Integer> s2 = new HashSet<>();
            while(! s2.equals(s)){
                Integer i = Util.randomElement(s);
                s2.add(i);
                assertTrue(s.contains(i));
            }
        }
    
        @Test
        public void testPermute(){
            ArrayList<Integer[]> a = Util.permute(new Integer[]{1,2,3,4});
            assertEquals(a.size(), 24);
    
            for(Integer[] i : a){
                assertEquals(i.length, 4);
                assertEquals(10, i[0] + i[1] + i[2] + i[3]);
            }
    
            HashSet<Integer[]> s = new HashSet<>(a);
            assertEquals(s.size(), 24);
    
            a = Util.permute(new Integer[]{});
            assertEquals(a.size(), 1);
        }
    
    }
    

    The whole project is at https://github.com/Mshnik/UsefulThings if that helps as well.

    by Mshnik at August 02, 2015 03:56 AM

    Building Gold linker in FreeBSD

    I followed the steps on http://llvm.org/docs/GoldPlugin.html#lto-how-to-build to build the gold plugin on FreeBSD, but ran into an error.

    Here's a link to a screenshot of the error: http://postimg.org/image/anlpuufbl/

    This is the error message it shows, and as a result I am also unable to get ld-new. I checked, and no CFLAGS were set in etc/make.conf.

    How should I proceed with the installation? I am using the default clang version supplied with FreeBSD 10.1.

    by matrixaliser at August 02, 2015 03:39 AM

    Halfbakery

    StackOverflow

    :body-params vs :form-params in compojure-api

    What's the difference between using :body-params and :form-params in practice when creating an API using compojure-api? For example:

    (POST* "/register" []
        :body-params [username :- String,
                      password :- String]
        (ok)))
    

    vs

    (POST* "/register" []
        :form-params [username :- String,
                      password :- String]
        (ok)))
    

    Which one should be used for an API that's going to be consumed by a single-page clojurescript app and/or native mobile apps?

    by Pablo at August 02, 2015 02:49 AM

    Concise Way of Constructing Create Class with Many Fields?

    Given a case class:

    scala> case class Foo(a: String, b: String, c: String, d: String, 
                           e: String, f: Int, g: String, h: String, i: Int)
    defined class Foo
    

    I have a function: f: Foo => Int, and I need to test that f(Foo) == 1.

    Is there a concise way, for testing purposes, to create an instance of the case class (without specifying each field)?

    I considered making the field lazily evaluated and then using ???, but:

    scala> case class Bar(a: => Int)
    <console>:1: error: `val' parameters may not be call-by-name
    case class Foo(a: => Int)
                      ^
    

    EDIT

    I'm requesting an answer that does not include default parameters.

    by Kevin Meredith at August 02, 2015 02:46 AM

    /r/compsci

    Halfbakery

    Planet Emacsen

    Timo Geusch: Adding TLS support to Emacs 24.5 on Windows

    The Windows build of Emacs 24.5 doesn’t ship with SSL and TLS support out of the box. Normally that’s not that much of a problem until you are trying to access marmalade-repo or have org2blog talk to your own blog via SSL/TLS. Adding SSL and TLS support to the Windows builds of Emacs is easy.… Read More »

    The post Adding TLS support to Emacs 24.5 on Windows appeared first on The Lone C++ Coder's Blog.

    by Timo Geusch at August 02, 2015 01:58 AM

    /r/scala

    QuantOverflow

    Q regarding amortization of 500,000 loans

    I am brand new to this forum. I asked this question on the main StackOverflow site and it was suggested that I ask here.

    My task is to find a method to quickly calculate the monthly cash flow on nearly 500,000 loans. However, this problem cannot be solved with simple amortization schedules. The loans have a variety of attributes like periodic reset dates, caps, floors and balloon dates. Some are variable, some fixed rate and nearly all of them are aged to some degree (months or years). After running the amortization schedules, I need to input different sets of prepayment speed assumptions, then run it all again… 20 more times!

    I am currently using Excel on a smaller data-set, but Excel doesn't have the capacity to perform these tasks quickly. A recent test of only 10,000 loans took nearly 5 minutes.

    For starters, I’d like to know: Is anyone aware of any existing companies that already do this? Alternatively, If I decide to build something from scratch, what programming language would be most appropriate given the size. My best estimate is between 5 and 10 billion calculations, maybe more.

    Thanks in advance for any and all replies.

    by jim at August 02, 2015 01:39 AM

    StackOverflow

    IntelliJ IDEA reports errors in routes

    1. I've created a project by activator new play-scala-intro play-scala-intro
    2. In IDEA I've clicked File -> Import project from Existing sources and selected SBT.

    enter image description here

    This is the default project structure:

    I've also tried all the tips from here (suggested for Play 2.2 — 2.3, whereas I have Play 2.4). But I haven't tried adding target/scala-*/classes_managed to sources because my project doesn't contain this folder.

    Versions

    • I have Idea 14 Ultimate with installed "Scala" and "Playframework support" plugins.
    • Play Framework: addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.1")
    • scalaVersion := "2.11.6"

    by Jofsey at August 02, 2015 01:33 AM

    Ambiguous implicit in Play

    I don't know where to begin debugging this. Play 2.3.5 with Slick and SecureSocial.

    My routes were working fine until I wrote the first one accepting a parameter:

    GET        /activity/            controllers.ActivitiesController.show(id: Int)
    

    As soon as that route was added, the compiler pointed to it and produced the following errors:

    ambiguous implicit values: [error]  both method wrapJava in object HandlerInvokerFactory of     type => 
    play.core.Router.HandlerInvokerFactory[play.mvc.Result]
    [error]  and method wrapJavaPromise in object HandlerInvokerFactory of type => 
    play.core.Router.HandlerInvokerFactory[play.libs.F.Promise[play.mvc.Result]]
    [error]  match expected type play.core.Router.HandlerInvokerFactory[T]
    

    by blaha at August 02, 2015 01:21 AM

    Assign an anonymous function to a scala method?

    I may be way off here - but would appreciate insight on just how far ..

    In the following getFiles method, we have an anonymous function being passed as an argument.

    def getFiles(baseDir: String, filter: (File, String) => Boolean ) = {
     val ffilter = new FilenameFilter {
       // How to assign to the anonymous function argument 'filter' ?
       override def accept(dir: File, name: String): Boolean = filter  
     }
     ..
    

    So that override is quite incorrect: that syntax tries to evaluate the filter() function which results in a Boolean.

    Naturally we could simply evaluate the anonymous function as follows:

     override def accept(dir: File, name: String): Boolean = filter(dir, name)  
    

    But that approach does not actually replace the method.

    So: how to assign the accept method to the filter anonymous function?

    Update the error message is

    Error:(56, 64) type mismatch;
     found   : (java.io.File, String) => Boolean
     required: Boolean
           override def accept(dir: File, name: String): Boolean = filter // { filter(dir, name) }
    

    Another update: thinking more on this, I am going to take a swag: dynamic languages like Python and Ruby can handle assignment of class methods to arbitrary functions, but Scala is compiled and thus the methods are effectively static. A definitive answer on this hunch would be appreciated.
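
    For what it's worth, a minimal sketch of the delegating pattern (added here for illustration): the override cannot be replaced by the function value itself, but it can simply close over the filter argument and forward the call.

    import java.io.{File, FilenameFilter}

    def getFiles(baseDir: String, filter: (File, String) => Boolean): Array[File] = {
      val ffilter = new FilenameFilter {
        // accept delegates to the function value captured from the enclosing scope
        override def accept(dir: File, name: String): Boolean = filter(dir, name)
      }
      Option(new File(baseDir).listFiles(ffilter)).getOrElse(Array.empty[File])
    }

    // usage: keep only Scala sources
    val scalaSources = getFiles(".", (_, name) => name.endsWith(".scala"))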

    by javadba at August 02, 2015 12:46 AM

    QuantOverflow

    How to get around flat likelihood function when calibrating GBM parameters

    (Hope this is the correct place for this question - I posted it first on stackoverflow:)

    I want to calibrate jointly the drift mu and volatility sigma of a geometric brownian motion,

    log(S_t) = log(S_{t-1}) + (mu - 0.5*sigma^2)*Deltat + sigma*sqrt(Deltat)*Z_t
    

    where Z_t is a standard normally distributed random variable, and I am testing this by generating data x = log(S_t) via

    x(1) = 0;
    for i = 2:N
      x(i) = x(i-1) + (mu-0.5*sigma^2)*Deltat + sigma*sqrt(Deltat)*randn;
    end
    

    and my (log-)likelihood function

    function LL = LL(x, pars)
    mu    = pars(1);
    sigma = pars(2);
    Nt = size(x,2);
    LL = 0;
    for j = 2:Nt
      LH_j = normpdf(x(j), x(j-1)+(mu-0.5*sigma^2)*Deltat, sigma*sqrt(Deltat));
      LL = LL + log(LH_j);
    end  
    

    which I maximize using fmincon (because sigma is constrained to be positive), with starting values 0.15 and 0.3, true values 0.1 and 0.2, and N = Nt = 1000 or 100000 generated points over one year (=> Deltat = 0.0001 or 0.000001).

    Calibrating the volatility alone yields a nice likelihood function with a maximum around the true parameter, but for small Deltat (less than, say, 0.1) calibrating both mu and sigma persistently shows a (log-)likelihood surface that is very flat in mu (at least around the true parameter); I would expect a maximum there as well. Intuitively, it should be possible to calibrate a GBM model to a data series of 100 stock prices in a year, making the average Deltat = 0.01.

    Any sharing of experience or help is greatly appreciated (thoughts passing through my mind: the likelihood function is not right / this is a normal behaviour / too few data points / data generation is not correct / ...?).
    Thanks!
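
    (A standard observation that may explain the flatness, added here as a sketch: for this model the maximum-likelihood estimate of the drift term uses only the endpoints of the sample path,

    $$ \hat\mu - \tfrac{1}{2}\hat\sigma^2 \;=\; \frac{x(N)-x(1)}{T}, \qquad \operatorname{Var}(\hat\mu) \;\approx\; \frac{\sigma^2}{T}, $$

    so shrinking Deltat while keeping the total horizon T fixed adds essentially no information about mu and the likelihood stays flat in that direction; only a longer horizon sharpens it. The volatility, by contrast, is estimated from the quadratic variation and does improve with more points.)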

    by Futurist at August 02, 2015 12:39 AM

    /r/emacs

    Lobsters

    HN Daily

    August 01, 2015

    /r/compsci

    Has anyone here used the Brookshear computer science textbook? 12th edition to be precise.

    I'm using this book at the moment and unfortunately they haven't included the solutions to the chapter tests. So I really have no idea if I'm doing anything correctly. How can I find out if I'm right? is there a solutions manual somewhere?

    submitted by ImperviousSeahorse
    [link] [comment]

    August 01, 2015 11:36 PM

    StackOverflow

    scala append to a mutable LinkedList

    Please check this

    import scala.collection.mutable.LinkedList
    
    var l = new LinkedList[String]
    
    l append LinkedList("abc", "asd")
    
    println(l)
    // prints 
    // LinkedList()
    

    but

    import scala.collection.mutable.LinkedList
    
    var l = new LinkedList[String]
    
    l = LinkedList("x")
    l append LinkedList("abc", "asd")
    
    println(l)
    // prints 
    // LinkedList(x, abc, asd)
    

    Why does the second code snippet work but the first one doesn't? This is on Scala 2.10.
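
    If I recall the semantics of the (since-deprecated) mutable.LinkedList correctly, append works by mutating the next pointer of the last node; an empty LinkedList has no node to mutate, so the appended elements are only reachable through append's return value. Under that assumption, a sketch of the workaround is to keep the returned list whenever the buffer may be empty:

    var l = new LinkedList[String]
    l = l append LinkedList("abc", "asd")   // reassign: append returns the resulting list
    println(l)                              // LinkedList(abc, asd)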

    by weima at August 01, 2015 11:31 PM

    What is the difference between body and body-params in compojure-api?

    In compojure-api I noticed these two ways of specifying the API of a resource:

    (POST* "/register" []
        :body [user UserRegistration]
        (ok)))
    

    and

    (POST* "/register" []
        :body-params [username :- String,
                      password :- String]
        (ok)))
    

    What is the difference between these two? What are the implications of using one vs. the other?

    by Pablo at August 01, 2015 11:26 PM

    CompsciOverflow

    What are the conditions that make the A* algorithm optimal over the other unidirectional search algorithms

    I was wondering what specific conditions make the A* algorithm optimal, in terms of node expansions, over the other unidirectional search algorithms:

    When the same heuristic information is given to all the algorithms.

    by skn at August 01, 2015 10:44 PM

    StackOverflow

    Java component not updating most of the time

    I think I may have found a platform issue with openjdk-1.8.0_45 under ubuntu 15.04. The following program compiles and runs:

    import java.awt.*;
    import java.awt.event.*;
    import java.util.*;
    import javax.swing.*;
    import javax.swing.Timer;
    
    public class TimerTime extends JPanel implements ActionListener
    {
        private JLabel timeLabel;
    
        public TimerTime()
        {
            timeLabel = new JLabel( String.valueOf(System.currentTimeMillis()) );
            add( timeLabel );
    
            Timer timer = new Timer(50, this);
            timer.setInitialDelay(1);
            timer.start();
        }
    
        @Override
        public void actionPerformed(ActionEvent e)
        {
            //System.out.println(e.getSource());
            timeLabel.setText(String.valueOf(System.currentTimeMillis()));
        }
    
        private static void createAndShowUI()
        {
            JFrame frame = new JFrame("TimerTime");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add( new TimerTime() );
            frame.setLocationByPlatform( true );
            frame.pack();
            frame.setVisible( true );
        }
    
        public static void main(String[] args)
        {
            EventQueue.invokeLater(new Runnable()
            {
                public void run()
                {
                    createAndShowUI();
                }
            });
        }
    }
    

    The time visually updates about twice per second until any kind of input happens to it - the mouse moves over it, it gets dragged, etc. Then it will get updated at the proper rate, or at least much faster. Does this happen on your system? Can anyone provide an explanation or theory for this behavior? If I put prints each time timeLabel's setText() is called, I can see that it's being called approximately 20 times per second, but despite this the actual on-screen window is only being updated twice per second until other input events happen.

    by Reepca at August 01, 2015 10:44 PM

    QuantOverflow

    How to calculate implied volatility smile of basket using correlations?

    For a basket, the realized volatility can be calculated using:

    $$\sqrt{\sigma_1^2 + \sigma_2^2 + 2 \sigma_1 \sigma_2 \rho}$$

    Suppose I have the volatility surfaces of two underlyings S1, S2 (in strike space).

    If for each point I calculate the vols using the above formula, how accurate is the approximation? I can extend this to multiple assets using a simple Cholesky transformation.

    Correlation used is historical correlation, and not implied correlation.

    by user139258 at August 01, 2015 10:39 PM

    StackOverflow

    Starting a transaction with JDBC in Clojure without a block/function

    Is it possible to start a transaction in Clojure using JDBC without having to encase the code in a block? Obviously I'd have to call another function to end the transaction later one.

    by Pablo at August 01, 2015 10:18 PM

    Halfbakery

    QuantOverflow

    Derivation using Ito's Lemma of price process

    define q(t) as the log price minus a linear trend

    $$ q(t) = logP(t) - \mu t $$

    assume the log price process follows Equation 1: $$ dq(t) = - \Theta q(t) dt + \sigma dW(t) $$

    Can you show that the solution to Equation 1 is: $$ \ln P(t+h) - \ln P(t) = \mu h + \left(e^{-h \Theta} - 1\right) \ln P(t) + \sigma \int_t^{t+h} e^{-\Theta(t-u)}\, dW(u) $$

    by joesyc at August 01, 2015 09:49 PM

    StackOverflow

    Clojure map. Pass function multiple parameters

    I'm looking for a way to use the map function in a more custom way. If there is a different function for what I'm trying to achieve, please let me know.

    ;lets say i have addOneToEach function working as bellow
    
    (defn plusOne[singleInt]
       (+ 1 singleInt))
    
    (defn addOneToEach[intCollection] ;[1 2 3 4]
       (map plusOne intCollection))   ;=>(2 3 4 5)
    
    ;But in a case I would want to customly define how much to add 
    
    (defn plusX[singleInt x]
       (+ x singleInt))
    
    (defn addXToEach[intCollection x] ;[1 2 3 4]
       ;how do I use plusX here inside map function?
       (map (plusX  ?x?) intCollection))   ;=>((+ 1 x) (+ 2 x) (+ 3 x) (+ 4 x))
    

    I'm not looking for a function that adds x to each in the collection, but a way to pass extra arguments to the function that map is using.

    by Nabuska at August 01, 2015 09:45 PM

    Lobsters

    StackOverflow

    Spark forcing log4j

    I have a trivial Spark project in Scala and would like to use logback, but Spark/Hadoop seems to be forcing log4j on me.

    1. This seems at odds with my understanding of the purpose of slf4j; is it not an oversight in Spark/Hadoop?

    2. Do I have to give up on logback and use log4j, or is there a workaround?

    In build.sbt I tried exclusions ...

    "org.apache.spark" %% "spark-core" % "1.4.1" excludeAll(
        ExclusionRule(name = "log4j"),
        ExclusionRule(name = "slf4j-log4j12")
    ),
    "org.slf4j" % "slf4j-api" % "1.7.12",
    "ch.qos.logback" % "logback-core" % "1.1.3",
    "ch.qos.logback" % "logback-classic" % "1.1.3"
    

    ... but this results in an exception ...

    Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/log4j/Level
        at org.apache.hadoop.mapred.JobConf.<clinit>(JobConf.java:354)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:344)
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1659)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:91)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:55)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235)
        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214)
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:669)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:571)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2162)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2162)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2162)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:301)
        at spike.HelloSpark$.main(HelloSpark.scala:19)
        at spike.HelloSpark.main(HelloSpark.scala)
    Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Level
        at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 20 more
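
    A workaround sketch (an assumption on my part, not from the post): Hadoop's classes reference org.apache.log4j directly, so rather than excluding log4j entirely, exclude only the slf4j-log4j12 binding plus the real log4j artifact and add the log4j-over-slf4j bridge, which supplies org.apache.log4j.* classes backed by slf4j/logback. Versions below are illustrative:

    "org.apache.spark" %% "spark-core" % "1.4.1" excludeAll(
        ExclusionRule(organization = "log4j", name = "log4j"),
        ExclusionRule(organization = "org.slf4j", name = "slf4j-log4j12")
    ),
    "org.slf4j" % "log4j-over-slf4j" % "1.7.12",   // org.apache.log4j.* routed to slf4j
    "org.slf4j" % "slf4j-api" % "1.7.12",
    "ch.qos.logback" % "logback-classic" % "1.1.3"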
    

    by Pengin at August 01, 2015 09:19 PM

    Lobsters

    StackOverflow

    Slick 3 insert not inserting but no error

    I'm trying to get the hang of Slick by doing a small test. I'm trying to do an insert. The test runs with no errors, but when I check the db, no record has been inserted.

    What am I doing wrong?

    Here's my test code:

    Note: I disabled the first 'flatMap' because I wanted to test the second insert method, and that code was not executed when the first flatMap function was enabled.

    Both insert methods do not insert a new record. The first query for all items does work. The 'Test id:xx' lines are printed to console.

    object TestSlick extends App {
    
      import slick.driver.PostgresDriver.api._
      import concurrent.ExecutionContext.Implicits.global
      import concurrent.duration._
    
      val config = ConfigFactory.load()
      val username = config.getString("app.database.jdbc.username")
      val password = config.getString("app.database.jdbc.password")
      val url: String = config.getString("app.database.jdbc.url")
    
      val db = Database.forURL(url, username, password)
    
      try {
        import Tables._
    
        val res = db.run(headlines.result).map(_.foreach {
          case HeadLineRow(id, _, _, _, _, companyId, text, from, days, end, user) =>
            println(s"Test id:$id")
        }).flatMap { _ =>
    //      println("Inserting....")
    //      val ts = Timestamp.valueOf(LocalDateTime.now())
    //      val insertAction: DBIO[Option[Int]] = (headlines returning headlines.map(_.id)) +=
    //        HeadLineRow(None, 100, 100, "tekst", ts, 5, ts, None, None, None, None)
    //
    //      db.run(insertAction.transactionally.map(
    //        newId => println(s"New id: $newId"))
    //      )
    //    }.flatMap { _ =>
          println("Inserting....(2)")
          val ts = Timestamp.valueOf(LocalDateTime.now())
          val insertAction = headlines.map(p => p) += HeadLineRow(None, 1921, 65, "tekst2", ts, 5, ts, None, None, None, None)
    
          db.run(insertAction.transactionally.map(
            r => println(s"Insert result: ${r}"))
          )
        }
    
        Await.ready(res, 30 seconds);
    
      } finally db.close()
    }
    

    And my table (generated using Slick's generator and then adjusted a bit (auto-inc id, swapped some properties around))

    package com.wanneerwerkik.db.slick

    // AUTO-GENERATED Slick data model
    /** Stand-alone Slick data model for immediate use */
    object Tables extends {
      val profile = slick.driver.PostgresDriver
    } with Tables
    
    /** Slick data model trait for extension, choice of backend or usage in the cake pattern. (Make sure to initialize this late.) */
    trait Tables {
      val profile: slick.driver.JdbcProfile
      import profile.api._
      import slick.model.ForeignKeyAction
      import slick.collection.heterogeneous._
      import slick.collection.heterogeneous.syntax._
      // NOTE: GetResult mappers for plain SQL are only generated for tables where Slick knows how to map the types of all columns.
      import slick.jdbc.{GetResult => GR}
    
      /** DDL for all tables. Call .create to execute. */
      lazy val schema = Array(headlines.schema).reduceLeft(_ ++ _)
      @deprecated("Use .schema instead of .ddl", "3.0")
      def ddl = schema
    
      /**
       * Entity class storing rows of table 'head_line_bar'
       *  @param id Database column id SqlType(int4), PrimaryKey
       *  @param createdBy Database column created_by SqlType(int4), Default(None)
       *  @param createdOn Database column created_on SqlType(timestamp), Default(None)
       *  @param updatedBy Database column updated_by SqlType(int4), Default(None)
       *  @param updatedOn Database column updated_on SqlType(timestamp), Default(None)
       *  @param companyId Database column company_id SqlType(int4), Default(None)
       *  @param contentType Database column content_type SqlType(varchar), Length(255,true), Default(None)
       *  @param fromDate Database column from_date SqlType(timestamp), Default(None)
       *  @param numberofdays Database column numberofdays SqlType(int4), Default(None)
       *  @param uptoEndDate Database column upto_end_date SqlType(timestamp), Default(None)
       *  @param userId Database column user_id SqlType(int4), Default(None)
       */
      case class HeadLineRow(
          id: Option[Int],
          userId: Int,
          companyId: Int,
          contentType: String,
          fromDate: java.sql.Timestamp,
          numberofdays: Int,
          uptoEndDate: java.sql.Timestamp,
          createdBy: Option[Int] = None,
          createdOn: Option[java.sql.Timestamp] = None,
          updatedBy: Option[Int] = None,
          updatedOn: Option[java.sql.Timestamp] = None
      )
    
      /** GetResult implicit for fetching HeadLineBarRow objects using plain SQL queries */
      implicit def GetResultHeadLineRow(implicit e0: GR[Int], e1: GR[Option[Int]], e2: GR[Option[java.sql.Timestamp]], e3: GR[Option[String]]): GR[HeadLineRow] = GR{
        prs => import prs._
        HeadLineRow.tupled((<<?[Int], <<[Int], <<[Int], <<[String], <<[java.sql.Timestamp], <<[Int], <<[java.sql.Timestamp], <<?[Int], <<?[java.sql.Timestamp], <<?[Int], <<?[java.sql.Timestamp]))
      }
      /**
       * Table description of table head_line_bar.
       * Objects of this class serve as prototypes for rows in queries.
       */
      class Headlines(_tableTag: Tag) extends Table[HeadLineRow](_tableTag, "head_line_bar") {
        def * = (id, userId, companyId, contentType, fromDate, numberofdays, uptoEndDate, createdBy, createdOn, updatedBy, updatedOn) <> (HeadLineRow.tupled, HeadLineRow.unapply)
        /** Maps whole row to an option. Useful for outer joins. */
        def ? = (Rep.Some(id), userId, companyId, contentType, fromDate, numberofdays, uptoEndDate, createdBy, createdOn, updatedBy, updatedOn).shaped.<>({r=>import r._; _1.map(_=> HeadLineRow.tupled((_1.get, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11)))}, (_:Any) =>  throw new Exception("Inserting into ? projection not supported."))
    
        /** Database column id SqlType(int4), PrimaryKey */
        val id: Rep[Option[Int]] = column[Option[Int]]("id", O.PrimaryKey, O.AutoInc)
        /** Database column user_id SqlType(int4), Default(None) */
        val userId: Rep[Int] = column[Int]("user_id")
        /** Database column company_id SqlType(int4), Default(None) */
        val companyId: Rep[Int] = column[Int]("company_id")
        /** Database column content_type SqlType(varchar), Length(255,true), Default(None) */
        val contentType: Rep[String] = column[String]("content_type", O.Length(255,varying=true))
        /** Database column from_date SqlType(timestamp), Default(None) */
        val fromDate: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("from_date")
        /** Database column numberofdays SqlType(int4), Default(None) */
        val numberofdays: Rep[Int] = column[Int]("numberofdays")
        /** Database column upto_end_date SqlType(timestamp), Default(None) */
        val uptoEndDate: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("upto_end_date")
        /** Database column created_by SqlType(int4), Default(None) */
        val createdBy: Rep[Option[Int]] = column[Option[Int]]("created_by", O.Default(None))
        /** Database column created_on SqlType(timestamp), Default(None) */
        val createdOn: Rep[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("created_on", O.Default(None))
        /** Database column updated_by SqlType(int4), Default(None) */
        val updatedBy: Rep[Option[Int]] = column[Option[Int]]("updated_by", O.Default(None))
        /** Database column updated_on SqlType(timestamp), Default(None) */
        val updatedOn: Rep[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("updated_on", O.Default(None))
      }
      /** Collection-like TableQuery object for table HeadLineBar */
      lazy val headlines = new TableQuery(tag => new Headlines(tag))
    
    }
    

    Log output is too big to paste here so I put it in this gist.

    As suggested, I added a readLine to wait for the result, but it already output the same stuff. I also added a completion handler on the Future to print its Success or Failure. Apparently it fails with a RejectedExecutionException. Why?

    Failure: java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@2e4db0df rejected from java.util.concurrent.ThreadPoolExecutor@43760a50[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
    

    by Joost den Boer at August 01, 2015 09:17 PM

    CompsciOverflow

    "other" space on phone and computer [on hold]

    I'm constantly struggling with the storage on my Mac book air, every time I move things to my external harddrive it makes no difference at all because the space I freed is immidialaty taken over by "Others". I recently bought an SD memory card for my phone, everything was fine, then I put it in my computer to move the files to my external harddrive and when I put it back into my phone, same thing there. I'm beginning to think this is a virus since my SD card was fine until it came in contact with my computer. Please help.

    by Paxillus Molin at August 01, 2015 09:06 PM

    StackOverflow

    removing duplicate lines from file without storing file contents

    I am trying to remove duplicate lines from a text file using Scala, but the output file from my program ends up empty. If I simply print the unique lines (rather than writing them to a file), every line is printed. I am trying to avoid storing the contents of either file in memory because I expect each file to be rather large (10GB+) and I thought I might run out of memory.

    val oldFile="temptest.txt"//"steam_out_scala.txt"
    val noDupFile="nodup_steam_out.txt"
    
    import scala.io.Source
    import java.io.{FileReader, FileNotFoundException, IOException}
    import java.io.FileWriter;
    
    val fw = new FileWriter(noDupFile, true) 
    
    try {
        for (line <- Source.fromFile(oldFile).getLines()) {
            if(!lineExists(line))
            {
            println(line)
            fw.write(line+"\n")
            }
        }
    }catch {
      case ex: FileNotFoundException => println("Couldn't find that file.")
      case ex: IOException => println("Had an IOException trying to read that file")
    }
    
    def lineExists(test:String):Boolean={
    
        try{
        for (line <- Source.fromFile(noDupFile).getLines()) {
          if(line==test)
            return true
        }
        } catch {
          case ex: FileNotFoundException => println("2Couldn't find that file.")
          case ex: IOException => println("2Had an IOException trying to read that file")
        }
        return false
    }
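
    One likely culprit (my guess, not stated in the post): the FileWriter is buffered and is never flushed or closed, so nothing reaches noDupFile, and lineExists, which re-reads that same file, never sees the lines either. A minimal sketch of the fix under that assumption:

    val fw = new FileWriter(noDupFile, true)
    try {
      for (line <- Source.fromFile(oldFile).getLines()) {
        if (!lineExists(line)) {
          fw.write(line + "\n")
          fw.flush()                 // make the line visible to the next lineExists scan
        }
      }
    } finally {
      fw.close()                     // without closing, buffered output may never reach disk
    }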
    

    by Rilcon42 at August 01, 2015 08:41 PM

    Lobsters

    /r/clojure

    Planet Clojure

    Glow: Syntax Highlighting for Clojure Source Code

    Glow: Syntax Highlighting for Clojure Source Code

    Today I'm happy to announce the release of Glow, a very small library for syntax highlighting strings of Clojure source code.

    Motivation

    You may be thinking to yourself: why did you write this? After all, Ultra and Whidbey already provide syntax highlighting at the REPL, and you can pretty-print most EDN and Clojure objects using the underlying pretty-printing engines, Fipp and Puget.

    The short answer is that I really, really wanted a way to get a syntax-highlighted drop-in solution for clojure.repl/source which I could inject into Ultra. The current functionality (in ultra.repl) relies on the (somewhat questionable) use of read-string to accomplish syntax highlighting via Puget, which unfortunately also does macro expansion and hides metadata.

    Most of the time, macro expansion isn't a problem, but if I want to quickly look at the source code for something (e.g., a macro), I don't want to be looking at what could potentially be an extremely large macro-expanded form.

    Similarly, I don't want any attached metadata to be hidden from me either. I want to look at the code as it was written.

    So.

    Usage

    Let's say you've got a Clojure file with the following contents:

    (ns sample)
    
    (defn func
     [^Throwable x & y]
     (conj {} [:a (+ 1.1 x)]))
    
    (def variable
      @(future
         (if-let [x 5]
           true
           "false")))
    

    Usage is quite straightforward:

    Glow: Syntax Highlighting for Clojure Source Code

    Obviously I think the default colorscheme is great, but there's no accounting for taste. Glow supports whatever colorscheme you feel like:

    Glow: Syntax Highlighting for Clojure Source Code

    Go nuts.

    Contributions Wanted!

    I'm interested in making the regular expressions in Glow faster and more robust. Frankly, I'm not an expert in regexes (he said, putting it mildly), and some of them could use a more critical eye.

    To that end, I'm actively soliciting pull requests to expand the regular expression test coverage and to detect and resolve failure cases.


    ...and, that's it! Shoo, go have fun. As always, feel free to reach out at @venantius if you've got questions, comments, etc.

    ~ V

    by Venantius at August 01, 2015 08:10 PM

    StackOverflow

    List Buffer Prepend: error: value ++=: is not a member of Seq[Int]

    I'm getting an error that ListBuffer doesn't have the method ++=: for prepending, even though it's in the docs.

    scala> val lb = new ListBuffer[Int]
    lb: scala.collection.mutable.ListBuffer[Int] = ListBuffer()
    
    scala> lb ++= Seq(1,2,3)
    res20: lb.type = ListBuffer(1, 2, 3)
    
    scala> lb ++=: Seq(4,5)
    <console>:10: error: value ++=: is not a member of Seq[Int]
                  lb ++=: Seq(4,5)
    

    From doc:

    def ++=:(xs: TraversableOnce[A]): ListBuffer.this.type
    

    http://www.scala-lang.org/api/2.11.5/index.html#scala.collection.mutable.ListBuffer
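
    The error is an associativity issue rather than a missing method: because ++=: ends in a colon, the operator binds to its right-hand operand, so lb ++=: Seq(4,5) desugars to Seq(4,5).++=:(lb), and Seq has no such method. A minimal sketch of the prepend the docs describe:

    import scala.collection.mutable.ListBuffer

    val lb = ListBuffer(1, 2, 3)
    Seq(4, 5) ++=: lb      // desugars to lb.++=:(Seq(4, 5)); lb is now ListBuffer(4, 5, 1, 2, 3)
    lb.++=:(Seq(6, 7))     // explicit method-call syntax also prepends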

    by Sayat Stb at August 01, 2015 08:01 PM

    Spark on EMR and job (jar) submission from the master node

    So I am running (or trying to run) a compiled (fat jar) Spark/Scala program from the master node of an EMR cluster on AWS. I have compiled the jar in my dev environment with all the same dependencies as my prod environment, and I am deploying with the spark-submit script:

    SPARK_JAR=./spark/lib/spark-assembly-1.2.1-hadoop2.4.0.jar \
    ./spark-submit \
    --deploy-mode cluster \
    --verbose \
    --master yarn-cluster \
    --class sparkSQLProcessor \
    --driver-memory 1g \
    --executor-memory 1g \
    --executor-cores 1 \
    --num-executors 1 \
    /home/hadoop/Spark-SQL-Job.jar args1 args2
    

    The issue that I am running into is this configuration error (or so I assume):

    Exception in thread "main" java.io.FileNotFoundException: File file:/home/hadoop/.versions/spark-1.2.1.a/bin/spark/lib/spark-assembly-1.2.1-hadoop2.4.0.jar does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:516)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:729)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:506)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:407)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
        at org.apache.spark.deploy.yarn.ClientBase$class.copyFileToRemote(ClientBase.scala:102)
        at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:35)
        at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$3.apply(ClientBase.scala:182)
        at org.apache.spark.deploy.yarn.ClientBase$$anonfun$prepareLocalResources$3.apply(ClientBase.scala:176)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.deploy.yarn.ClientBase$class.prepareLocalResources(ClientBase.scala:176)
        at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:35)
        at org.apache.spark.deploy.yarn.ClientBase$class.createContainerLaunchContext(ClientBase.scala:308)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:35)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:80)
        at org.apache.spark.deploy.yarn.ClientBase$class.run(ClientBase.scala:501)
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:35)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:139)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    

    by Category_Theory at August 01, 2015 07:54 PM

    How to catch slick postgres exceptions for duplicate key value violations

    My table has a unique index on a pair of columns in my postgresql database.

    I want to know how I can catch a duplicate key exception when I am inserting:

    def save(user: User)(implicit session: Session): User = {
      val newId = (users returning users.map(_.id)) += user
      user.copy(id = newId)
    }
    

    My logs show this exception:

    Execution exception[[PSQLException: ERROR: duplicate key value violates unique constraint "...."
    

    I haven't really used exceptions much in Scala yet, either.
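
    One way to handle it, as a sketch rather than anything project-specific (it assumes the PostgreSQL JDBC driver's org.postgresql.util.PSQLException and the standard SQLSTATE 23505 for unique-constraint violations), is to wrap the insert and inspect the SQL state:

    import org.postgresql.util.PSQLException
    import scala.util.{Failure, Success, Try}

    def saveSafely(user: User)(implicit session: Session): Either[String, User] =
      Try(save(user)) match {
        case Success(saved) => Right(saved)
        case Failure(e: PSQLException) if e.getSQLState == "23505" =>
          Left("duplicate key")           // unique index violated
        case Failure(other) => throw other // anything else is unexpected
      }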

    by Blankman at August 01, 2015 07:29 PM

    Planet Emacsen

    Irreal: Come Over to the Dark Side

    and achieve great power.

    by jcs at August 01, 2015 07:27 PM


    Fefe

    Markus Kompa agrees with Don. He also believes that ...

    Markus Kompa agrees with Don. He also believes that they set the investigation in motion only pro forma, in order to unlock their entire arsenal of electronic warfare, from data retention (Vorratsdatenspeicherung) all the way to the federal Trojan (Bundestrojaner).

    August 01, 2015 07:01 PM

    StackOverflow

    Using streams/ functional programming for multiple arrays in Java 8

    I have 2 arrays, y & z. I want to return an integer array where result[i] = y[i] - z[i]

    Here's the code:

    static int[] join(int[] y, int[] z) {
        int[] result = new int[Math.min(y.length, z.length)];
        for(int i = 0; i < result.length; ++i) {
            result[i] = y[i] - z[i];
        }
        return result;
    }
    

    However, I want to do the same thing using Java 8's functional programming techniques such as streams, yet all the stream functions I'm aware of only work on one list at a time.

    How can I do this?

    Edit: Also, how can I do the same thing as mentioned above, but instead return a boolean array where result[i] = y[i] == 5 || z[i] == 10?

    by Yahya Uddin at August 01, 2015 06:41 PM


    TheoryOverflow

    Number of Automorphisms of a graph for graph isomorphism

    Let $G$ and $H$ be two $r$-regular connected graphs of size $n$. Let $A$ be the set of permutations $P$ such that $PGP^{-1}=H$. If $G=H$ then $A$ is the set of automorphisms of $G$.

    What is the best known upper bound on the size of $A$?
    Are there any results for particular graph classes (not containing complete/cycle graphs)?


    Note: Constructing the automorphism group is at least as difficult (in terms of its computational complexity) as solving the graph isomorphism problem. In fact, just counting the automorphisms is polynomial-time equivalent to graph isomorphism, c.f. R. Mathon, "A note on the graph isomorphism counting problem".

    by Jim at August 01, 2015 06:23 PM

    CompsciOverflow

    Algorithm to solve an optimization problem for a non-homogeneous Poisson process with unknown distribution

    Jobs arrive at an M/M/1-type server according to a non-homogeneous Poisson process with rate parameter $\lambda_k$, where $\lambda_k$ and $\mu_k$ denote the arrival rate and service rate in the $k_{th}$ interval, respectively. The range of $\lambda_k$ is given: $\lambda_{min}\leq\lambda_k\leq\lambda_{max}$. The arrival rate in the $k_{th}$ interval can be measured at the end of the interval (that is, at the beginning of the $(k+1)_{th}$ interval).

    The optimization problem is to find $\mu_{k+1}$ after each $k_{th}$ interval in order to

    minimize $J = \sum_{k=0}^{\infty}\frac{\mu_k}{\lambda_k}$.

    subject to $\mu_k \leq \mu_{max}$

    Consider that $\mu_{max}\geq \lambda_{max}$

    The distribution of $\lambda_k$ is not known. I don't know if there is any possible way to find the distribution. To maintain the stability of the queue, in each $k_{th}$ interval we need $\mu_k\geq \lambda_k$. How can this problem be solved with dynamic programming? Can anyone please provide an algorithm for this optimization problem?

    by precision at August 01, 2015 06:19 PM

    StackOverflow

    Using Slick with shapeless HList

    Slick's support for HList is generally a great thing. Unfortunately, it comes with its own implementation that does barely provide any useful operations. I'd therefore like to use the shapeless HList instead. This is supposed to be "trivial", but I have no idea how to get this right. Searching the web I found no evidence that somebody managed to accomplish this task.

    I assume that it's enough to implement a ProvenShape (as advertised here), but since I fail to understand the concept of Slick's (Proven)Shapes, I did not manage to implement this.

    I'm basically aiming to boil this

    class   Users( tag: Tag )
    extends Table[Long :: String :: HNil]( tag, "users" )
    {
        def id = column[Long]( "id", O.PrimaryKey, O.AutoInc )
    
        def email = column[String]( "email" )
    
        override def * = ( id, email ) <>[TableElementType, ( Long, String )](
            _.productElements,
            hlist => Some( hlist.tupled )
        )
    }
    

    down to

    class   Users( tag: Tag )
    extends Table[Long :: String :: HNil]( tag, "users" )
    {
        def id = column[Long]( "id", O.PrimaryKey, O.AutoInc )
    
        def email = column[String]( "email" )
    
        override def * = id :: email :: HNil
    }
    

    by Taig at August 01, 2015 05:57 PM

    Planet FreeBSD

    FreeBSD 10.2-RC2 Now Available

    The second RC build of the 10.2-RELEASE cycle is now available.

    Installation images are available for the amd64, i386, ia64, powerpc, powerpc64, and sparc64 architectures.

    FreeBSD/arm SD card images are available for the BEAGLEBONE, CUBOX-HUMMINGBOARD, GUMSTIX, RPI-B, PANDABOARD, and WANDBOARD kernels.

    FreeBSD 10.2-RC2 is also available on several third-party hosting providers.

    See the PGP-signed announcement email for installation image checksums and more information.

    by Glen Barber at August 01, 2015 05:57 PM


    QuantOverflow

    Rebalancing portfolio weights

    I have a matrix of returns and weights for every time period.

    returns<-rbind(c(-0.05,0.04,0.37),c(0.15,0.02,-0.07))
    weights<-rbind(c(0.5,0.1,0.4),c(0.4,0.2,0.4))
    

    I would like to rebalance the weights at the end of every time period:

    To do so I first calculate percent change every month in the returns:

    ones <- matrix(1,ncol=ncol(returns),nrow=nrow(returns))
    add <- returns
    percentChange <- ones+add
    

    then I calculate the total change:

    percentChangeSums <- rowSums(percentChange*weights) 
    

    then I calculate the weights after accounting for the returns:

    WeightsBefore <- weights * percentChange
    

    I calculate how much I should invest in the shares to have the original weights I wanted to maintain:

    ShareAfter <- percentChangeSums * weights 
    

    Just to check that I still have the original weights:

    WeightsAfter <- ShareAfter/percentChangeSums
    
    
    rebalanced.weights <- ShareAfter
    

    My goal is to do this without using any built-in functions (e.g. the ones in the PerformanceAnalytics package).

    The problem is that I get different results from the built-in function (Return.portfolio()).

    What am I missing?

    Update:

    Based on XY's comment I modified my code:

    return <- rbind(c(-5,4,37),c(15,2,-7))
    
    weights <- rbind(c(0.5,0.1,0.4), c(0.5,0.1,0.4),c(0.5,0.1,0.4))
    
    N=3
    add <- return/100
    ones <- matrix(1,ncol=N,nrow=nrow(return))
    ShareAfter <- matrix(NA,ncol=N,nrow=nrow(return))
    firstPeriodReturns <- weights[1,]*add[1,]
    percentChange <- ones+add
    
    
    WeightsBefore <- weights[1,] * percentChange[1,]
    percentChangeSums <- sum(WeightsBefore)
    
    ShareAfter[1,] <- percentChangeSums * weights[1,] 
    
    for (i in 2:nrow(return)){
    
      WeightsBefore <- ShareAfter[i-1,] * percentChange[i,]
      percentChangeSums <- sum(WeightsBefore)
    
      ShareAfter[i,] <- percentChangeSums * weights[i,]   
    
    }
    

    I still do not get the desired result. Can someone point me to what's missing?

    Please note again that I want to do this manually, not with a package.

    by hrt at August 01, 2015 05:44 PM

    StackOverflow

    Play Scala JSON: combine properties

    I have the following case class:

    case class User(name: String).

    I am trying to implement a JSON Reads converter for it, so I can do the following:

    val user = userJson.validate[User]

    … but the incoming JSON has slightly different structure:

    { "firstName": "Bob", "lastName": "Dylan" }.

    How can I implement my JSON Reads converter to combine the JSON fields firstName and lastName into a name property on my class?
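
    A minimal sketch using Play's functional combinators (assuming play-json 2.x, and assuming name should simply be the two fields joined with a space):

    import play.api.libs.json._
    import play.api.libs.functional.syntax._

    case class User(name: String)

    implicit val userReads: Reads[User] = (
      (__ \ "firstName").read[String] and
      (__ \ "lastName").read[String]
    )((first, last) => User(s"$first $last"))

    // Json.parse("""{ "firstName": "Bob", "lastName": "Dylan" }""").validate[User]
    // yields a JsSuccess containing User("Bob Dylan")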

    by Oliver Joseph Ash at August 01, 2015 05:41 PM

    /r/compsci

    Green Programmer Survey Report

    Dear programmers,

    To aid programmers in producing power-efficient applications, we need to understand what programmers generally know or do not know about software power consumption. By understanding the mental environment of programmers, we can better design tools and solutions that address their needs to reduce software power consumption.

    On 2013-09-04, we invited this group to participate in our Green Programmer Survey.

    Thank you all who contributed to the research.

    Through the survey we try to answer four research questions:

    [RQ1] Are programmers aware of software energy consumption?

    [RQ2] What do programmers know about reducing the energy consumption of software?

    [RQ3] What is the level of knowledge that programmers possess about energy consumption?

    [RQ4] What do programmers think causes spikes in software energy consumption?

    The research results will be published in IEEE Software in the near future.

    If you are interested, the raw data summary can be found online.

    A preprint version of the paper for peer review can be found in PeerJ.

    If you have subscription to IEEE Xplore, the IEEE early access version of What do programmers know about software energy consumption? is also available.

    Once again, thank you very much for your help.

    Best regards,

    Green Programming Team

    submitted by green_prgm
    [link] [18 comments]

    August 01, 2015 05:24 PM


    TheoryOverflow

    Parameterized complexity from P to NP-hard and back again

    I'm looking for examples of problems parametrized by a number $k \in \mathbb{N}$, where the problem's hardness is non-monotonic in $k$. Most problems (in my experience) have a single phase transition, for example $k$-SAT has a single phase transition from $k \in \{1,2\}$ (where the problem is in P) to $k \ge 3$ (where the problem is NP-complete). I'm interested in problems where there are phase transitions in both directions (from easy to hard and vice-versa) as $k$ increases.

    My question is somewhat similar to the one asked at Hardness Jumps in Computational Complexity, and in fact some of the responses there are relevant to my question.

    Examples I am aware of:

    1. $k$-colorability of planar graphs: In P except when $k=3$, where it is NP-complete.
    2. Steiner tree with $k$ terminals: In P when $k=2$ (collapses to shortest $s$-$t$ path) and when $k=n$ (collapses to MST), but NP-hard "in between". I don't know if these phase transitions are sharp (e.g., P for $k_0$ but NP-hard for $k_0+1$). Also the transitions of $k$ depend on the size of input instance, unlike my other examples.
    3. Counting satisfying assignments of a planar formula modulo $n$: In P when $n$ is a Mersenne prime number $n=2^k-1$, and #P-complete for most(?)/all other values of $n$ (from Aaron Sterling in this thread). Lots of phase transitions!
    4. Induced subgraph detection: The problem is not parametrized by an integer but a graph. There exist graphs $H_1 \subseteq H_2 \subseteq H_3$ (where $\subseteq$ denotes a certain kind of subgraph relation), for which determining whether $H_i \subseteq G$ for a given graph $G$ is in P for $i\in \{1,3\}$ but NP-complete for $i=2$. (from Hsien-Chih Chang in the same thread).

    by mikero at August 01, 2015 05:05 PM

    Lambda the Ultimate

    Running Probabilistic Programs Backwards

    I saw this work presented at ESOP 2015 by Neil Toronto, and the talk was excellent (slides).

    Running Probabilistic Programs Backwards
    Neil Toronto, Jay McCarthy, David Van Horn
    2015

    Many probabilistic programming languages allow programs to be run under constraints in order to carry out Bayesian inference. Running programs under constraints could enable other uses such as rare event simulation and probabilistic verification---except that all such probabilistic languages are necessarily limited because they are defined or implemented in terms of an impoverished theory of probability. Measure-theoretic probability provides a more general foundation, but its generality makes finding computational content difficult.

    We develop a measure-theoretic semantics for a first-order probabilistic language with recursion, which interprets programs as functions that compute preimages. Preimage functions are generally uncomputable, so we derive an abstract semantics. We implement the abstract semantics and use the implementation to carry out Bayesian inference, stochastic ray tracing (a rare event simulation), and probabilistic verification of floating-point error bounds.

    (also on SciRate)

    The introduction sells the practical side of the work a bit better than the abstract.

    Stochastic ray tracing [30] is one such rare-event simulation task. As illustrated in Fig. 1, to carry out stochastic ray tracing, a probabilistic program simulates a light source emitting a single photon in a random direction, which is reflected or absorbed when it hits a wall. The program outputs the photon’s path, which is constrained to pass through an aperture. Millions of paths that meet the constraint are sampled, then projected onto a simulated sensor array.

    The program’s main loop is a recursive function with two arguments: path, the photon’s path so far as a list of points, and dir, the photon’s current direction.

    simulate-photon path dir :=
      case (find-hit (fst path) dir) of
        absorb pt −→ (pt, path)
        reflect pt norm −→ simulate-photon (pt, path) (random-half-dir norm)
    

    Running simulate-photon (pt, ()) dir, where pt is the light source’s location and dir is a random emission direction, generates a photon path. The fst of the path (the last collision point) is constrained to be in the aperture. The remainder of the program is simple vector math that computes ray-plane intersections.

    In contrast, hand-coded stochastic ray tracers, written in general-purpose languages, are much more complex and divorced from the physical processes they simulate, because they must interleave the advanced Monte Carlo algorithms that ensure the aperture constraint is met.

    Unfortunately, while many probabilistic programming languages support random real numbers, none are capable of running a probabilistic program like simulate-photon under constraints to carry out stochastic ray tracing. The reason is not lack of engineering or weak algorithms, but is theoretical at its core: they are all either defined or implemented using [density functions]. [...] Programs whose outputs are deterministic functions of random values and programs with recursion generally cannot denote density functions. The program simulate-photon exhibits both characteristics.

    Measure-theoretic probability is a more powerful alternative to this naive probability theory based on probability mass and density functions. It not only subsumes naive probability theory, but is capable of defining any computable probability distribution, and many uncomputable distributions. But while even the earliest work [15] on probabilistic languages is measure-theoretic, the theory’s generality has historically made finding useful computational content difficult. We show that measure-theoretic probability can be made computational by

    1. Using measure-theoretic probability to define a compositional, denotational semantics that gives a valid denotation to every program.
    2. Deriving an abstract semantics, which allows computing answers to questions about probabilistic programs to arbitrary accuracy.
    3. Implementing the abstract semantics and efficiently solving problems.

    In fact, our primary implementation, Dr. Bayes, produced Fig. 1b by running a probabilistic program like simulate-photon under an aperture constraint.

    August 01, 2015 04:49 PM

    QuantOverflow

    How to calculate the JdK RS-Ratio

    Anyone have a clue how to calculate the JdK RS-Ratio?

    Let's say I want to compare the Relative strength for these:

    EWA iShares MSCI Australia Index Fund
    EWC iShares MSCI Canada Index Fund
    EWD iShares MSCI Sweden Index Fund
    EWG iShares MSCI Germany Index Fund
    EWH iShares MSCI Hong Kong Index Fund
    EWI iShares MSCI Italy Index Fund
    EWJ iShares MSCI Japan Index Fund
    EWK iShares MSCI Belgium Index Fund
    EWL iShares MSCI Switzerland Index Fund
    EWM iShares MSCI Malaysia Index Fund
    EWN iShares MSCI Netherlands Index Fund
    EWO iShares MSCI Austria Index Fund
    EWP iShares MSCI Spain Index Fund
    EWQ iShares MSCI France Index Fund
    EWS iShares MSCI Singapore Index Fund
    EWU iShares MSCI United Kingdom Index Fund
    EWW iShares MSCI Mexico Index Fund
    EWT iShares MSCI Taiwan Index Fund
    EWY iShares MSCI South Korea Index Fund
    EWZ iShares MSCI Brazil Index Fund
    EZA iShares MSCI South Africa Index Fund

    Each of them should be compared to the SP500 (SPY index). Calculate the relative strength of each of them to SPY and have it normalized (I think that is the only solution).

    More info on the concept. http://www.mta.org/eweb/docs/pdfs/11symp-dekempanaer.pdf


    by Donedge at August 01, 2015 04:46 PM

    Stochastic Calculus Rescale Exercise

    I have the following system of SDE's

    $ dA_t = \kappa_A(\bar{A}-A_t)dt + \sigma_A \sqrt{B_t}dW^A_t \\ dB_t = \kappa_B(\bar{B} - B_t)dt + \sigma_B \sqrt{B_t}dW^B_t $

    If $\sigma_B > \sigma_A$ I would consider the volatility $B_t$ to be more volatile than $A_t$ because

    $ d\langle A_\bullet\rangle_t = \sigma_A^2 B_t dt$ and $ d\langle B_\bullet\rangle_t = \sigma_B^2 B_t dt$

    Now, if I rescale the process $B$ by $\sigma_A^2$ and define $\sigma_A^2B =\tilde{B}$, I get an equivalent system of SDEs

    $ dA_t = \kappa_A(\bar{A}-A_t)dt + \sqrt{\tilde{B}_t}dW^A_t \\ d\tilde{B}_t = \kappa_B(\sigma_A^2\bar{B} - \tilde{B}_t)dt + \sigma_A\sigma_B \sqrt{\tilde{B}_t}dW^B_t $

    But now the claim "If $\sigma_B > \sigma_A$ I would consider the volatility $\tilde{B}_t$ to be more volatile than $A_t$" does not hold anymore. Consider $1>\sigma_B>\sigma_A$ and

    $ d\langle A_\bullet\rangle_t = \tilde{B}_t dt$ and $ d\langle \tilde{B}_\bullet\rangle_t = \sigma_A^2\sigma_B^2 \tilde{B}_t dt$.

    In this case the volatility $\tilde{B}$ of $A$ is more volatile than $A$ only if $\sigma_A^2\sigma_B^2>1$, which is completely different from the condition above ($\sigma_B > \sigma_A$).

    What went wrong? Is there some error in the rescaling?

    by Phun at August 01, 2015 04:39 PM

    TheoryOverflow

    Expressive power of computer languages: it's all about the syntax/logic?

    When analyzing the theoretical expressive power of programming languages (not the verbosity of the programming languages or how concise programs are), are there further criteria besides the class of formal grammar in Chomsky's hierarchy and the type of logic chosen (for example first/second/higher-order, type logic, or whatever other logic formalism)?

    Is there an analysis of the expressive power of data structures? Are there other constraints that limit the expressiveness of a computer language?

    It seems that all these formalisms deal only with how we combine ideas, but not how well we can express an idea using a programming language. For example, they don't enter into the question of the limitations of an imaginary programming language with only a binary and integer primitive type.

    by Denise Radi at August 01, 2015 04:36 PM

    QuantOverflow

    Expected Shortfall alternative formulation

    Define:

    $$q_\alpha(F_L)=F^{\leftarrow}(\alpha)=\inf\lbrace{x\in \mathbb{R}\mid F_L(x)\geq \alpha\rbrace}=VaR_\alpha(L)$$

    I want to prove that:

    $$ES_\alpha = \frac{1}{1-\alpha}\mathbb{E}[\mathbb{1}_{\lbrace{ L\geq q_\alpha(L)\rbrace}}\cdot L] \overset{!!!}{=}\mathbb{E}[L\mid L\geq q_\alpha(L)] $$

    I get stuck as:

    $$\mathbb{E}[\mathbb{1}_{\lbrace{ L\geq q_\alpha(L)\rbrace}}\cdot L]= \mathbb{E}[\mathbb{E}[\mathbb{1}_{\lbrace{ L\geq q_\alpha(L)\rbrace}}\cdot L\mid L\geq q_\alpha(L)]] = \mathbb{E}[\mathbb{1}_{\lbrace{ L\geq q_\alpha(L)\rbrace}}\cdot\mathbb{E}[L\mid L\geq q_\alpha(L)]\ ]$$

    Now I would like to use that $\Pr(L\geq q_\alpha(L) \ )=1-\alpha$, but I don't know how to proceed.
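
    For reference, one standard route (assuming $F_L$ is continuous at $q_\alpha(L)$, so that $\Pr(L\geq q_\alpha(L))=1-\alpha$) is to apply the definition of conditional expectation given an event, rather than the tower property:

    $$\mathbb{E}[\mathbb{1}_{\lbrace L\geq q_\alpha(L)\rbrace}\cdot L] = \mathbb{E}[L\mid L\geq q_\alpha(L)]\cdot\Pr(L\geq q_\alpha(L)) = (1-\alpha)\,\mathbb{E}[L\mid L\geq q_\alpha(L)],$$

    and dividing by $1-\alpha$ gives the claimed identity.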

    by mastro at August 01, 2015 04:33 PM

    StackOverflow

    Play Scala JSON Reads converter: mapping nested properties

    I have the following case class:

    case class User(name: String, age: String)
    

    I am trying to implement a JSON Reads converter for it, so I can do the following:

    val user = userJson.validate[User]
    

    … but the incoming JSON has slightly different structure:

    { "age": "12", "details": { "name": "Bob" } }
    

    How can I implement my JSON Reads converter?
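
    A minimal sketch using JSON path combinators (assuming play-json 2.x):

    import play.api.libs.json._
    import play.api.libs.functional.syntax._

    case class User(name: String, age: String)

    implicit val userReads: Reads[User] = (
      (__ \ "details" \ "name").read[String] and
      (__ \ "age").read[String]
    )(User.apply _)

    // Json.parse("""{ "age": "12", "details": { "name": "Bob" } }""").validate[User]
    // yields a JsSuccess containing User("Bob", "12")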

    by Oliver Joseph Ash at August 01, 2015 04:30 PM

    TheoryOverflow

    Graph classes with a "jump property"

    Let us say that a graph class has the jump property, if it either contains all $n$-vertex graphs for every large enough $n$, or else the fraction of $n$-vertex graphs that belong to the class approaches 0, as $n\rightarrow\infty$.

    The answers to an earlier question (see here) show that every hereditary graph class has this jump property.

    Question: What are some other (nontrivial) examples of graph classes with this property?

    Note: It would be good to see natural examples, that is, classes that arise from independent considerations, not created just for the sake of the jump property.

    by Andras Farago at August 01, 2015 04:27 PM

    StackOverflow

    Java SSL: Invalid service principal name

    On my game's Java server I ran 'sudo yum update' and now I am getting the following error when trying to connect via my game client:

    [2015-07-26 01:58:12] 3837 [Thread-2] INFO com.jayavon.game.client.KisnardOnline - Socket class: class sun.security.ssl.SSLSocketImpl
    [2015-07-26 01:58:12] 3837 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Remote address = /54.165.60.189
    [2015-07-26 01:58:12] 3837 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Remote port = 34215
    [2015-07-26 01:58:12] 3837 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Local socket address = /192.168.1.4:59805
    [2015-07-26 01:58:12] 3837 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Local address = /192.168.1.4
    [2015-07-26 01:58:12] 3837 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Local port = 59805
    [2015-07-26 01:58:12] 3837 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Need client authentication = false
    [2015-07-26 01:58:17] 9260 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Cipher suite = SSL_NULL_WITH_NULL_NULL
    [2015-07-26 01:58:17] 9260 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Protocol = NONE
    [2015-07-26 01:58:17] 9260 [Thread-2] FATAL com.jayavon.game.client.an - (SSLSocket) factory.createSocket
    javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLException: java.io.IOException: Invalid service principal name: host/54.165.60.189
        at sun.security.ssl.SSLSocketImpl.checkEOF(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.checkWrite(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
        at com.jayavon.game.client.an.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
    Caused by: javax.net.ssl.SSLException: java.io.IOException: Invalid service principal name: host/54.165.60.189
        at sun.security.ssl.Alerts.getSSLException(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.fatal(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.fatal(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.handleException(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.getSession(Unknown Source)
        at com.jayavon.game.client.KisnardOnline.a(Unknown Source)
        ... 2 more
    Caused by: java.io.IOException: Invalid service principal name: host/54.165.60.189
        at sun.security.ssl.krb5.KerberosClientKeyExchangeImpl.getServiceTicket(Unknown Source)
        at sun.security.ssl.krb5.KerberosClientKeyExchangeImpl.init(Unknown Source)
        at sun.security.ssl.KerberosClientKeyExchange.init(Unknown Source)
        at sun.security.ssl.KerberosClientKeyExchange.<init>(Unknown Source)
        at sun.security.ssl.ClientHandshaker.serverHelloDone(Unknown Source)
        at sun.security.ssl.ClientHandshaker.processMessage(Unknown Source)
        at sun.security.ssl.Handshaker.processLoop(Unknown Source)
        at sun.security.ssl.Handshaker.process_record(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
        at sun.security.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
        ... 5 more
    Caused by: KrbException: KrbException: Cannot locate default realm
        at sun.security.krb5.Realm.getDefault(Unknown Source)
        at sun.security.krb5.PrincipalName.<init>(Unknown Source)
        at sun.security.krb5.PrincipalName.<init>(Unknown Source)
        ... 15 more
    Caused by: KrbException: Cannot locate default realm
        at sun.security.krb5.Config.getDefaultRealm(Unknown Source)
        ... 18 more
    Caused by: KrbException: Generic error (description in e-text) (60) - Unable to locate Kerberos realm
        at sun.security.krb5.Config.getRealmFromDNS(Unknown Source)
        ... 19 more
    

    5 days ago this is what I saw when connecting to my game server from my client:

    [2015-07-21 00:07:34] 11102 [Thread-2] INFO com.jayavon.game.client.KisnardOnline - Socket class: class sun.security.ssl.SSLSocketImpl
    [2015-07-21 00:07:34] 11102 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Remote address = /54.165.60.189
    [2015-07-21 00:07:34] 11102 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Remote port = 34215
    [2015-07-21 00:07:34] 11102 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Local socket address = /192.168.1.4:61480
    [2015-07-21 00:07:34] 11102 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Local address = /192.168.1.4
    [2015-07-21 00:07:34] 11102 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Local port = 61480
    [2015-07-21 00:07:34] 11102 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Need client authentication = false
    [2015-07-21 00:07:34] 11289 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Cipher suite = TLS_DH_anon_WITH_AES_128_CBC_SHA256
    [2015-07-21 00:07:34] 11289 [Thread-2] INFO com.jayavon.game.client.KisnardOnline -    Protocol = TLSv1.2
    

    I thought it was that my keystore.jks file's certificate had expired, but I even tried updating it with the certificate I had just renewed with StartSSL, to no avail. I think an update may have screwed something up and I have no idea what to do. Any help would be so appreciated. I tried to Google around, but did not see anything.

    Ideally I would like to fix this (so I can continue to update my EC2 server). Is there a way to roll back the whole batch of yum updates?

    EDIT

    I found the list of my last updates with the following command: rpm -qa --last

    kernel-3.14.48-33.39.amzn1.x86_64             Sun 26 Jul 2015 12:23:21 AM EDT
    tcpdump-4.0.0-3.20090921gitdf3cb4.2.10.amzn1.x86_64 Sun 26 Jul 2015 12:23:19 AM EDT
    nss-tools-3.19.1-3.71.amzn1.x86_64            Sun 26 Jul 2015 12:23:19 AM EDT
    aws-cli-1.7.38-1.18.amzn1.noarch              Sun 26 Jul 2015 12:23:19 AM EDT
    authconfig-6.1.12-23.25.amzn1.x86_64          Sun 26 Jul 2015 12:23:19 AM EDT
    krb5-workstation-1.12.2-14.43.amzn1.x86_64    Sun 26 Jul 2015 12:23:18 AM EDT
    java-1.7.0-openjdk-1.7.0.85-2.6.1.3.61.amzn1.x86_64 Sun 26 Jul 2015 12:23:17 AM EDT
    python26-botocore-1.1.1-1.16.amzn1.noarch     Sun 26 Jul 2015 12:23:14 AM EDT
    openssh-server-6.2p2-8.44.amzn1.x86_64        Sun 26 Jul 2015 12:23:14 AM EDT
    openssh-clients-6.2p2-8.44.amzn1.x86_64       Sun 26 Jul 2015 12:23:14 AM EDT
    glibc-devel-2.17-55.143.amzn1.x86_64          Sun 26 Jul 2015 12:23:14 AM EDT
    curl-7.40.0-3.52.amzn1.x86_64                 Sun 26 Jul 2015 12:23:14 AM EDT
    bind-utils-9.8.2-0.30.rc1.37.amzn1.x86_64     Sun 26 Jul 2015 12:23:14 AM EDT
    python26-jmespath-0.7.1-1.9.amzn1.noarch      Sun 26 Jul 2015 12:23:13 AM EDT
    kernel-headers-3.14.48-33.39.amzn1.x86_64     Sun 26 Jul 2015 12:23:13 AM EDT
    glibc-headers-2.17-55.143.amzn1.x86_64        Sun 26 Jul 2015 12:23:13 AM EDT
    openssl-1.0.1k-10.87.amzn1.x86_64             Sun 26 Jul 2015 12:23:12 AM EDT
    openssh-6.2p2-8.44.amzn1.x86_64               Sun 26 Jul 2015 12:23:12 AM EDT
    libverto-0.2.5-4.9.amzn1.x86_64               Sun 26 Jul 2015 12:23:12 AM EDT
    libcurl-7.40.0-3.52.amzn1.x86_64              Sun 26 Jul 2015 12:23:12 AM EDT
    krb5-libs-1.12.2-14.43.amzn1.x86_64           Sun 26 Jul 2015 12:23:12 AM EDT
    bind-libs-9.8.2-0.30.rc1.37.amzn1.x86_64      Sun 26 Jul 2015 12:23:12 AM EDT
    python27-jmespath-0.7.1-1.9.amzn1.noarch      Sun 26 Jul 2015 12:23:11 AM EDT
    python27-botocore-1.1.1-1.16.amzn1.noarch     Sun 26 Jul 2015 12:23:11 AM EDT
    nss-sysinit-3.19.1-3.71.amzn1.x86_64          Sun 26 Jul 2015 12:23:11 AM EDT
    nss-softokn-3.16.2.3-9.36.amzn1.x86_64        Sun 26 Jul 2015 12:23:11 AM EDT
    nss-3.19.1-3.71.amzn1.x86_64                  Sun 26 Jul 2015 12:23:11 AM EDT
    nss-util-3.19.1-1.41.amzn1.x86_64             Sun 26 Jul 2015 12:23:10 AM EDT
    nspr-4.10.8-1.33.amzn1.x86_64                 Sun 26 Jul 2015 12:23:10 AM EDT
    nss-softokn-freebl-3.16.2.3-9.36.amzn1.x86_64 Sun 26 Jul 2015 12:23:07 AM EDT
    glibc-2.17-55.143.amzn1.x86_64                Sun 26 Jul 2015 12:23:07 AM EDT
    glibc-common-2.17-55.143.amzn1.x86_64         Sun 26 Jul 2015 12:23:01 AM EDT
    

    by KisnardOnline at August 01, 2015 04:26 PM

    /r/emacs

    evil-mode Trial 2 Day 1

    Somehow the feel and responsiveness of emacs is better than my Frankenstein vim install. evil-matchit seems to work much better with HTML and JavaScript. I like all the plugin options and enjoy Lisp, so the idea of using emacs is something I strive for. I tried to go full evil about a year ago, but ran into some deal breakers, most of which seem fixed now.

    I've got a few issues with my configuration at the moment and was wondering if people can help. For reference, I'm using emacs in a terminal.

    Tabs as spaces

    I use 2-space indentation with spaces instead of tabs in vim[1]. I'm having trouble figuring out how to set that in .emacs.

    Pasting

    In vim copying into register "+ will copy into the clipboard. This doesn't appear to work in emacs.

    Fuzzy matching file names.

    Using helm it seemed to not recognize directories deeper than current. Using find files it was only recognizing files in the current directory. Command-P allows me to see files anywhere below the current working directory.

    File explorer ala NERDTree

    Anyway to see the whole project structure and open files?

    Tabs

    I use tabs a lot in vim. Typically I have one tab with a bunch of split windows to work on one related area of code. How do people handle this the emacs way? Or is there a way to get tabs? Using tabs I bind <Leader> + h to tab left and <Leader> + l to tab right. This allows quick navigation.

    Tmux + panes

    I also use tmux so I can run a repl/server in one pane and do my development in a larger pane. I have interop set up between vim and tmux, so navigating down from one pane with C-w j will jump to the vim window below, or, if no window is present, will jump to the tmux window below. I changed the line in my tmux.conf to work with vim, but it doesn't appear to work with emacs.[2] I believe this is due to tmux send-keys returning a zero error code when I hit C-j.

    REPL

    Speaking of REPLs, I see emacs developers using the repl right in the editor, any good ones? Good terminal emulators? Should I be using a GUI emacs instead of in the terminal with the servers running in a buffer in emacs? I do primarily Node development, with some python, Java and Clojure sprinkled on top.

    Shell in emacs

    M-x shell doesn't seem to pick up my normal bash profile nor is it aware of any of the environment. Found the answer to this... It's only for the GUI mode: http://stackoverflow.com/questions/13671839/cant-launch-lein-repl-in-emacs

    If I should use the GUI, which one?

    How do you stop it from writing autosave files?

    [1] .vimrc

    set expandtab
    set tabstop=2
    set shiftwidth=2
    set softtabstop=2

    [2].tmux.conf

    bind -n C-j run "(tmux display-message -p '#{pane_current_command}' | grep -iqE '(^|\/)emacs-24.5$' && tmux send-keys C-j) || tmux select-pane -D" 
    submitted by base698
    [link] [12 comments]

    August 01, 2015 04:19 PM

    CompsciOverflow

    How to make full width website? [on hold]

    I have few questions.

    1. I am creating my own website by hand-coding HTML, CSS, JavaScript, etc., but I don't know how to make it fit the screen in all web browsers. For example, I was testing (i.e., opening the website) in Google Chrome and it looked good.

    <head>
        <title>Hi Website</title>
        <style>
            #inner_layout_box {
                width: 1200px;
                height: 1300px;
                margin-left: 350px;
                background-color: #F7F7F7;
            }

            #header {
                background-color: #000000;
                height: 100px;
            }
    
            #logo_div img {
                float: left;
                padding-left: 25px;
            }
    
            #menu {
                float: right;
                padding-right: 25px;
                padding-top: 15px;
                color: #white;
                height: 45px;
            }
            #menu ul {
                list-style-type: none;
            }
            #menu ul li {
                display: inline;
                font-size: 22px;
            }
            #menu ul li a {
                text-decoration: none;
                padding: 10px;
            }
    
            #image_div {
                padding-top: 5px;
                clear: left;
                height: 600px;
            }
    
            #content {
                height: 600px;
                text-align: center;
            }
            #content_head {
                padding-top: 40px;
            }
        </style>
    </head>
    
    <body>
        <div id="full_layout_box">
            <div id="inner_layout_box">
                <div id="header">
                    <div id="logo_div">
                        <img src="logo.jpg"/>
                    </div>
                    <div id="menu">
                        <ul style="list-style-type: none;">
                            <li><a href="www.google.com">Home</a></li>
                            <li><a href="www.google.com">About Us</a></li>
                            <li><a href="www.google.com">Project</a></li>
                            <li><a href="www.google.com">Schedule</a></li>
                            <li><a href="www.google.com">Contact Us</a></li>
                        </ul>
                    </div>
                    <div id="image_div">
                        <img src="background.jpg" />
                    </div>
                    <div id="content">
                        <p id="content_head" style="font-size:40px"><b>Welcome To My Website.</b></p>
                        <p style="font-size: 18px">Check out my awesome file and here is the option that you can choose.</p>
                    </div>
    
                </div>
            </div>
        </div>
    </body>
    

    However, when I open the same HTML source file in Firefox, the outermost div box is pushed to the right too much and it does not look very good. How do I fit my website to all browsers? Should I use % values for the divs to fit all browsers?

    2. I am new to WordPress. When I bought a theme, there were a few things that I wanted to add and a few things I did not like that I wanted to edit. For example, this theme has a tab element, but there is no option to center the tab title. Another example: one of my clients asked me to put a search logo and an RSS logo in the top right corner of the webpage, but in our theme the search logo comes with a search box, which my client does not want; he only wants the search logo. In these cases, I think I need to fix things in the CSS/PHP files? This is something I am panicking about. Can anyone good at WordPress give me some tips on how I can edit the theme when a similar situation comes up?

    Thanks,

    by Sung at August 01, 2015 04:17 PM

    StackOverflow

    SBT Assembly - Deduplicate error & Exclude error

    Hey guys, I am trying to build a JAR with dependencies using sbt-assembly, but I am running into this error again and again. I have tried multiple different things but I keep ending up here. I am pretty new to SBT and wanted to get some help on this one. Here are the build.sbt & assembly.sbt files.

    build.sbt

    seq(assemblySettings: _*)
    
    name := "StreamTest"
    
    version := "1.0"
    
    scalaVersion := "2.10.4"
    
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
    
    libraryDependencies += "org.apache.spark" % "spark-streaming-kinesis-asl_2.10" % "1.1.0"
    

    project/assembly.sbt

    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")
    

    When I run the sbt assembly command, I am getting this error below.

    [info] Including: joda-time-2.5.jar
    [info] Checking every *.class/*.jar file's SHA-1.
    [info] Merging files...
    [warn] Merging 'about_files/LICENSE.txt' with strategy 'rename'
    [warn] Merging 'about_files/NOTICE.txt' with strategy 'rename'
    [warn] Merging 'META-INF/NOTICE.txt' with strategy 'rename'
    [warn] Merging 'META-INF/NOTICE' with strategy 'rename'
    [warn] Merging 'org/xerial/snappy/native/README' with strategy 'rename'
    [warn] Merging 'META-INF/maven/org.xerial.snappy/snappy-java/LICENSE' with strategy 'rename'
    [warn] Merging 'META-INF/license' with strategy 'rename'
    [warn] Merging 'about.html' with strategy 'rename'
    [warn] Merging 'META-INF/LICENSE.txt' with strategy 'rename'
    [warn] Merging 'META-INF/README.txt' with strategy 'rename'
    [warn] Merging 'LICENSE.txt' with strategy 'rename'
    [warn] Merging 'META-INF/LICENSE' with strategy 'rename'
    [warn] Merging 'META-INF/DEPENDENCIES' with strategy 'discard'
    java.lang.RuntimeException: deduplicate: different file contents found in the following:
    /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.transaction/orbits/javax.transaction-1.1.1.v201105210645.jar:META-INF/ECLIPSEF.RSA
    /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.servlet/orbits/javax.servlet-3.0.0.v201112011016.jar:META-INF/ECLIPSEF.RSA
    /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.mail.glassfish/orbits/javax.mail.glassfish-1.4.1.v201005082020.jar:META-INF/ECLIPSEF.RSA
    /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.activation/orbits/javax.activation-1.1.0.v201105071233.jar:META-INF/ECLIPSEF.RSA
        at sbtassembly.Plugin$Assembly$.sbtassembly$Plugin$Assembly$$applyStrategy$1(Plugin.scala:253)
        at sbtassembly.Plugin$Assembly$$anonfun$15.apply(Plugin.scala:270)
        at sbtassembly.Plugin$Assembly$$anonfun$15.apply(Plugin.scala:267)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
        at sbtassembly.Plugin$Assembly$.applyStrategies(Plugin.scala:272)
        at sbtassembly.Plugin$Assembly$.x$4$lzycompute$1(Plugin.scala:172)
        at sbtassembly.Plugin$Assembly$.x$4$1(Plugin.scala:170)
        at sbtassembly.Plugin$Assembly$.stratMapping$lzycompute$1(Plugin.scala:170)
        at sbtassembly.Plugin$Assembly$.stratMapping$1(Plugin.scala:170)
        at sbtassembly.Plugin$Assembly$.inputs$lzycompute$1(Plugin.scala:214)
        at sbtassembly.Plugin$Assembly$.inputs$1(Plugin.scala:204)
        at sbtassembly.Plugin$Assembly$.apply(Plugin.scala:230)
        at sbtassembly.Plugin$Assembly$$anonfun$assemblyTask$1.apply(Plugin.scala:373)
        at sbtassembly.Plugin$Assembly$$anonfun$assemblyTask$1.apply(Plugin.scala:370)
        at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
        at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
        at sbt.std.Transform$$anon$4.work(System.scala:63)
        at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
        at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
        at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
        at sbt.Execute.work(Execute.scala:235)
        at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
        at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
        at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
        at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    [error] (*:assembly) deduplicate: different file contents found in the following:
    [error] /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.transaction/orbits/javax.transaction-1.1.1.v201105210645.jar:META-INF/ECLIPSEF.RSA
    [error] /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.servlet/orbits/javax.servlet-3.0.0.v201112011016.jar:META-INF/ECLIPSEF.RSA
    [error] /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.mail.glassfish/orbits/javax.mail.glassfish-1.4.1.v201005082020.jar:META-INF/ECLIPSEF.RSA
    [error] /Users/user/.ivy2/cache/org.eclipse.jetty.orbit/javax.activation/orbits/javax.activation-1.1.0.v201105071233.jar:META-INF/ECLIPSEF.RSA
    [error] Total time: 23 s, completed Nov 28, 2014 9:32:53 PM
    

    sbt-version

    0.13.6
    

    EDIT

    Now, after looking around, I have made another change to exclude the conflicting dependencies, following another question on Stack Overflow.

    Updated build.sbt

    seq(assemblySettings: _*)
    
    name := "StreamTest"
    
    version := "1.0"
    
    scalaVersion := "2.10.4"
    
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
    
    libraryDependencies += "org.apache.spark" % "spark-streaming-kinesis-asl_2.10" % "1.1.0"
    
    libraryDependencies ++= Seq(
        exclude("org.eclipse.jetty.orbit", "javax.servlet").
        exclude("org.eclipse.jetty.orbit", "javax.transaction").
        exclude("org.eclipse.jetty.orbit", "javax.mail").
        exclude("org.eclipse.jetty.orbit", "javax.activation").
        exclude("commons-beanutils", "commons-beanutils-core").
        exclude("commons-collections", "commons-collections").
        exclude("commons-collections", "commons-collections").
        exclude("com.esotericsoftware.minlog", "minlog")
    )
    

    When I run the assembly command again, this is the error that I get.

    build.sbt:14: error: not found: value exclude
        exclude("org.eclipse.jetty.orbit", "javax.servlet").
        ^
    

    EDIT 2:

    Updated build.sbt

    import AssemblyKeys._
    
    seq(assemblySettings: _*)
    
    name := "SparkStreamingKinesis"
    
    version := "1.0"
    
    scalaVersion := "2.10.4"
    
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
    
    libraryDependencies += "org.apache.spark" % "spark-streaming-kinesis-asl_2.10" % "1.1.0"
    
    assemblyMergeStrategy in assembly := {
        case PathList(ps @ _*) if ps.last endsWith ".RSA" => MergeStrategy.first
        case x =>
           val oldStrategy = (assemblyMergeStrategy in assembly).value
           oldStrategy(x)
    }
    

    After making the update, here is the error that I am getting. I tried to do an import of assemblyMergeStrategy, but it doesn't look like a class that I can import.

    build.sbt:21: error: not found: value assemblyMergeStrategy
    assemblyMergeStrategy in assembly := {
    ^
    [error] Type error in expression
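
    For what it's worth, here is a possible sketch for the 0.11.x plugin pinned in project/assembly.sbt, under the assumption that this version exposes the mergeStrategy key (via the AssemblyKeys import already present) rather than the newer assemblyMergeStrategy key:

    // sbt-assembly 0.11.x style: wrap the old strategy and override only .RSA entries
    mergeStrategy in assembly <<= (mergeStrategy in assembly) { old =>
      {
        case PathList(ps @ _*) if ps.last endsWith ".RSA" => MergeStrategy.first
        case x => old(x)
      }
    }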
    

    by macha at August 01, 2015 04:08 PM

    importing avro schema in Scala

    I am writing a simple twitter program, where I am reading Tweets using Kafka and want to use Avro for serialization. So far I have just set up twitter configuration in Scala and now want to read tweets using this config.

    How do I import the following avro schema as defined in the file tweets.avsc in my program?

    {
        "namespace": "tweetavro",
        "type": "record",
        "name": "Tweet",
        "fields": [
            {"name": "name", "type": "string"},
            {"name": "text", "type": "string"}
        ]
    }
    

    I followed some examples on the web which show something like import tweetavro.Tweet to import the schema in Scala so that we can use it like

    def main (args: Array[String]) {
        val twitterStream = TwitterStream.getStream
        twitterStream.addListener(new OnTweetPosted(s => sendToKafka(toTweet(s))))
        twitterStream.filter(filterUsOnly)
      }
    
      private def toTweet(s: Status): Tweet = {
        new Tweet(s.getUser.getName, s.getText)
      }
    
      private def sendToKafka(t:Tweet) {
        println(toJson(t.getSchema).apply(t))
        val tweetEnc = toBinary[Tweet].apply(t)
        val msg = new KeyedMessage[String, Array[Byte]](KafkaTopic, tweetEnc)
        kafkaProducer.send(msg)
      }
    

    I am following the same approach and using the following plugins in pom.xml

    <!-- AVRO MAVEN PLUGIN -->
    <plugin>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro-maven-plugin</artifactId>
      <version>1.7.7</version>
      <executions>
        <execution>
          <phase>generate-sources</phase>
          <goals>
            <goal>schema</goal>
          </goals>
          <configuration>
            <sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory>
            <outputDirectory>${project.basedir}/src/main/scala/</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
    
    
    <!-- MAVEN COMPILER PLUGIN -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
    

    After doing all this, I still cannot do import tweetavro.Tweet.

    Can anyone please help?

    Thanks!

    by PRP at August 01, 2015 04:07 PM


    Dave Winer

    Braintrust: Desktop for Ubuntu?

    A question for the Scripting News braintrust...

    First, remember that I am working on the new EC2 for Poets. I am talking about software that will be pre-installed on a Ubuntu-based AMI that's designed for people who use Macs and PCs, so they can run apps in the cloud.

    I make several such apps, for example River4, PagePark, Noderunner and the amazingly useful and adaptable nodeStorage.

    The best desktop for Ubuntu?

    Now that the standard recital is out of the way...

    I'm thinking about including a desktop interface, so that people who like Finder-like graphic filesystem browsers (such as myself, for example) in addition to command-line interfaces (for the discerning server connoisseur) will be happy using their cloud-based app server.

    Which one?

    There seem to be three choices.

    1. Gnome.

    2. Unity.

    3. Lubuntu.

    I have no preference, and no basis to make a choice.

    Please read before commenting

    I know a lot of people will say "Don't do it," so you can skip that. I want to understand what the choices are if I choose to include a desktop in the AMI. I expect a few people won't read this and will write long missives about why this is Not A Good Idea. To them I say zzzz, in advance.

    Dan MacTough's howto

    In January 2014 I was emailing with Dan MacTough on this question, and he wrote a howto for installing VNC on a server and connecting to it from the Mac desktop.

    August 01, 2015 03:43 PM

    infra-talk

    6 Months of Avoiding Rails Controllers with DDC

    My distaste for unnecessary Rails controllers is no secret. That’s why I wrote the ddc gem (Data Driven Controllers).

    When DDC was released, it got mixed reviews in the comments section, so I thought I’d post a follow up with my results thus far.

    Example

    
    DDC::ControllerBuilder.build :monkeys, 
      actions: {
        show: {
          context: 'context_builder#user_and_id',
          service: 'monkey_service#find'
        },
        index: {
          context: 'context_builder#user_and_id',
          service: 'monkey_service#find_all'
        },
        update: {
          context: 'context_builder#monkey',
          service: 'monkey_service#update'
        },
        create: {
          context: 'context_builder#monkey',
          service: 'monkey_service#create'
        }
      }
    

    Review

    My current project’s Rails API has about 20 controllers, almost all of which are mere data describing how to glue the Rails pieces together. The overall approach has held up well. With decent naming conventions and DDC, I barely ever have to think about Rails controllers. When I did need a custom bit of something, DDC let me define the methods that actually required some thought. So far, working with DDC has been great: easy to use with very little mental overhead.

    New Features

    There have been a few features added along the way to round out the DDC API:

    1. :render_opts can now be specified. These options let you control which serializer will be used to render domain specific content to JSON. I actually haven’t needed this feature, but other projects using DDC have.
    2. :context will now take an optional array of function strings. The functions will be called in order, each being passed the previous function’s results and outputting its own. (Like building an onion from the inside-out.)

    DDC has been a great tool in my tool belt. I’d really like to see Rails find a clean way to make controllers optional the way Ember.js does.

    I’m curious what others are doing to streamline the building of Rails controllers and Rails sites in general. Leave a comment below.

    The post 6 Months of Avoiding Rails Controllers with DDC appeared first on Atomic Spin.

    by Shawn Anderson at August 01, 2015 03:00 PM

    TheoryOverflow

    Complexity of non-comparison based sorting $O(n\lceil(\log k/ \log n) \rceil)$

    I want to ask whether our new algorithm, which in some respects is better than counting, bucket, and radix sort, would be accepted for publication. I know that our algorithm doesn't achieve the best known upper bound for integer sorting, $O(n\sqrt{\log\log n})$, but in some ways it is better than existing algorithms and is practical.

    Summary of our new algo:

    As per our analysis, our sorting algorithm, which is non-comparison based, has expected time complexity $O(n\lceil(\log k/ \log n) \rceil)$, where $k$ is the range of the integers. (i) Our algorithm doesn't require a distribution on the keys, unlike bucket sort; (ii) there is no limit on the number of digits, unlike radix sort ($O(n\log_b k)$ where $b$ is the radix); (iii) for $k=O(n^2)$, counting sort runs in $O(n^2)$ while our algorithm still runs in $O(n)$. In practice, our algorithm ranks second among the four algorithms compared, and first when $k=O(n^2)$.
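    For instance (this arithmetic is mine, not from the post), taking $k = n^2$ in the stated bound gives $$n\left\lceil \frac{\log k}{\log n} \right\rceil = n\left\lceil \frac{\log n^2}{\log n} \right\rceil = 2n = O(n),$$ which is the claim in (iii).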

    by chandresh at August 01, 2015 02:41 PM

    CompsciOverflow

    How can a cyclic tag system halt with an output?

    For example, say we have an abstract program that, given a finite binary string as input, removes all of the zeros (i.e. 0010001101011 evaluates to 111111), which is definitely a Turing-computable function.

    How can a cyclic tag system compute this (which it can, by definition of it being Turing-complete) when it only halts when it reaches the empty string? The Wikipedia article gives an example of converting to a 2-tag system, but it adds an emulated halt that the original system does not have.

    I can't find any reference to how a cyclic tag system halts meaningfully. What is its output supposed to be? I've considered things like

    • Number of steps (but then input restricts possible output without some kind of fancy encoding I can't find)
    • The last production (but that only has a finite output range)
    • Fixed points (which can't be detected in this system and only exist with very limited production rules and inputs)

    but they don't work, at least not in any way I can see.

    by user1657355 at August 01, 2015 02:35 PM

    StackOverflow

    Cannot install openjdk-7-jdk on Ubuntu 14 [on hold]

    I'm currently using ubuntu 14 as my OS. I want to install openjdk-7-jdk on Ubuntu but I have an error when I typed sudo apt-get install openjdk-7-jdk. Here is the error message :

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
      google-chrome-stable : Depends: libappindicator1 but it is not going to be installed
      openjdk-7-jdk : Depends: openjdk-7-jre (= 7u79-2.5.6-0ubuntu1.14.04.1) but it is not going to be installed
                      Recommends: libxt-dev but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    

    I've tried sudo apt-get -f install openjdk-7-jdk but it did not work.

    Please help me. Thanks.

    by kyan at August 01, 2015 02:34 PM

    /r/clojure

    Can we go further?

    Clojure is a super language for writing asynchronous code. Immutable data structures make sharing data between threads very easy. And mutable data is typically accessed only via thread-safe wrappers like atom or agent. No need now for locks and race conditions are largely a thing of the past. What could be better?

    And yet, one huge impediment to asynchronous programming remains: You can not trust it. In general, there is no assurance that an asynchronous request will complete. And it is up to our application code to handle the matter.

    OK, you can timeout and resubmit. And hopefully anything that got messed up will eventually be restarted. But are your requests idempotent? (If they silently completed, will resubmitting them cause a problem?) And doesn't timeout/resubmit cause problems for loaded systems?

    I believe we can do better by writing systems that guarantee a response to every request, though that response might be as rude as "something happened." My mantra is "improved error handling through the use of 2-way messaging." And "basic sanity checks can go a long way to improving robustness."

    To me, it all comes down to maintenance. Move a lot of weird error handling code from the application and we add clarity while eliminating a major source of bugs. And by adding support for 2-way messages, we can also add some simple heuristics for catching bugs.

    Asynchronous programming is where we are headed. Clojure makes it easy to do many things right. But there is still room for improvement.

    submitted by laforge49

    August 01, 2015 02:31 PM

    CompsciOverflow

    Why is there more frequent overflow in normalised Floating Point

    I read that overflow is more frequent when we work with normalised mantissas. Why is this? Is it because when we adopt a normalised representation, our range is smaller than in an unnormalised representation?

    Source: I read this in: Hwang, K. Computer arithmetic. John Wiley & sons, 1979. Chapter 10 "Advanced topics on floating-point arithmetic (page 323)" It says this:

    The operands are not required to be normalized, overflow may be expected to occur less frequently in unnormalized arithmetic.

    by Joseph at August 01, 2015 02:31 PM

    Solving a bivariate recurrence equation

    I'm dealing with this recurrence equation:

    $\qquad\displaystyle T(m, n) = T(m/2, n/2) + T(m, n/2) + O(1)$.

    Any idea how to solve this? I've looked into a few advanced resources, but the literature on two-dimensional recurrence equations seems to be very thin.

    by ShitalShah at August 01, 2015 02:25 PM

    Need help constructing a Deterministic Finite Automata with AT MOST

    I am stuck constructing a DFA that has an "at most" condition in it.

    The question I am stuck with is:

    Design a DFA that accepts all strings over {0, 1} that contain at most two 00’s and at most three 11’s as substrings.

    An answer in a form of a graph would be perfect but if someone can explain in plain words, I will appreciate that as well.

    by Gana at August 01, 2015 02:16 PM

    How to read BNF syntax of C?

    Today I heard of BNF, which is a language for describing languages. I also heard that it specifies the entire syntax of C in four pages. So I thought of checking it out. After reading through this page, I got a fair idea of how to read the syntax.

    But I don't understand what the first two lines mean:

    %token int_const char_const float_const id string enumeration_const
    %%
    

    Also, I am reading BNF just because I am curious. How should I proceed to understand the formal syntax of C by reading the BNF?

    Here is the BNF syntax of C.

    by daltonfury42 at August 01, 2015 02:08 PM

    Optimality of EDF

    For Earliest Deadline First, I have read that if utilization is less than or equal to 1, the task set is schedulable under EDF. But does this criterion express the optimality of EDF? If not, how can we express the optimality of EDF? So if I am asked whether EDF is optimal, can I get away with the following sentence?

    EDF is able to execute T if and only if some other scheduler can execute T.

    by Saeid Yazdani at August 01, 2015 01:54 PM

    TheoryOverflow

    On ST lower bound

    I saw this question Best current space lower bound for SAT? and am curious: to even store a SAT instance you need $\Omega(n)$ space, and to read it you need $\Omega(n)$ time. Doesn't this prove an $\Omega(n^2)$ ST lower bound? Am I missing something about ST lower bounds?

    by Turbo at August 01, 2015 01:36 PM

    StackOverflow

    Deal with futures in a loop

    I am working on a method that makes a query to the database and takes one row. This row has a column specifying the parent id, if any. So, my method has a closure named "iterate" that does the same process if the last row has a parent, and finally the method returns a sequence of those rows. This is simple at first sight, but I have to deal with futures and that stuff, and I do not have much experience with async programming. So my question is:

    Is there a way to do this method right without the use of "Await"?

    /**
       * Returns all the parents of the given sector if any
       * @param childSector
       * @return
       */
      def getParents(childSector: ShopSector): Future[Option[Seq[ShopSector]]] = {
        val p = Promise[Option[Seq[ShopSector]]]
        val f: Future[Option[Seq[ShopSector]]] = p.future
    
        val parentsSeq: Seq[ShopSector] = Seq()
    
        f.onComplete( thing => println(s"Result from Iteration future: $thing") )
    
        def iterate(sector: ShopSector): Unit = {
          val query = for {
            c <- ShopSectors if c.id === sector.id
            p <- c.parent
          } yield p
    
          exists(sector.parent_id).map { exists =>
            if (exists) {
              db.run(query.result.head).map { parent =>
                println(s"Result parent: $parent")
                parentsSeq +: Seq(parent)
                iterate(parent)
              }
            } else {
              p success Option(parentsSeq)
            }
          }
        }
    
        iterate(childSector)
    
        f
      }
    

    I am using Slick by the way. And notice that this method is not working well. It returns an empty Seq, and I know it is obvious that this is going to return that. But the print works fine and prints the right results. The thing is that I can't imagine a way to have a variable that doesn't "disappear" before all the futures are completed.

    Thanks in advance.

    EDIT: Okay guys. The problem was so simple. As Ixx said, the parentsSeq collection is not mutable. I fixed it by using a ListBuffer and then by converting it to a sequence.
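    For what it's worth, here is a minimal sketch (mine, not the original code) of one way to build the parent chain without Await or shared mutable state: recurse inside flatMap and thread an accumulator through. findParent is a hypothetical helper standing in for the Slick query above, assumed to return Future[Option[ShopSector]], and an implicit ExecutionContext is assumed to be in scope.

    import scala.concurrent.{ExecutionContext, Future}

    def getParents(child: ShopSector)(implicit ec: ExecutionContext): Future[Seq[ShopSector]] = {
      def loop(sector: ShopSector, acc: Seq[ShopSector]): Future[Seq[ShopSector]] =
        findParent(sector).flatMap {
          case Some(parent) => loop(parent, acc :+ parent) // keep climbing
          case None         => Future.successful(acc)      // reached the root
        }
      loop(child, Seq.empty)
    }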

    by Asier Paz at August 01, 2015 01:31 PM

    TheoryOverflow

    Claw finding using quantum walk: superposition for Szegedy's framework

    In Claw Finding Algorithms Using Quantum Walk the subroutine $claw_{detect}$ is described. As in the above paper: let $J_F(N, l)$ and $J_G(M, m)$ be Johnson graphs. Let $F$ and $G$ be vertices of $J_F$ and $J_G$. Let $(F, G)$ be a vertex of the product of the Johnson graphs. Two vertices in the graph product are connected if $F$ is adjacent to $F'$ and $G$ is adjacent to $G'$.

    On page 5 the superposition created in order to apply Szegedy's framework is described as: $|\psi_0\rangle = \frac{1}{\sqrt{\binom{N}{l}\binom{M}{l}l (N-l) m(M-m)}} \otimes |F, G, L_{F, G} \rangle | F', G', L_{F', G'} \rangle$

    Taking the term under the square root in the amplitude: that would be all possibilities to connect any two vertices in the graph $J_F \times J_G$. The algorithm processes connected vertices in each "step".

    Question: Is the above reading of that term correct? If so, why is it not sufficient to create a superposition over all connected vertices in the graph $J_F \times J_G$?

    I do see that on a quantum level the ''few more connected vertices'' do not have much of an impact on the running time, but does it have some kind of impact on the algorithm?

    by Fleeep at August 01, 2015 01:29 PM

    StackOverflow

    Using comparison operators in Scala's pattern matching system

    Is it possible to match on a comparison using the pattern matching system in Scala? For example:

    a match {
        case 10 => println("ten")
        case _ > 10 => println("greater than ten")
        case _ => println("less than ten")
    }
    

    The second case statement is illegal, but I would like to be able to specify "when a is greater than 10".
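    For reference, Scala supports pattern guards (an if condition after the pattern), which can express exactly this; a minimal sketch (mine, not from the question):

    val a = 12
    a match {
      case 10          => println("ten")
      case x if x > 10 => println("greater than ten")
      case _           => println("less than ten")
    }
    // prints "greater than ten"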

    by Brian Heylin at August 01, 2015 01:26 PM

    CompsciOverflow

    Undefined behaviour when composing primitive-recursive with $\mu$-recursive functions?

    It is quite easy to show that the following two functions are primitive recursive and thus also $\mu$-recursive: $$ifthen(n,a,b) = \begin{cases}a & n > 0 \\ b & else\end{cases} $$ $$ eq(x,y) = \begin{cases} 1 & x = y \\ 0 & else\end{cases} $$

    Now, given a $\mu$-recursive function $f$ which is not total, i.e. whose domain $\mathcal{D}_f$ is a proper subset of $\mathbb{N}_0$, we can extend it to a function $f^\prime$ on $\mathcal{D}_{f^\prime} = \mathcal{D}_f\cup\{x^\prime\}$ for a fixed $x^\prime\not\in\mathcal{D}_f$ via $$ f^\prime(x) = \begin{cases}x^\prime & x = x^\prime \\ f(x) & else\end{cases}$$ and this function $f^\prime$ can be expressed in terms of $ifthen$, $eq$, the constant function $x\mapsto x^\prime$ and $f$ as a composition of $\mu$-recursive functions as $$ g(x) = ifthen(eq(x,x^\prime), x^\prime, f(x)).$$

    Here is the interesting question: Is there a consistent way to define at what time the arguments to any $\mu$-recursive function, and the function $g$ above in particular, are evaluated?

    Let me illustrate the problem:

    Assuming all arguments of $ifthen$ are evaluated before being passed to $ifthen$, then $g(x^\prime)$ will not return $x^\prime$ since the computation of $f(x^\prime)$ never terminates.

    We could also assume some kind of "lazy evaluation" where arguments are only evaluated when they really are needed. Then, when implementing $ifthen$ using primitive recursion as $$ ifthen(0, a, b) = b \\ ifthen(n+1, a, b) = a,$$ the function $g$ would really be equal to the function $f^\prime$ as $f$ will never be called with argument $x^\prime$. But there is a different way to implement $ifthen$ using just arithmetic operations $$ ifthen(n, a, b) = a \cdot (1 \dot{-} (1 \dot{-} n)) + b \cdot (1 \dot{-} n)$$ where $\dot{-}$ is the modified difference $\dot{-}:\mathbb{N}_0\times\mathbb{N}_0 \rightarrow \mathbb{N}_0$ defined by $$ x \dot{-} y = \begin{cases}x - y & x \geq y \\ 0 & else \end{cases}.$$ As primitive recursive functions, both implementations are equivalent and really compute $ifthen$, but if we assume $g$ uses this second implementation, then $f(x^\prime)$ again has to be evaluated and the computation of $g(x^\prime)$ never terminates, in particular $g(x^\prime) \neq f^\prime(x^\prime)$.
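    The difference between the two evaluation orders can be made concrete in an ordinary programming language; a minimal Scala sketch (the names and code are mine, not part of the question), where by-name parameters play the role of lazy evaluation:

    // Strict version: both branch arguments are evaluated before the call.
    def ifthenStrict(n: Int, a: Int, b: Int): Int = if (n > 0) a else b

    // Lazy version: a branch argument is evaluated only if that branch is selected.
    def ifthenLazy(n: Int, a: => Int, b: => Int): Int = if (n > 0) a else b

    def diverge: Int = { while (true) {}; 0 } // stands in for the non-terminating f(x')

    // ifthenLazy(1, 42, diverge) returns 42, while ifthenStrict(1, 42, diverge)
    // never returns, because diverge is forced before the call, mirroring the
    // "arguments are evaluated first" reading of g(x') above.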

    This seems odd to me, as computability of functions should not depend on the particular implementation of a function, but I have not been able to come up with a solution to fix that problem.

    EDIT: More formally speaking, if we introduce the symbol $\bot$ for non-termination, the function $f$ above can be interpreted as a function $$ f:\mathbb{N}_0\rightarrow\mathbb{N}_0\cup\{\bot\} \\ f(x) = \begin{cases} f(x) & x\in\mathcal{D}_f \\ \bot & else\end{cases}. $$ Using this notation, every $\mu$-recursive function $$ h : \mathbb{N}_0^k \rightarrow \mathbb{N}_0\cup\{\bot\} $$ can be extended to a $\mu$-recursive function $$ h^\prime : (\mathbb{N}_0\cup\{\bot\})^k \rightarrow \mathbb{N}_0\cup\{\bot\} \\ h^\prime(\bar{x}) = \begin{cases} h(\bar{x}) & \bar{x}\in\mathbb{N}_0^k \\ \bot & else \end{cases}. $$ This corresponds to evaluation of arguments before the function $h$ is called.

    Unfortunately, while this extension is sometimes unique (e.g. for the function $eq$ above), most of the time it is not. Especially in the case of a k-ary constant function it even is more natural to extend it to $$ const_c^\prime : (\mathbb{N}_0\cup\{\bot\})^k \rightarrow \mathbb{N}_0 \\ const_c^\prime(x_1,\dots,x_k) = c $$ instead.

    The question I'm asking is the following: Given the implementation of a $\mu$-recursive function, is there a way to infer which extensions are / are not $\mu$-recursive?

    One approach I tried is always assuming lazy evaluation of arguments. This yields the "correct" extension in the case of constant functions, $eq$ and the first implementation of $ifthen$, where the extension I need to be $\mu$-recursive in order to implement the function $f$ fulfills $$ ifthen^\prime : (\mathbb{N}_0\cup\{\bot\})^3 \rightarrow \mathbb{N}_0\cup\{\bot\} \\ ifthen^\prime(n, a, b) = \begin{cases} b & (n,a,b)\in {0}\times(\mathbb{N}_0\cup\{\bot\})\times\mathbb{N}_0 \\ a & (n, a, b) \in \mathbb{N}_{>0}\times\mathbb{N}_0\times(\mathbb{N}_0\cup\{\bot\}) \\ \bot & else\end{cases}. $$

    But if I were just given the second implementation of $ifthen$, lazy evaluation just yields the canonical extension and not the one I require. Then the proof that $f^\prime$ is $\mu$-recursive would be incomplete unless I find the first implementation of $ifthen$ (or any other implementation of it that leads to the "correct" extension when assuming lazy evaluation); moreover, it would be incorrect if there were no such implementation of $ifthen$.

    How can I identify the possible behaviours of compositions of $\mu$-recursive functions?

    by Xodion at August 01, 2015 01:10 PM

    StackOverflow

    How to use Java Stream map for mapping between different types?

    I have two arrays of equal size:

    1. int[] permutation
    2. T[] source

    I want to do something like this:

    Arrays.stream(permutation).map(i -> source[i]).toArray();
    

    But it won't work saying: Incompatible types: bad return type in lambda expression

    by Dmytrii Shchadei at August 01, 2015 01:07 PM

    Scala XML change literal node

    For now I have XML like this:

    <root>
    <snippet>
      <title></title>
      <content></content>
    </snippet>
    <snippet>
      <title></title>
      <content></content>
    </snippet>
    <snippet>
      <title></title>
      <content></content>
    </snippet>
    .
    .
    .
    </root>
    

    And I want to replace the n-th snippet, for example:

       xml = XML.loadFile(xmlFilePath)
        val snippets = xml \\ "root" \\ "snippet"
    

    Now I can refer to each by snippets(1), snippets(2), snippets(3), etc.

    And now, how can I change/replace, for example, snippets(5)?
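    In case it helps, here is a minimal sketch (mine, not from the question) of one way to do this: rebuild root with the n-th snippet swapped out. The helper name replaceSnippet is hypothetical.

    import scala.xml._

    def replaceSnippet(root: Elem, n: Int, replacement: Node): Elem = {
      var index = -1
      val newChildren = root.child.map {
        case e: Elem if e.label == "snippet" =>
          index += 1
          if (index == n) replacement else e // swap only the n-th snippet
        case other => other                  // keep whitespace, comments, etc.
      }
      root.copy(child = newChildren)
    }

    // usage, replacing snippets(5):
    // val updated = replaceSnippet(xml, 5,
    //   <snippet><title>new title</title><content>new content</content></snippet>)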

    by user3123906 at August 01, 2015 12:54 PM

    Dave Winer

    Why I blog, part 811

    My blog posts always "just write themselves." If they don't I do something else.

    For me, a blog post is just the culmination of something I've been thinking about or a story I've told in person a dozen times.

    It has to pop to the top of my consciousness a dozen times or more before I'm ready to write about it.

    Therefore I blog because:

    1. I think,

    2. And tell stories.

    August 01, 2015 12:47 PM

    StackOverflow

    Spark streaming on Yarn Error while creating FlumeDStream java.net.BindException: Cannot assign requested address

    I am trying to create a Spark stream from Flume using the push-based approach. I am running Spark on my YARN cluster. While starting the stream it is unable to bind the requested address. I am using the Scala shell to execute the program; below is the code I am using:

    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.StreamingContext._
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.flume._
    var ssc = new StreamingContext(sc,Seconds(60))
    var stream = FlumeUtils.createStream(ssc,"master.internal", 5858);
    stream.print()
    stream.count().map(cnt => "Received " + cnt + " flume events." ).print()
    ssc.start()
    ssc.awaitTermination()
    

    The Flume agent is unable to write to this port since this code is unable to bind port 5858.

    Flume Stack Trace :


     [18-Dec-2014 15:20:13] [WARN] [org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:294) 294] Unable to create Rpc client using hostname: hostname, port: 5858
    org.apache.flume.FlumeException: NettyAvroRpcClient { host: hadoop-master.nycloudlab.internal, port: 7575 }: RPC connection error
            at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:178)
            at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:118)
            at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:624)
    
    
    
    Caused by: java.io.IOException: Error connecting to /hostname:port
            at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:280)
            at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:206)
            at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:155)
            at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:164)
            ... 18 more
    Caused by: java.net.ConnectException: Connection refused
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
            at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:396)
            at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:358)
            at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:274)
            ... 3 more
    

    The stack trace from Spark Streaming is below:

        14/12/18 19:57:48 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: <server-name>/IP:5858
            at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
            at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
            at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:157)
            at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
            at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
            at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
            at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
            at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
            at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
            at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
            at org.apache.spark.scheduler.Task.run(Task.scala:54)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:662)
    Caused by: java.net.BindException: Cannot assign requested address
            at sun.nio.ch.Net.bind(Native Method)
            at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
            at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
            at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
            at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
            at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
            at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
            ... 3 more
    

    by Aaditya Raj at August 01, 2015 12:31 PM

    TheoryOverflow

    Can choice or sequential execution be expressed with other basic operators of the pi calculus?

    There are several definitions of which basic operators constitute the pi calculus (see papers by R. Milner, B. Pierce, and J. Wing).

    The common basic operators ($C$) are:

    • the parallel composition
    • reading from a named channel
    • writing to a named channel
    • restriction to get a new named channel
    • replication to get an infinite number of parallel copies of a process
    • the inert process that does nothing.

    Sometimes, one of the following two operators is additionally mentioned as a basic operator:

    • the sequential operator for sequential execution;
    • the summation operator to choose between processes.

    Is $C$ together with the sequential operator equivalent to $C$ with the summation operator? Are they even equivalent to $C$ without any further operator? Why (not)?

    by DaveBall aka user750378 at August 01, 2015 12:11 PM

    Fred Wilson

    Video Of The Week: The Walkoff Homer

    I apologize if you came here looking for the business/tech section and landed on the sports section. But that’s how it’s going to be today.

    I grew up an army brat moving every year. I was a baseball fan and my teams were the A’s and the Pirates, the two most colorful teams in baseball in the 70s. When we arrived in NYC in 1983, I had two choices, the Mets and the Yankees. There was no way I was going to be a Yankees fan, so the Mets were the default choice but not one I was excited about.

    A year later in the summer of 1984, I arrived back in NYC from a business trip on a steamy July night, just like this week has been, and got in a taxi at LaGuardia. Back then the taxis did not have AC so we drove into Manhattan with the windows down and the breeze in our faces. The taxi driver had the Mets game crackling on the radio, the best way to consume baseball in my opinion. The Mets had their young rookie pitcher Dwight Gooden on the mound and it was late in the game and he was striking out everyone. It was mesmerizing to listen to this kid strike out batter after batter. I got home, turned on the game in our apartment, watched the end of it, and have been a dedicated Met fan ever since.

    The early years of my Met-fandom were easy. The 80s were a great time to be a Met fan. The rest of my time in NYC, not so much.

    But this year has been different. The Mets have pitching, lots of it. And so I’ve been watching more closely all summer long.

    Last night, after dinner and after our guests retired for the night, Josh and I turned on the Mets Nationals game. Matt Harvey was in fine form and, as usual, the Mets were not hitting. Harvey stayed in an inning too long, lost the lead, and the game went into extra innings. Finally, in the 12th inning, Wilmer Flores hit this walk off homer and the Mets are now two games out of first place with Yoenis Cespedes on a plane to NYC. I think we’ll be watching a lot of Mets games the rest of this season.

    by Fred Wilson at August 01, 2015 11:38 AM

    TheoryOverflow

    Construction of a graph which has regular subgraphs at each iteration of a recursive process

    I am studying Graph Isomorphism and also trying to figure out the complexity of a certain class of graphs. The graph I am studying at the moment is described below.

    Description: $G$ is an $r$-regular, $k$-connected graph (not a complete graph or a cycle graph). A vertex of $G$ is $x_1$. All vertices which are not adjacent to $x_1$ form a subgraph $C_1$. All vertices adjacent to $x_1$ form a subgraph $D_1$. A vertex of $D_1$ is $x_2$.

    Using the same method, based on adjacency to $x_2$, $D_1$ can be divided.

    All vertices which are not adjacent to $x_2$ form a subgraph $C_2$.

    All vertices adjacent to $x_2$ form a subgraph $D_2$. In general, $D_{y-1}$ is a graph and can be divided/partitioned into two subgraphs $C_y$ and $D_y$.

    There are 2 restrictions, they are-

    1. $C_y$ and $D_y$ are $s_y$-regular and $t_y$-regular graphs respectively, with $s_y, t_y > 0$ and $s_y \neq t_y$, for every iteration $y$.

    2. $C_y$ and $D_y$ cannot be a complete bipartite graph (utility graph), a complete graph, or a disjoint union of complete graphs.

      $G$ can be divided/partitioned at most $\log_2(|G|)$ times by using this dividing process recursively.

    Questions :

    1. Does such a graph as described above (satisfying restriction 1, 2, or both) exist in the current literature?
    2. If not, how can I characterize such a graph (e.g. in terms of the total number of vertices, edges, graph spectra, or parameters of a strongly regular graph) so that the resulting graph satisfies both rules (or the first rule) for all $y$?

      Any advice will help.

    by Jim at August 01, 2015 11:13 AM

    Advogato

    HOWTO Light A Fire Under A Solicitor

    Subject: General Inquiries

    Dear Mister Storage,

    Please reply with the name and street address of Maritime Moving and Storage's Agent For Service Of Process so my Process Server can serve my complaint.

    I am the plaintiff and Maritime Moving and Storage is the respondent.

    I am filing a United States Federal complaint against your firm for Unlawful Restraint Of Trade.

    You put the arm on Damn near every last one of the tools of my trade when my quite elderly and quite profoundly mentally ill mover directed you to ship MY property to Vancouver, Washington.

    This despite that you yourselves specifically informed my mother that it was a violation of Canadian as well as United States law for you to do so.

    You shipped my stolen property to Vancouver, British Columbia when you sweet-talked my mother into providing a photocopy of her own passport as well as her own name and handwritten signature on my stolen property's customs declaration.

    Mom's signature is felony perjury however she was not at the time mentally competent to sign contracts.

    At Maritime Moving and Storage's own expense, I shall retain one of your friendly competitors to return my stolen property to the U-Haul climate-controlled storage facility on Windmill Road in Dartmouth. Most distressing to me these past few years is that your theft of my grandfather's piano prevents me from playing it when I visit Halifax from time to time to solicit potential clients.

    While I am a United States citizen, I have a Canadian Social Insurance Number, a Work Permit (ie. temporary residency) and hope to become a Canadian citizen someday.

    Elder Abuse is QUITE a serious criminal offense.

    Clark County Washington Adult Protective Services is already working to help my mother get her money back, I expect that the Halifax Regional Municipality's Adult Protective Services will be happy to point out the error of Maritime Moving and Storage's ways by putting every last one of you sorry lot behind bars for a good long time.

    I can't help you there. Doubtlessly the damages you pay out of my lawsuit's judgement - no, I will not accept a settlement, only judgement creates precedent - will return at least a small portion of my father's life insurance to my mother.

    My beloved mother, Patricia Ann Crawford is now penniless and will soon be homeless as well. My hope has always been to provide for Mom's Golden Years, but she has no "Golden" years because you right chaps robbed my mother blind.

    Have A Nice Day.

    Michael David Crawford, Consulting Software Engineer
    mdcrawford@gmail.com
    +1 (503) 688-8345


    Solving the Software Problem: a Taxonomy of Error

    August 01, 2015 10:55 AM

    TheoryOverflow

    Hierarchy theorem for NTIME intersect coNTIME?

    $\newcommand{\cc}[1]{\mathsf{#1}}$Does a theorem along the following lines hold: If $g(n)$ is a little bigger than $f(n)$, then $\cc{NTIME}(g) \cap \cc{coNTIME}(g) \neq \cc{NTIME}(f) \cap \cc{coNTIME}(f)$?

    It's easy to show that $\cc{NP} \cap \cc{coNP} \neq \cc{NEXP} \cap \cc{coNEXP}$, at least. Proof: Assume not. Then $$\cc{NEXP} \cap \cc{coNEXP} \subseteq \cc{NP} \cap \cc{coNP} \subseteq \cc{NP} \cup \cc{coNP} \subseteq \cc{NEXP} \cap \cc{coNEXP},$$ so $\cc{NP} = \cc{coNP}$, and hence (by padding) $\cc{NEXP} = \cc{coNEXP}$. But then our assumption implies that $\cc{NP} = \cc{NEXP}$, contradicting the nondeterministic time hierarchy theorem. QED.

    But I don't even see how to separate $\cc{NP} \cap \cc{coNP}$ from $\cc{NSUBEXP} \cap \cc{coNSUBEXP}$, as diagonalization seems tricky in this setting.

    by William Hoza at August 01, 2015 10:21 AM

    StackOverflow

    can a scala match statement mach something which is not listed in any case? without the underscore?

    For a Scala match expression like this:

    something match {
         case "oneThing" => doOneThing()
         case "anotherThing" => doAnotherThing()
    }
    

    Now, the point to be considered here is that there is no wildcard used, so clearly for a value of

    something = "yetAnotherThing"

    nothing should be executed from any of the cases, if I'm thinking correctly. Or is there something I am missing?
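    For what it's worth, a minimal sketch (mine, not from the question) of what actually happens: a match with no matching case and no wildcard throws scala.MatchError at runtime rather than silently executing nothing.

    val something = "yetAnotherThing"
    try {
      something match {
        case "oneThing"     => println("one thing")
        case "anotherThing" => println("another thing")
      }
    } catch {
      // no case matched and there is no wildcard, so a MatchError is thrown
      case e: MatchError => println(s"no case matched: ${e.getMessage}")
    }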

    by Himanshu Mehra at August 01, 2015 10:20 AM

    compare RxScala with akka stream [on hold]

    I am trying to compare RxScala with Akka Streams. They both provide streams that support publish and subscribe with back pressure.

    My research shows:

    RxScala is easier to understand and use. I feel it is suitable for application code. On the other hand, akka stream gives fine-grained control (e.g. configurable stages) but has a steeper learning curve. I feel Akka Streams is good for framework or library code.

    Also, it is easy to use RxScala to migrate an existing blocking application to non-blocking code. See Netflix's migration experience

    However, according to Roland Kuhn's Reactive Streams Presentation, RxJava/RxScala has the following drawbacks:

    • Implements a pure "push" model
    • Only allows blocking for back pressure
    • Uses unbounded buffering for crossing an async boundary

    Neither is perfect. I would like to read more about others' experiences when these tools are used in the real world. If I start an application project from scratch (without the concern of legacy code), which solution should I use to benefit in the long term? Thanks.

    by jiangok at August 01, 2015 10:08 AM

    StackOverflow

    Ansible: How to inherit variables?

    I want to achieve variable inheritance in Ansible. Lets say I have:

    group_vars/all

    ---
    ifaces:
       -   name: eth0
           adress: 10.107.13.236
           netmask: 255.255.255.192
           routes:
               - {from: 10.108.100.34/31, via: 10.107.13.193}
               - {from: 10.108.105.128/31, via: 10.107.13.193}
       -   name: eth1
           adress: 10.112.13.236
           netmask: 255.255.255.192
           gateway: 10.112.13.193
           routes:
               - {from: 10.2.1.0/26, via: 10.112.13.254}
    

    Now I want to extend the routes of eth0, like this:

    group_vars/webserver

    ---
    ifaces:
       -   name: eth0
           routes:
               - {from: 1.2.3.34, via: 5.6.7.8}
               - {from: 9.9.9.9/9, via: 5.6.7.8}
    

    My desired result is:

    ---
    ifaces:
       -   name: eth0
           adress: 10.107.13.236
           netmask: 255.255.255.192
           routes:
               - {from: 10.108.100.34/31, via: 10.107.13.193}
               - {from: 10.108.105.128/31, via: 10.107.13.193}
               - {from: 1.2.3.34, via: 5.6.7.8}
               - {from: 9.9.9.9/9, via: 5.6.7.8}
       -   name: eth1
           adress: 10.112.13.236
           netmask: 255.255.255.192
           gateway: 10.112.13.193
           routes:
               - {from: 10.2.1.0/26, via: 10.112.13.254}
    

    So the routes should be extendend and not overwritten. I know about setting hash_behaviour: merge in ansible.cfg but that does not satisfy my needs, because I want to append values to the list stored in routes.

    The background is that I need to be able to define some standard routes (note: this is not limited to routes; it is just an example) and extend these standards for specific groups instead of overriding them.

    Is this possible in Ansible?

    by Johannes at August 01, 2015 09:54 AM

    QuantOverflow

    Difference between Closing Price, Last traded price and Settlement Price for option contracts?

    What is the difference between the closing price, the last traded price, and the settlement price?

    I got the difference between Closing Price and Settlement price from previous post : The difference between Close price and Settelment Price for future contracts

    but I am still confused about how the closing price differs from the last traded price.

    by Neeraj at August 01, 2015 09:41 AM

    What is a stationary process?

    How do you explain what a stationary process is? In the first place, what is meant by process, and then what does the process have to be like so it can be called stationary?

    by user40 at August 01, 2015 09:35 AM

    TheoryOverflow

    Extractor with somewhat corrupted seeds

    In a conditional min-entropy extractor, there is a joint distribution $(X,Y)$ such that if the average min-entropy (for some appropriate notion of it) ${\rm H}_\infty(X|Y)$ is large, then ${\rm Ext}(X, Z)$, where $Z$ is a seed distribution that is independent of both $X$ and $Y$, looks random. This can be equivalently viewed as saying that an "adversary" who has some limited knowledge $Y$ of $X$ cannot predict ${\rm Ext}(X, Z)$.

    My question concerns the case where the adversary knows something about $Z$. I model this situation as that there is another random variable $M$ (say of $O(1)$ bits), that is allowed to be correlated with both $X$ and $Z$. Of course, if there is no structure about $M$, then there is no hope. As an example, suppose that $M = {\rm Ext}_1(X, Z)$ where $\rm Ext_1$ is the projection of $\rm Ext$ to its first bit, then clearly one cannot hope that the joint distribution $(M, Y, {\rm Ext}(X, Z))$ looks random.

    My question is then about the case where we do know something about $M$. Specifically, consider the case that the correlation between any bit of $M$ and any bit of ${\rm Ext}(X, Z)$ is small. In other words, $(M, {\rm Ext}(X, Z))$ is close to $(M, U)$ where $U$ is the uniform distribution (or, modeled in another way, $(M, {\rm Ext}(X, Z))$ is close to $(U, {\rm Ext}(X, Z))$).

    Do we know extractor constructions in this situation? Any pointer or reference? This problem seems pretty natural.

    Thanks.

    by Xi Wu at August 01, 2015 09:16 AM

    Substitution in Resolution Proofs

    Let $F = C_1 \wedge C_2\; \wedge ... \wedge\; C_m$ be a unsatisfiable $k$-CNF on variables $x_1,...,x_n$, where $k$ is constant.

    Let $x_j\rightarrow x_j^1\wedge x_j^2$ be a substitution that replaces each variable $x_j$ with the conjunction $x_j^1\wedge x_j^2$, where $x_j^1$ and $x_j^2$ are two new variables. Let $G$ be the formula which is obtained from $F$ by applying the substitution above to all variables, and then converting it back to a CNF formula.

    Clearly, $F$ is unsatisfiable if and only if $G$ is unsatisfiable. Additionally, $G$ is a $O(k)$-CNF with at most $2^{k}\cdot m$ clauses, since intuitively each clause of F is expanded into at most $2^{k}$ clauses.

    Question: Suppose we have a resolution refutation $P$ for $F$ of size $s$. Can $P$ be converted into a resolution refutation for $G$ of size $s^{O(1)}$?

    I feel that there might be a standard trick for obtaining such a conversion, but I couldn't find any reference. The idea I had in mind was to apply the substitution on each clause appearing in the proof, and then use the fact that resolution is implicationally complete. But this would only work for refutations of small width, since each clause of width $r$ may get expanded into up to $2^r$ clauses.

    References or suggestions are very welcome.

    by tori at August 01, 2015 09:12 AM

    StackOverflow

    What Singletons does Play! Framework 2.4 provide out of the box?

    Recently, Play! Framework 2.4 introduced us to the magic world of Dependency Injection and its benefits, but what application-specific singletons are there? Digging through the documentation, I've found a couple already:

    • ActorSystem
    • Application
    • Configuration

    Are there any more? Is there a central place where all these are listed?

    by Martijn R at August 01, 2015 08:39 AM

    QuantOverflow

    Range options in BS

    I know how barrier options are priced in Black-Scholes scheme.

    I'm wondering if an analytical formula also exists for range (corridor) digital options, i.e. options paying only if the price remains between a lower and an upper barrier.

    I think that if the joint distribution of the minimum and the maximum of a Wiener process is known, an analytical pricing formula should exist. I didn't find it in the literature.

    by jimifiki at August 01, 2015 08:38 AM

    StackOverflow

    Json format with Play 2.4 / Scala

    I know this question has been asked a thousand times, but I can't figure out why this code does not work properly.

    case class Partner
    (_id: Option[BSONObjectID], name: String, beacon: Option[Beacon])
    
    class PartnerFormatter @Inject() (val beaconDao: BeaconDao){
      implicit val partnerReads: Reads[Partner] = (
          (__ \ "_id").readNullable[String]and
          (__ \ "name").read[String] and
          (__ \ "beacon").read[String]
        )((_id, name, beaconID) => Partner(_id.map(BSONObjectID(_)), name, Await.result(beaconDao.findById(beaconID), 1 second)))
    
      implicit val partnerWrites: Writes[Partner] = (
            (JsPath \ "_id").writeNullable[String].contramap((id: Option[BSONObjectID]) => Some(id.get.stringify)) and
            (JsPath \ "name").write[String] and
            (JsPath \ "beacon").writeNullable[String].contramap((beacon: Option[Beacon]) => Some(beacon.get._id.get.stringify))
          )(unlift(Partner.unapply))
    }
    

    And I'm facing

    No Json deserializer found for type models.Partner. Try to implement an implicit Reads or Format for this type
    

    Or

    No Json deserializer found for type models.Partner. Try to implement an implicit Writes or Format for this type
    

    Shouldn't it be working ?

    by buzz2buzz at August 01, 2015 08:30 AM

    TheoryOverflow

    Linear diophantine equation in non-negative integers

    There's only very little information I can find on the NP-complete problem of solving linear diophantine equation in non-negative integers. That is to say, is there a solution in non-negative $x_1,x_2, ... , x_n$ to the equation $a_1 x_1 + a_2 x_2 + ... + a_n x_n = b$, where all the constants are positive? The only noteworthy mention of this problem that I know of is in Schrijver's Theory of Linear and Integer Programming. And even then, it's a rather terse discussion.

    So I would greatly appreciate any information or reference you could provide on this problem.

    There are two questions I mostly care about:

    1. Is it strongly NP-Complete?
    2. Is the related problem of counting the number of solutions #P-hard, or even #P-complete?

    by 4evergr8ful at August 01, 2015 08:21 AM

    Analogues of different complexity classes in various models

    We suspect following relation: $$TC^0\subsetneq NC^1\subsetneq L\subsetneq NL\subsetneq AC^1\subsetneq NC^2\subsetneq P\subsetneq NP\subsetneq PH\subsetneq PSPACE$$ in Turing/boolean circuit complexity model.

    What are analogous complexity classes in BSS model and Valiant's arithmetic complexity model?

    What are known relations and implications among complexity classes in these models?

    What impact would counterintuitive results such as $P=NP$ have on these hierarchies, and vice versa?

    by Turbo at August 01, 2015 08:04 AM

    Planet FreeBSD

    FreeBSD 10.2-RC2 Available

    The second RC build for the FreeBSD 10.2 release cycle is now available. ISO images for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures are available on most of our FreeBSD mirror sites.

    by FreeBSD News Flash at August 01, 2015 08:00 AM

    StackOverflow

    Scala error : unbound placeholder parameter and pattern matching condition

    I'm trying to combine pattern matching with a condition, but this code (from a Samza task):

    override def process(incomingMessageEnvelope: IncomingMessageEnvelope, messageCollector: MessageCollector, taskCoordinator: TaskCoordinator): Unit = {
        val event = (incomingMessageEnvelope getMessage).asInstanceOf[Map[String, Date]]
        val symbol = (event get "symbol").asInstanceOf[String]
        val eventDate = (event get "date").asInstanceOf[Date]
    
        (store get symbol) match {
          case x: java.util.Date if x.equals(eventDate) || x.after(eventDate) => _ 
          case _ => {
            this.store.put(symbol, eventDate)
          }
        }
      }
    

    returns this error:

    Error:(30, 38) unbound placeholder parameter
      case x if isGreaterOf(x, x) => _
                                     ^
    

    Have you any idea of the error?

    Thank you

    Regards

    Gianluca
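    A minimal sketch (mine, not the original task) of one way to avoid the error, using the same names as in the snippet above: the placeholder _ is not a valid expression on the right-hand side of a case, so return the Unit value () when nothing should happen.

    (store get symbol) match {
      case x: java.util.Date if x.equals(eventDate) || x.after(eventDate) =>
        () // the stored date is already up to date: do nothing
      case _ =>
        this.store.put(symbol, eventDate)
    }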

    by rucka at August 01, 2015 07:48 AM

    QuantOverflow

    Factoring risk premium in to Forward Rate calculation

    This is a self study question. I'm calculating a forward rate.

    Specifically, I have that in a country X, the Spot Rate is 5X/1US. I also have that the 1 year interest rate is 13% in country X and inflation is 12%. The US interest rate is 4% with 3% inflation.

    I'm computing the forward rate as:

    $F= S(1+i_d)/(1+i_f) = 5 *(1+.04)/(1+.13) = 4.602.$

    However I'm also told that X's market risk premium is 300 basis points above US treasuries. I'm unsure how to factor that in....

    by user1357015 at August 01, 2015 07:38 AM

    Lobsters

    Articulate Lisp

    Rebuilt the site over the past week or so; looking for feedback on it, especially the UI (and just generally showing it off too. :-) ).

    by pnathan at August 01, 2015 07:13 AM

    StackOverflow

    Tree collections in Scala

    I want to implement a tree in Scala. My particular tree uses Swing Split panes to give multiple views of a geographical map. Any pane within a split pane can itself be further divided to give an extra view. Am I right in saying that neither TreeMap nor TreeSet provides tree functionality? Please excuse me if I've misunderstood this. It strikes me there should be standard Tree collections and it is bad practice to keep reinventing the wheel. Are there any Tree implementations out there that might be the future Scala standard?

    All Trees have three types of elements: a Root, Nodes and Leaves. Leaves and Nodes must have a single reference to a parent. The Root and Nodes can have multiple references to child nodes and leaves. Leaves have zero children. Nodes and the Root cannot be deleted without their children being deleted. There are probably other rules/constraints that I've missed.

    This seems like enough common specification to justify a standard collection. I would also suggest that there should be a standard subclass collection for the case where the Root and Nodes can only have two children or a single leaf child, which is what I want in my particular case.
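    To make the shape concrete, here is a minimal sketch (mine, not a proposed standard) of an immutable rose tree along these lines; the root is simply the topmost Node, and parent references are omitted because immutable case-class trees usually navigate top-down (or via a zipper) rather than storing parent pointers.

    sealed trait Tree[+A]
    final case class Leaf[A](value: A) extends Tree[A]
    final case class Node[A](value: A, children: Vector[Tree[A]]) extends Tree[A]

    // a root with one inner node (holding two leaves) and one leaf child
    val root: Tree[String] =
      Node("root", Vector(
        Node("left", Vector(Leaf("l1"), Leaf("l2"))),
        Leaf("l3")))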

    by Rich Oliver at August 01, 2015 07:09 AM

    Lobsters

    Crack the Code - San José Semaphore

    I’m not sure what tag this goes under, so please change if necessary.

    by zg at August 01, 2015 07:02 AM

    CompsciOverflow

    How to exclude all points adjacent to a given point from the feasible region of IP

    Consider a basic integer program such as:

    $$\begin{align} \min_x & \quad c^Tx \\ \text{s.t.} & \quad Ax \leq b \\ &\quad x_i \in \{-100,\ldots,100\} \end{align} $$

    where $x \in \mathbb{Z}^n, A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R^m}$.

    Say that I have a feasible point $y \in \mathbb{Z}^n \cap [-100,100]^n$ with the property that no point adjacent to $y$ can be optimal. In other words, given $y$, all points in the set:

    $$\mathcal{A}(y) = \Big\{ z \in \mathbb{Z}^n ~\big|~ z_i = y_i \pm 1 \Big\}$$

    could be excluded from the feasible region of the IP.

    I am wondering if there is an elegant way to formulate constraints that will exclude all points adjacent to $y$ from the feasible region.

    by Berk U. at August 01, 2015 06:43 AM

    CompsciOverflow

    How to create a debugger for Windows processes [on hold]

    My goal is to create an application that will help me debug/profile my software (that was compiled in C#/.NET) in a GUI representation, with the following requirements:

    • When a function is called, display the name of the function, with the parameters, and the name of the class that it originates from.
    • When an exception is thrown, display information about the exception.
    • Be able to break in the debugger when a specified function gets called, and to step through the code that's displayed as well.
    • Display the value of global variables in existing objects.

    I would want to see a visual tree of all the functions that were called in the executing process, and if a node in the tree (that represents a function that was previously executed) threw an exception, I would want it to flash red. And when I click on a node, I would want to see details about its execution (how long it took, global variables used, etc.)

    Given the PDB files, is such a debugger possible to create? Any insight on how to do this will be appreciated!

    by Darius at August 01, 2015 05:21 AM

    Planet Clojure

    Planck Scripting

    When Planck was first created, the primary purpose was to explore how quickly a bootstrapped ClojureScript REPL could be started on the desktop. But it was also clear that low-latency startup would be great for scripting.




    With that in mind, some pragmatic aspects are being developed, specifically in support of scripting, outside of the normal use as a REPL.

    Command Line Arguments FTW

    All of the command line arguments that are supported by the regular Clojure REPL have been implemented. This lends great flexibility in how you start up Planck, having it load a ClojureScript file prior to anything else (--init) or evaluate a form directly inline (--eval).

    You can run a ClojureScript file by specifying its path as the first argument to Planck, and you can even have Planck run ClojureScript code that is read in via standard input (by passing in - where you would normally specify a path to a file).

    And, if your code is a bit more structured, you can call into a -main entry point as is covered in ClojureScript Mainia.

    Shebang

    As Jack Rusher kindly points out, it is possible to embed a shebang at the beginning of a ClojureScript file in order to use Planck as the interpreter. In other words, with hello.cljs containing

    #!/usr/local/bin/planck
    (println "Hello world!")
    

    if you make the file executable then, voilà:

    $ ./hello.cljs
    Hello world!
    

    This works because the reader supports the #! syntax as a comment specifically for this purpose. Woot!

    In fact, your shebang scripts can use the full power of ClojureScript's namespace and dependency management. The following riff on ClojureScript Mainia works just fine:

    #!/usr/local/bin/planck -s /path/to/src
    (ns calculate.core
      (:require [pythag.core :refer [dist]]))
    
    (println (dist 3 4))
    

    Of course, if you need to pass arguments to your script, --main is your friend.

    Stream Processing

    Planck has slurp and spit, but what if you want to process more data than can fit into memory, or you want to write scripts that can participate in the Unix pipes architecture?

    To that end, with the help of Ryan Fowler, Planck has been revised to support read-line:

    $ planck
    cljs.user=> (require 'planck.io)
    nil
    cljs.user=> (planck.io/read-line)
    abc
    "abc"
    cljs.user=>
    

    In the above, I typed abc and hit return after calling read-line, and it returned the string "abc".

    Given this primitive, let's see if we can write the equivalent of a classic “count distinct lines” pipeline:

    cat /etc/services | sort | uniq | wc -l
       13697
    

    First, let me create a little helper function in src/helper/core.cljs to generate a line sequence using read-line:

    (ns helper.core 
      (:require planck.io))
    (defn line-seq []
      (take-while identity 
        (repeatedly planck.io/read-line)))
    

    Then it is simple to write equivalents of sort, uniq, and wc -l in terms of the core sort, dedupe, and count functions:

    sort.cljs:

    #!/usr/local/bin/planck
    (ns sort.core 
      (:require helper.core))
    (run! println 
      (sort (helper.core/line-seq)))
    

    uniq.cljs:

    #!/usr/local/bin/planck
    (ns uniq.core 
      (:require helper.core))
    (run! println 
      (dedupe (helper.core/line-seq)))
    

    wc_l.cljs:

    #!/usr/local/bin/planck
    (ns wc-l.core 
      (:require helper.core))
    (println 
      (count (helper.core/line-seq)))
    

    Now you can construct the pipeline:

    cat /etc/services | ./sort.cljs | ./uniq.cljs | ./wc_l.cljs
    13697
    

    Of course, there is much more you can typically do with scripts. I'm thinking that porting more of Clojure's I/O facilities and a port of clojure.java.sh would be very useful.

    by Mike Fikes at August 01, 2015 04:00 AM

    StackOverflow

    Is it correct behaviour that `lazy val` acts like `def` in case of exception?

    I've noticed that a lazy val repeats its calculation (in case of an exception) several times:

    scala> lazy val aaa = {println("calc"); sys.error("aaaa")}
    aaa: Nothing = <lazy>
    
    scala> aaa
    calc
    java.lang.RuntimeException: aaaa
      at scala.sys.package$.error(package.scala:27)
      at .aaa$lzycompute(<console>:7)
      at .aaa(<console>:7)
      ... 33 elided
    
    scala> aaa
    calc
    java.lang.RuntimeException: aaaa
      at scala.sys.package$.error(package.scala:27)
      at .aaa$lzycompute(<console>:7)
      at .aaa(<console>:7)
      ... 33 elided
    

    Shouldn't it be like:

    scala> aaa
    calc
    java.lang.RuntimeException: Not Initialized! 
    caused by
    java.lang.RuntimeException: aaaa
    
    scala> aaa
    java.lang.RuntimeException: Not Initialized! 
    caused by
    java.lang.RuntimeException: aaaa  
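    For what it's worth, a minimal sketch (mine, not from the question) of one way to memoize the failure as well: wrap the computation in a Try, so the lazy val initializes successfully exactly once and later accesses re-throw the cached exception without recomputing.

    import scala.util.Try

    lazy val aaa: Try[Int] = Try { println("calc"); sys.error("aaaa") }

    // aaa.get  // first access: prints "calc", then throws RuntimeException("aaaa")
    // aaa.get  // later accesses re-throw the same cached exception; "calc" is not printed again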
    

    by dk14 at August 01, 2015 03:49 AM

    UnixOverflow

    Zeroing out FreeBSD swap space?

    I'd like to zero-fill the partitions/slices in my FreeBSD VMs in order to provide for better compression for archival. For those partitions/slices with file systems the process is no problem for me to figure out.

    I know I can turn off the swap space use via swapoff -a. However, I am uncertain as to whether the swap space has a special structure in FreeBSD and whether I need to reinitialize this structure (like in Linux with mkswap) after zero-filling the slice using dd.

    Can anybody shed light on how I can safely zero-fill the swap space and all partitions such that after shutdown I get the best compression possible?

    by 0xC0000022L at August 01, 2015 02:43 AM

    Network fail on FreeBSD: Ping to router fails, but router believes computer is connected

    I have a TP-Link TL-WN851N wireless adapter, which is based on an Atheros device. When I attempt to connect to my WPA2 wireless network, ifconfig wlan0 tells me that the connection is 'associated'. My computer also shows up as connected in the list on the router. However, I can not ping anything, even the router itself.

    On the same system, running Linux, there are no connection problems, and running Windows, there are occasional dropped connections, but no failure to reconnect. DHCP is noticeably slow on both of these however.

    After doing some debugging with people on the #freebsd channel on Freenode, I have found the following:

    • arp -an shows up no routes.
    • If I attempt to get an IP address from DHCP, it fails. On the FreeBSD system, it shows DHCPDISCOVER, then gives an error about no DHCPOFFER. According to my router's web interface, it believes it has given the computer an IP address after this.

    by Macha at August 01, 2015 02:21 AM

    CompsciOverflow

    Proof that P is closed against switching between polynomially related encodings

    Lemma 34.1
    Let $Q$ be an abstract decision problem on an instance set $I$, and let $e_1$ and $e_2$ be polynomially related encodings on $I$. Then, $e_1(Q)\in \mathrm{P}$ if and only if $e_2(Q)\in\mathrm{P}$.

    Proof   We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that $e_1(Q)$ can be solved in time $O(n^k)$ for some constant $k$. Further, suppose that for any problem instance $i$, the encoding $e_1(i)$ can be computed from the encoding $e_2(i)$ in time $O(n^c)$ for some constant $c$, where $n=|e_2(i)|$. To solve problem $e_2(Q)$, on input $e_2(i)$, we first compute $e_1(i)$ and then run the algorithm for $e_1(Q)$ on $e_1(i)$. How long does this take? Converting encodings takes time $O(n^c)$ and therefore $|e_1(i)|=O(n^c)$, since the output of a serial computer cannot be longer than its running time. Solving the problem on $e_1(i)$ takes time $O(|e_1(i)|^k) = O(n^{ck})$, which is polynomial since both $c$ and $k$ are constants. $\Box$

    I have some questions to the proof:

    1. Why does the proof only consider $e(i)$? As far as I know, $Q$ is a binary relation between the set of instances $I$ and the set of solutions, so when we write $e(Q)$, shouldn't it really be $e(i,s)$?

    "Let $Q$ be an abstract decision problem": what exactly is an instance of an NP-complete problem here?

    2. The proof only shows the forward direction.

    step 1. suppose that $e_1(Q)$ can be solved in polynomial time

    step 2. convert $e_2(i)$ into $e_1(i)$ to prove $e_2(Q)$ can be solved in polynomial time.

    If we also proved the backward direction, we could confirm that $e_1(Q)$ can be solved in polynomial time as well. Am I right?

    by Y.S. Chen at August 01, 2015 01:36 AM

    Lobsters

    Split UI/UX out of design tag

    Currently, it's being used not just for visual design and whatever tag it's compounded with (like hardware), but for user experience/interface work as well. I'd argue that UI/UX is different enough from the kind of design mentioned to make a separate tag useful and end the vagueness.

    by calvin at August 01, 2015 01:08 AM

    StackOverflow

    Scala sbt console - code changes not reflected in sbt console

    I use the Scala sbt console to test my methods (commands: sbt, then console). But code changes made in Eclipse or another external editor are not reflected in the sbt console.

    Every time, I have to quit the console (using Ctrl + D) and start it again with the console command to see the changes.

    Is anyone else facing this problem? Is there any way to reload the code from within the console?

    I am using Ubuntu 64-bit.

    by C.Karthik at August 01, 2015 01:04 AM

    CompsciOverflow

    I read that increasing the block size in a cache memory decreases the miss rate, because spatial locality is exploited.

    Please explain why the miss rate first decreases as the block size increases and then, beyond some point, increases again, following a bathtub curve.

    by sarita at August 01, 2015 12:24 AM

    StackOverflow

    Error loading large JSON files using Scala Play Framework 2

    I'm trying to use Apache Bench to load test a group of large (4MB each) JSON requests. When running with a large file and many concurrent requests I get the following error:

    Exception caught in RequestBodyHandler java.nio.channels.ClosedChannelException: null

    Here is my ab command:

    ab -p large.json -n 1000 -c 10 http://127.0.0.1:9000/json-tests
    

    If I run this with no concurrency and only 10 requests it works fine. Increasing the number of requests or the concurrency causes this error to occur over and over.

    My controller currently has no logic in it:

    def addJsonTest = Action {
      Ok("OK")
    }
    

    Here is the full error:

    [error] play - Exception caught in RequestBodyHandler
    java.nio.channels.ClosedChannelException: null
      at org.jboss.netty.channel.socket.nio.AbstractNioWorker.setInterestOps(AbstractNioWorker.java:506) [netty-3.9.3.Final.jar:na]
      at org.jboss.netty.channel.socket.nio.AbstractNioWorker$1.run(AbstractNioWorker.java:455) [netty-3.9.3.Final.jar:na]
      at org.jboss.netty.channel.socket.ChannelRunnableWrapper.run(ChannelRunnableWrapper.java:40) [netty-3.9.3.Final.jar:na]
      at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372) [netty-3.9.3.Final.jar:na]
      at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296) [netty-3.9.3.Final.jar:na]

    This is just using Play in development mode. Is there any setup or configuration that would let Play handle many large requests at once?
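
    For reference, a minimal sketch of a controller that explicitly raises the JSON body parser's limit. The 8 MB figure and the object name are assumptions of mine, and this is not necessarily related to the exception above:

    package controllers

    import play.api.mvc._

    object JsonTests extends Controller {

      // Hypothetical variant of the action above: accept JSON bodies up to ~8 MB
      // instead of the default limit for text-based parsers (100 KB unless configured).
      def addJsonTest = Action(parse.json(maxLength = 8 * 1024 * 1024)) { request =>
        Ok("OK")
      }
    }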

    Thanks!

    by Dalton001 at August 01, 2015 12:05 AM

    CompsciOverflow

    Is there a more efficient algorithm than backtracking/dynamic programming?

    Consider the following game:

    One day a castle is attacked at sunrise (by surprise) by n soldiers.

    Each soldier carries a cannon and a rifle.

    The castle has strength s.

    On the first day each soldier shoots his cannon at the castle, costing the castle n strength points (i.e. the castle ends the first day with s = s - n strength points). After all the soldiers have fired, the castle sends dpw defenders to battle them.

    In the ensuing days the castle and the soldiers battle it out following these rules:

    1. All the soldiers fire first. A soldier can fire his cannon at the castle or his rifle at one of the defenders (but not both, and each soldier can only shoot once). One shot at a defender kills him. One shot at the castle decreases its strength by 1.
    2. Then each of the d defenders shoots at one soldier (and only one) killing him.

    3. If the castle still has strength points (i.e. s>0) it sends a new batch of dpw defenders at this point. The total number of defenders in the next round will be d=d+dpw.

    4. Repeat 1 through 3 on each new day.

    5. If all soldiers are killed by the defenders, the castle wins.

    6. If there are zero defenders after the soldiers shoot and the castle strength is zero, the soldiers win.

    We would like to know the minimum number of rounds the soldiers need to win the game (or, if that is not possible, the minimum number of rounds the castle needs to win).

    We would also like to know the number of soldiers, defenders and castle strength at the beginning of each round for the minimum number of rounds solution.

    An easy way to solve this problem seems to be recursive backtracking/depth-first search or dynamic programming.
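
    As a baseline, here is a rough sketch of such a search, written as a breadth-first search over game states rather than explicit backtracking; the function and parameter names (minRounds, n0, s0) are mine, and the rule ordering follows the description above:

    def minRounds(n0: Int, s0: Int, dpw: Int): Option[Int] = {
      // Day 1 is forced: every soldier fires at the castle, then dpw defenders arrive.
      val s1 = math.max(0, s0 - n0)
      if (s1 == 0) return Some(1)                      // castle falls before any defender appears
      var frontier = Set((n0, dpw, s1))                // states: (soldiers, defenders, strength)
      val seen = collection.mutable.Set[(Int, Int, Int)](frontier.toSeq: _*)
      var round = 2
      while (frontier.nonEmpty) {
        val next = collection.mutable.Set[(Int, Int, Int)]()
        for ((n, d, s) <- frontier; k <- 0 to n) {     // k soldiers shoot the castle
          val s2 = math.max(0, s - k)
          val d2 = math.max(0, d - (n - k))            // the rest shoot defenders
          if (d2 == 0 && s2 == 0) return Some(round)   // soldiers win this round
          val n2 = n - d2                              // each surviving defender kills one soldier
          val d3 = if (s2 > 0) d2 + dpw else d2        // castle reinforces while it still stands
          if (n2 > 0 && !seen((n2, d3, s2))) {
            seen += ((n2, d3, s2))
            next += ((n2, d3, s2))
          }
        }
        frontier = next.toSet
        round += 1
      }
      None                                             // the soldiers can never win
    }

    Because the search is breadth-first over (soldiers, defenders, strength) states, the first winning state it reaches gives the minimum number of rounds.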

    The problem, however, seems simple enough that a better/faster solution is likely to exist (one that doesn't involve a large search, or maybe no search at all).

    Is it obvious what that better strategy/algorithm might be?

    by user35202 at August 01, 2015 12:03 AM

    HN Daily

    July 31, 2015

    StackOverflow

    case object simple use case with example

    I am a newbie to Scala. So far I have learned that an object in Scala is a singleton, and that if we declare a case object, default implementations (such as equals and hashCode) are also added.

    I am just wondering whether there is a simple practical example where a case object fits.


    Edit 1:

    @Aivean :-

    But the code below also works fine without declaring the objects as case objects:

    object ScalaPractice {
    
      def main(args: Array[String]): Unit = {
        val trade1 = Trade(EUR)
        trade1.currency match{
          case EUR | USD => println("trade possible in this currency: " + trade1.currency)
          case _ => println("trade not possible")
        }
      }
    }
    
    case class Trade(currency: Currency)
    
    sealed trait Currency { def name: String }
    object EUR extends Currency { val name = "EUR" }
    object USD extends Currency { val name = "USD" }
    

    Why is it required to add case?


    Edit 2:

    @dwickern

    As @dwickern said:

    Converting your object to a case object gives you:

    1. A slightly nicer default toString implementation
    2. The case object automatically implements Serializable. This can be important, for example when passing these objects as messages via akka remoting
    3. You also get default equals, hashCode and scala.Product implementations, just like a case class, but these don't really matter

    Is there any official documentation for this on the Scala website, Scaladoc, etc. (especially for the third point, the one in bold italics)?
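
    For what it's worth, a minimal sketch of the Akka use case from point 2, where parameterless protocol messages are naturally case objects (the Start/Stop/Worker names are made up here):

    import akka.actor.Actor

    sealed trait Command
    case object Start extends Command   // nice toString, Serializable, exhaustively matchable
    case object Stop extends Command

    class Worker extends Actor {
      def receive = {
        case Start => println("starting")
        case Stop  => println("stopping")
      }
    }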

    by rits at July 31, 2015 11:49 PM

    Caching function results using a hashmap scala

    Here is some code I wrote to solve Project Euler #14 in Scala.

    The output is shown below as well. My issue is that I expect better performance from the cached version, but the opposite is true. I think I did something wrong, since I don't think HashMap's overhead is enough to make it this slow.

    Any suggestions?

    object Problem14 {
      def main(args: Array[String]) {
        val collatzCacheMap = collection.mutable.HashMap[Long, Long]()

        // Collatz length with caching: only the length of the starting number is stored.
        def collatzLengthWithCache(num: Long): Long = {
          def collatzR(currNum: Long, solution: Long): Long = {
            val cacheVal = collatzCacheMap.get(currNum)
            if (cacheVal != None) {
              val answer = solution + cacheVal.get
              collatzCacheMap.put(num, answer)
              answer
            }
            else if (currNum == 1) { collatzCacheMap.put(num, solution + 1); solution + 1 }
            else if (currNum % 2 == 0) collatzR(currNum / 2, solution + 1)
            else collatzR(3 * currNum + 1, solution + 1)
          }

          collatzR(num, 0)
        }

        // Plain recursive Collatz length, no caching.
        def collatzLength(num: Long): Long = {
          def collatzR(currNum: Long, solution: Long): Long = {
            if (currNum == 1) solution + 1
            else if (currNum % 2 == 0) collatzR(currNum / 2, solution + 1)
            else collatzR(currNum * 3 + 1, solution + 1)
          }

          collatzR(num, 0)
        }

        var startTime = System.currentTimeMillis()

        //val answer = (1L to 1000000).reduceLeft((x,y) => if(collatzLengthWithCache(x) > collatzLengthWithCache(y)) x else y)
        val answer = (1L to 1000000).zip((1L to 1000000).map(collatzLengthWithCache)).reduceLeft((x, y) => if (x._2 > y._2) x else y)

        println(answer)
        println("Cached time: " + (System.currentTimeMillis() - startTime))

        collatzCacheMap.clear()

        startTime = System.currentTimeMillis()

        //val answer2 = (1L to 1000000).par.reduceLeft((x,y) => if(collatzLengthWithCache(x) > collatzLengthWithCache(y)) x else y)
        val answer2 = (1L to 1000000).par.zip((1L to 1000000).par.map(collatzLengthWithCache)).reduceLeft((x, y) => if (x._2 > y._2) x else y)

        println(answer2)
        println("Cached time parallel: " + (System.currentTimeMillis() - startTime))

        startTime = System.currentTimeMillis()

        //val answer3 = (1L to 1000000).reduceLeft((x,y) => if(collatzLength(x) > collatzLength(y)) x else y)
        val answer3 = (1L to 1000000).zip((1L to 1000000).map(collatzLength)).reduceLeft((x, y) => if (x._2 > y._2) x else y)

        println(answer3)
        println("No Cached time: " + (System.currentTimeMillis() - startTime))

        startTime = System.currentTimeMillis()

        //val answer4 = (1L to 1000000).par.reduceLeft((x,y) => if(collatzLength(x) > collatzLength(y)) x else y)
        val answer4 = (1L to 1000000).par.zip((1L to 1000000).par.map(collatzLength)).reduceLeft((x, y) => if (x._2 > y._2) x else y)

        println(answer4)
        println("No Cached time parallel: " + (System.currentTimeMillis() - startTime))
      }
    }

    Output:

    (837799,525)
    Cached time: 1070
    (837799,525)
    Cached time parallel: 954
    (837799,525)
    No Cached time: 450
    (837799,525)
    No Cached time parallel: 241
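
    For comparison, a rough sketch (not the code above) of a memoisation that caches the length of every number visited on the way down, not just the starting one, so later starting points can reuse partial chains; collatzLengthMemo is a name introduced here:

    val lengthCache = collection.mutable.HashMap[Long, Long](1L -> 1L)

    def collatzLengthMemo(num: Long): Long =
      lengthCache.get(num) match {
        case Some(len) => len
        case None =>
          val next = if (num % 2 == 0) num / 2 else 3 * num + 1
          val len = 1 + collatzLengthMemo(next)   // recursion depth stays small (longest chain is a few hundred steps)
          lengthCache.put(num, len)
          len
      }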

    by Brandon Ross Pollack at July 31, 2015 11:40 PM

    Is there a Hoogle analog in the Clojure world?

    Even though Clojure is dynamically typed, sometimes I want to Hoogle a Clojure function.

    I know about ClojureDocs and the Grimoire, but are there any similar tools that allow you to search "deeper" than just the var names?

    Such a tool would ideally include third party open source code as well, and maybe incorporate a full text search of docstrings.

    by yurrriq at July 31, 2015 11:37 PM

    In Clojure, is there a function like Haskell's on?

    In Haskell, we have Data.Function.on:

    on :: (b -> b -> c) -> (a -> b) -> a -> a -> c
    (.*.) `on` f = \x y -> f x .*. f y
    

    In Clojure, I want to be able to define, for example, an anagram predicate as follows:

    (defn anagram? [word other-word]
      (and (not= word other-word)
           ((on = sort) word other-word)))
    

    It's trivial to implement:

    (defn on [g f] (fn [x y] (g (f x) (f y))))
    

    But is there any built-in function that accomplishes the same goal? I can't seem to find one.

    by yurrriq at July 31, 2015 11:23 PM