Planet Primates

August 21, 2014

StackOverflow

Transform a map into a new map based on the pattern of keys in Scala

Given a map, I want to find the elements whose keys match the pattern C{NUMBER} -> STRING; this is my code to do that.

val pattern = "C([0-9]+)".r
// find the elements C[0-9]+ format
val plots = smap filter { x =>
  x._1 match {
    case pattern(r) => true
    case  _ => false
  }
}

I need to extract the elements with the pattern, but to create a new map of Map[Int, String]. For example:

Map[String, String]("C1" -> "a", "B" -> "b", "C2" -> "c") => Map[Int, String](1 -> "a", 2 -> "c")

How can it be implemented in Scala?
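
One way to do the filtering and the key conversion in a single pass is collect, which takes a partial function; a minimal sketch against the smap from above:

val pattern = "C([0-9]+)".r
// Keep only the keys matching the pattern, converting the captured digits to Int.
val result: Map[Int, String] = smap.collect {
  case (pattern(num), value) => num.toInt -> value
}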

by prosseek at August 21, 2014 06:57 PM

It seems that it is not possible to do POST with body >10K with scala play ws framework

It seems that it is not possible to do POST with body >10K

If I do:

WS.url(url).post("content more than 10K")

I get a clipped body, exactly 10K. How can I avoid this limitation?

by Volodymyr Nayda at August 21, 2014 06:57 PM

What does %-mark mean in Clojure?

I've tried to find the answer, but it's quite difficult to search for just the %-mark. I've seen the %-mark a few times, but I can't understand what its function is. It would be very nice if somebody could explain it.

by Szanne at August 21, 2014 06:56 PM

CompsciOverflow

How to conduct time complexity analysis for an implemented algorithm

Main task

In my bachelor degree's thesis I've developed an algorithm for recommender systems which uses personalized PageRank with some particular features as nodes. In the recommender systems field, there is the opportunity to understand how good the algorithm is using some accuracy metrics (MAE, RMSE, F-measure, etc.).

What I want to do

In my case I don't want to limit my analysis to the accuracy field; I want to extend my work with a proper discussion comparing the amount of time needed by each of the algorithms that I've implemented. During my degree I've never done something like that, so I don't know how to conduct it in a formal and proper way.

The personalized PageRank implementation that I use is already present in a library (the Java JUNG library), so I don't need to analyze it. Instead I want to compare the different ways of using this algorithm, which are the object of my thesis.

What I've thought

Actually I'm calculating (using some Java methods) the time needed by each algorithm to complete the specific task. After that, I will draw a plot describing how much you need to pay in time in order to get a specific accuracy level.
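
For the timing itself, a minimal wall-clock sketch on the JVM (written in Scala here; the recommender call is a hypothetical placeholder):

// Run a block and return its result together with the elapsed milliseconds.
def timed[A](body: => A): (A, Long) = {
  val t0 = System.nanoTime()
  val result = body
  (result, (System.nanoTime() - t0) / 1000000)
}

// e.g. val (recommendations, millis) = timed { recommender.recommendFor(user) }

Averaging over several runs (after a few JVM warm-up iterations) gives more stable numbers than a single measurement.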

Questions

  • Is there some good work from which I can take inspiration (papers, books, etc.)?
  • Can you give me some tips, or simply share your experience in the field, on how to conduct this kind of analysis in a proper way?

If there is something not clear, please leave me a comment and I'll improve my question. Thank you in advance.

by Alessandro Suglia at August 21, 2014 06:34 PM

StackOverflow

Why do you need to create these JSON read/write definitions in Play when in Java you didn't have to?

Please correct me if I am wrong, but when using Java with, say, Spring MVC, you didn't have to create these extra classes to map your Java class to JSON and JSON back to the class.

Why do you have to do this in Play with Scala? Is it something to do with Scala?

import play.api.libs.json._
import play.api.libs.functional.syntax._

case class Location(lat: Double, long: Double)

implicit val locationWrites: Writes[Location] = (
  (JsPath \ "lat").write[Double] and
  (JsPath \ "long").write[Double]
)(unlift(Location.unapply))

implicit val locationReads: Reads[Location] = (
  (JsPath \ "lat").read[Double] and
  (JsPath \ "long").read[Double]
)(Location.apply _)
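
The reason is that Play's Scala JSON API is built on typeclasses resolved at compile time, rather than the runtime reflection Jackson performs under Spring MVC, so the mapping has to exist as an implicit value. You rarely need to write it by hand, though; since Play 2.1 a macro can derive it from the case class (a sketch, assuming Play 2.1+):

import play.api.libs.json._

// Derives the Reads and Writes shown above from the shape of the case class.
implicit val locationFormat: Format[Location] = Json.format[Location]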

by public static at August 21, 2014 06:33 PM

QuantOverflow

How to classify stocks by their volatility?

I would like to hear other possible ways of classifying stocks by the volatility of their returns. Assuming that I want to characterize each stock as a Low, Medium or High Volatility stock, and assuming that I know the Annualized Volatility for each of the stocks in my sample, what ways are there to do such a classification? I can think of two:

  • Below, say, the 30th percentile (of the Annualized Volatilities) -> Low Volatility; between the 30th and 70th percentiles -> Medium Volatility; above the 70th percentile -> High Volatility (a sketch of this rule follows below)
  • Below mean - 2*Std.Dev (of the distribution of the Annualized Volatilities) -> Low Volatility; within mean +- 2*Std.Dev -> Medium Volatility; above mean + 2*Std.Dev -> High Volatility
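
A minimal sketch of the first (percentile) rule, assuming the annualized volatilities are already computed; the 30/70 cutoffs are the ones proposed above:

// Bucket each volatility by its position in the cross-sectional distribution.
def classify(vols: Seq[Double]): Seq[(Double, String)] = {
  val sorted = vols.sorted
  // Nearest-rank percentile approximation.
  def pct(p: Double) = sorted(((sorted.size - 1) * p).round.toInt)
  val (lo, hi) = (pct(0.30), pct(0.70))
  vols.map { v =>
    (v, if (v < lo) "Low" else if (v <= hi) "Medium" else "High")
  }
}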

Feel free to point out papers where I can find my answer.

by g_puffo at August 21, 2014 06:30 PM

/r/clojure

/r/compilers

Lexer rule for numerics

I was recently having a discussion with someone about the rules for how numerics should be defined in a toy language we're trying to design. The argument was thus: should a number terminate upon hitting an alphabetic character or not? E.g. should the string 15sdfg emit two tokens (15 and sdfg, letting the parser determine that it's a syntax error), or should the lexer throw an error because that is not a valid form of number? I used Rust and C as two languages to look at for inspiration.

For Rust you get error: expected ';' but found 'sdfg' for both "let a = 15sdfg;" and "let a = 15 sdfg;". This leads me to believe it generates two tokens (one for the number 15, and one for an identifier sdfg); the parser then determines that the sequence of tokens is wrong, but it can't give you different errors because both look the same to the parser.

In C "int a = 15sdfg;", will give the error: invalid suffix "sdfg" on integer constant. but "int a = 15 sdfg;" gives: expected ‘,’ or ‘;’ before ‘asdf’. As I said before, the parser wouldnt be able to tell the difference between the two cases if two tokens were generated when there is no space (e.g. 15sdfg), so I assume that it throws an error then and there, or generates a single token.

I want to say that, given that you can differentiate between the two types of error, the C way is better. Also, in general it just seems like more of a lexical issue in the first place, and doesn't really have anything to do with syntax. But my co-worker argues otherwise.

Anyone who knows one way or another about the above two examples, feel free to chime in, because I'm just guessing. But really I'm wondering what the better way to do it is, or if it is really all that big of a deal at all.
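
To make the trade-off concrete, here is a sketch of the "C-style" choice in Scala (the function and message are hypothetical): after consuming digits, a trailing identifier character is treated as a lexical error instead of starting a new token.

// Lex one integer starting at `start`; Left is a lexical error, Right is
// the token text plus the index just past it.
def lexNumber(src: String, start: Int): Either[String, (String, Int)] = {
  var i = start
  while (i < src.length && src(i).isDigit) i += 1
  if (i < src.length && (src(i).isLetter || src(i) == '_'))
    Left(s"invalid suffix on integer constant at offset $i")  // "15sdfg"
  else
    Right((src.substring(start, i), i))                       // "15 sdfg"
}

With the two-token choice, the same input would instead return Right(("15", 2)) and leave sdfg for the identifier rule.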

submitted by DanCardin
[link] [comment]

August 21, 2014 06:29 PM

StackOverflow

lein test (:numbers) example

From

lein help test

(deftest ^:integration network-heavy-test
   (is (= [1 2 3] (:numbers (network-operation)))))

What is

  (:numbers (network-operation))

doing here?

I added the network-operation function and understand network-heavy-test2 (and it passes as expected).

I assume that (:numbers ..) or :numbers needs to be added / defined / called somewhere?

network-heavy-test fails with

FAIL in (network-heavy-test1) (core_test.clj:23)
expected: (= [1 2 3] (:numbers (network-operation)))
actual: (not (= [1 2 3] nil))

....

(defn network-operation [] [1 2 3])

(deftest ^:integration network-heavy-test2
  (is (= [1 2 3] (network-operation))))

(deftest ^:integration network-heavy-test
   (is (= [1 2 3] (:numbers (network-operation)))))

by mstram at August 21, 2014 06:28 PM

CompsciOverflow

Difference between weak and strong AI

I'm trying to understand the difference between weak and strong AI. For example, let's say something passes the Turing test: would that demonstrate strong AI or weak AI?

I don't believe that this is standard terminology, but rather philosophical. It was introduced by John Searle in his "Chinese room argument". As I understand it, strong AI is about computers really being intelligent, in the sense of having a mind and thus a consciousness, while weak AI refers more to computers being able to simulate the behaviour of human intelligence on specific problems only (think chess, etc.).

Now, the question is: if we were able to pass the Turing test, would that be called weak or strong AI? Could it be strong AI, given that the Turing test is not limited to a certain area or a specific problem?

I came across it on wikipedia: http://en.wikipedia.org/wiki/Chinese_room

by Regnard at August 21, 2014 06:27 PM

Learning to program in C

I have about a month to become proficient at programming in C. I wonder if anybody could recommend some worksheets/exercises that get progressively harder so I can practice and learn.

Many thanks!

by Red at August 21, 2014 06:16 PM

StackOverflow

two clojure map refs as elements of each other

I have two maps in refs and want to assoc them to each other in one transaction.

My function looks like this:

(defn assoc-two
  [one two]
  (let [newone (assoc @one :two two)
        newtwo (assoc @two :one one)]
    (ref-set one newone)
    (ref-set two newtwo)))

Now I am calling assoc-two like this:

(dosync (assoc-two (ref {}) (ref {})))

I'm getting a StackOverflowError at this point.

I also tried this:

(defn alter-two
  [one two]
  (alter one assoc :two two)
  (alter two assoc :one one))

Is there a way to do this so that one has an entry referencing two and vice versa, while still staying in one transaction?

by Finn at August 21, 2014 06:12 PM

CompsciOverflow

NP-hardness of an optimization problem with real value

I have an optimization problem whose answer is a real value, not an integer as in problems such as vertex cover and set cover. Therefore, the input to the decision version of my problem is an instance together with a real value $r$.

I have been able to reduce an NP-complete problem to my own problem in polynomial time. I also showed that my problem is in NP.

Since the input to the decision problem is a real value, is this reduction valid and can I categorize my problem as NP-complete?

Edit: What if the precision of this real number is limited to $\frac{1}{\mathrm{polynomial}(n)}$, which means that the solution is a real number with polynomial precision?

by emab at August 21, 2014 06:07 PM

Fefe

What actually is all this military hardware, ...

What actually is all this military hardware that the police in Ferguson are running around with? Here an ex-Marine explains it. Money quote:
What we’re seeing here is a gaggle of cops wearing more elite killing gear than your average squad leader leading a foot patrol through the most hostile sands or hills of Afghanistan.
He also points out that in Iraq and Afghanistan they take care not to run around in full battle gear while their politicians blather in the media about peace and international understanding, precisely so as not to undermine that message. But at home, the cops run around like this in black neighborhoods.

Oh, and as a Marine he is particularly pissed off that there are photos in which the cops aim straight into the camera. In basic training it is drilled into Marines never to aim a firearm at anything unless they actually intend to shoot it.

August 21, 2014 06:02 PM

StackOverflow

SBT 0.12.4 global configuration under Windows

Where should I put global sbt 0.12.4 configuration files (like plugins/build.sbt etc) under Windows?

I'm trying C:\Users\username\.sbt\plugins, but it doesn't work.

by Tolsi at August 21, 2014 05:56 PM

Play Framework 2.1: Scala: how to get the whole base url (including protocol)?

Currently I am able to get the host from the request, which includes the domain and optional port. Unfortunately, it does not include the protocol (http vs https), so I cannot create absolute URLs to the site itself.

object Application extends Controller {
  def index = Action { request =>
    Ok(request.host + "/some/path") // Returns "localhost:9000/some/path"
  }
}

Is there any way to get the protocol from the request object?
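
For what it's worth, one common workaround when the app runs behind a proxy or load balancer is to read the X-Forwarded-Proto header, since the request itself usually arrives over plain HTTP; a sketch (the helper name is hypothetical, and the header is only present if your proxy sets it):

import play.api.mvc.RequestHeader

def absoluteUrl(request: RequestHeader, path: String): String = {
  // Fall back to http when no proxy header is present.
  val proto = request.headers.get("X-Forwarded-Proto").getOrElse("http")
  proto + "://" + request.host + path
}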

by Eneko Alonso at August 21, 2014 05:56 PM

In what scenario does self-type annotation provide behavior not possible with extends

I've tried to come up with a composition scenario in which self-type and extends behave differently, and so far have not found one. The basic example always points out that a self-type does not require the class/trait to be a subtype of the dependent type, but even in that scenario the behavior of self-type and extends seems to be identical.

trait Fooable { def X: String }
trait Bar1 { self: Fooable =>
  def Y = X + "-bar"
}
trait Bar2 extends Fooable {
  def Y = X + "-bar"
}
trait Foo extends Fooable {
  def X = "foo"
}
val b1 = new Bar1 with Foo
val b2 = new Bar2 with Foo

Is there a scenario where some form of composition or functionality of composed object is different when using one vs. the other?

Update 1: Thanks for the examples of things that are not possible without self-typing; I appreciate the information, but I am really looking for compositions where self and extends are both possible, yet not interchangeable.

Update 2: I suppose the particular question I have is why the various Cake Pattern examples generally talk about having to use a self-type instead of extends. I've yet to find a Cake Pattern scenario that doesn't work just as well with extends.
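
One observable difference, using the traits defined above, is the subtype relation itself: Bar2 is a Fooable, while Bar1 merely requires one to be mixed in. Self-types also allow mutual requirements, which inheritance cannot express; a sketch:

def describe(f: Fooable): String = f.X

def useBar2(b: Bar2): String = describe(b)    // compiles: Bar2 <: Fooable
// def useBar1(b: Bar1): String = describe(b) // does not compile: Bar1 is not a Fooable

// Mutually dependent traits are legal with self-types...
trait A { self: B => }
trait B { self: A => }
// ...but the equivalent written with extends would be an illegal inheritance cycle.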

by Arne Claassen at August 21, 2014 05:42 PM

AWS

DISA Authorizes AWS as First Commercial Cloud Approved for Sensitive Workloads

I am happy to be able to announce that AWS has achieved the first DoD Provisional Authorization under the DoD Cloud Security Model at security impact levels 3-5! AWS previously received a DoD Provisional Authorization for security impact levels 1-2. This new Authorization covers AWS GovCloud (US), and DoD customers can now move forward with their deployments of applications processing controlled and for-official-use-only unclassified information. As part of the Level 3-5 Authorization, our partners and DoD customers will be able to implement a wide range of DoD requirements necessary to protect their data at these levels, including AWS Direct Connect routing to the DoD's network, comprehensive computer network defense coverage, and Common Access Card (CAC) integration.

In March, AWS announced its compliance with security impact levels 1-2 for all AWS Regions in the US, demonstrating adherence to hundreds of controls. With this authorization, we have provided a means for DoD customers to deploy applications at levels 3-5. DoD customers with prospective Level 3-5 applications should contact the ECSB to begin the deployment process.

With today's announcement, DoD agencies can leverage the AWS Provisional Authorization for security impact levels 1-2 and AWS GovCloud (US)'s Provisional Authorization at levels 3-5 to evaluate AWS for their unclassified applications and workloads, achieve their own authorizations to use AWS, and transition DoD workloads into the AWS environment. DoD components and federal contractors can immediately request DoD compliance support by submitting a FedRAMP/DoD Compliance Support Request and begin moving through the authorization process to achieve a DoD ATO for Levels 1-5 with AWS.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at August 21, 2014 05:41 PM

StackOverflow

Jackson / JSON Custom Serializers for polymorphic classes in collections

I'm running into a problem using Jackson to serialize a list of polymorphic objects. Using this link as a starting point, I can recreate the issue. http://programmerbruce.blogspot.com/2011/05/deserialize-json-with-jackson-into.html

Classes:

@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS,
  include = JsonTypeInfo.As.PROPERTY,
  property = "type")
trait IAnimal
{
  def name : String

}

abstract class AbstractAnimal extends IAnimal
{
  @BeanProperty
  var name : String = _
}

class Cat extends AbstractAnimal
{
  @BeanProperty
  var favoriteToy : String = _
}

class Dog extends AbstractAnimal
{
  @BeanProperty
  var breed : String = _
  @BeanProperty
  var leashColor : String = _
}

My example code looks like this:

val zoo = new PolyZoo()
zoo.animals = List( Cat("fluffy", "catnip"), Dog("spike", "mutt", "red"))
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)
val json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(zoo)
println(json)

and yields this result:

{
  "animals" : [ {
    "type" : "com.example.Cat",
    "favoriteToy" : "catnip",
    "name" : "fluffy"
  }, {
    "type" : "com.example.Dog",
    "breed" : "mutt",
    "leashColor" : "red",
    "name" : "spike"
  } ]
}

Now, for my real application, I need to define custom serializers for the different types (in this case, Cat and Dog). So, I wrote a CatSerializer. The implementation below will just throw an exception when called, but it works for this illustration:

class CatSerializer extends StdSerializer[Cat](classOf[Cat])
{
  // should throw a NotImplementedException when called, just for test
  def serialize(value: Cat, jgen: JsonGenerator, provider: SerializerProvider) = ???
}

I modified my main program to register the new serializer

val zoo = new PolyZoo()
zoo.animals = List( Cat("fluffy", "catnip"), Dog("spike", "mutt", "red"))
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

// install the CatSerialier
val module = new SimpleModule("CustomStuff")
module.addSerializer(classOf[Cat], new CatSerializer())
mapper.registerModule(module)

// continue on as before
val json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(zoo)
println(json)

Unfortunately, I now get a JsonMappingException:

com.fasterxml.jackson.databind.JsonMappingException: Type id handling not implemented for type com.example.Cat (through reference chain: com.example.PolyZoo["animals"]->scala.collection.convert.IterableWrapper[0])
    at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:210)
    at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:189)
    at com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:213)
    at com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:126)
    at com.fasterxml.jackson.module.scala.ser.IterableSerializer.serializeContents(IterableSerializerModule.scala:30)
    at com.fasterxml.jackson.module.scala.ser.IterableSerializer.serializeContents(IterableSerializerModule.scala:16)
    at com.fasterxml.jackson.databind.ser.std.AsArraySerializerBase.serialize(AsArraySerializerBase.java:183)
    at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:505)
    at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:639)
    at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:152)
    at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:114)
    at com.fasterxml.jackson.databind.ObjectWriter._configAndWriteValue(ObjectWriter.java:800)
    at com.fasterxml.jackson.databind.ObjectWriter.writeValueAsString(ObjectWriter.java:676)

So, it appears that adding a CatSerializer into the mix confused the Jackson type-id handling, so much so that it never even got to my NotImplementedException. Any suggestions?
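
The exception message hints at one direction (a sketch, not verified against this exact setup, assuming the Jackson 2.x API): when a value sits in a polymorphic position, Jackson calls serializeWithType, whose default implementation raises exactly this "Type id handling not implemented" error, so a custom serializer for a @JsonTypeInfo type needs to override it and delegate the type-id bookkeeping to the supplied TypeSerializer:

import com.fasterxml.jackson.core.JsonGenerator
import com.fasterxml.jackson.databind.SerializerProvider
import com.fasterxml.jackson.databind.jsontype.TypeSerializer
import com.fasterxml.jackson.databind.ser.std.StdSerializer

class CatSerializer extends StdSerializer[Cat](classOf[Cat]) {

  override def serialize(value: Cat, jgen: JsonGenerator,
                         provider: SerializerProvider): Unit = {
    jgen.writeStartObject()
    writeFields(value, jgen)
    jgen.writeEndObject()
  }

  // Called instead of serialize when type info is requested.
  override def serializeWithType(value: Cat, jgen: JsonGenerator,
                                 provider: SerializerProvider,
                                 typeSer: TypeSerializer): Unit = {
    typeSer.writeTypePrefixForObject(value, jgen)  // writes the "type" property
    writeFields(value, jgen)
    typeSer.writeTypeSuffixForObject(value, jgen)
  }

  private def writeFields(value: Cat, jgen: JsonGenerator): Unit = {
    jgen.writeStringField("name", value.name)
    jgen.writeStringField("favoriteToy", value.favoriteToy)
  }
}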

by fbl at August 21, 2014 05:36 PM

Ambiguous implicit values

I thought I understood Scala implicits until I recently faced a strange problem.

In my application I have several domain classes

case class Foo(baz: String)
case class Bar(baz: String)

And a class that is able to construct a domain object from a string. It could be subclassed to do real deserialization; that doesn't matter here.

class Reads[A] {
  def read(s: String): A = throw new Exception("not implemented")
}

Next, there are implicit deserializers

implicit val fooReads = new Reads[Foo]
implicit val barReads = new Reads[Bar]

And a helper to convert strings to one of the domain classes:

def convert[A](s: String)(implicit reads: Reads[A]): A = reads.read(s)

Unfortunately, when I try to use it

def f(s: String): Foo = convert(s)

I get compiler errors like

error: ambiguous implicit values:
 both value fooReads of type => Reads[Foo]
 and value barReads of type => Reads[Bar]
 match expected type Reads[A]
       def f(s: String): Foo = convert(s)
                                      ^

To me the code seems simple and right. Reads[Foo] and Reads[Bar] are completely different types; what is ambiguous about it?

The real code is much more complicated and uses play.api.libs.json but this simplified version is sufficient to reproduce the error.
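
For what it's worth, pinning the type parameter at the call site sidesteps the ambiguity, since the implicit search then looks for a Reads[Foo] specifically rather than an unconstrained Reads[A]; a sketch:

// Explicit type argument: only fooReads can satisfy Reads[Foo].
def f(s: String): Foo = convert[Foo](s)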

by lambdas at August 21, 2014 05:36 PM

CompsciOverflow

Time Complexity of Apriori and Fp Growth

What is the time complexity of Apriori and FP-Growth? I have been searching the internet for a week for their big-O complexity, but I am unable to find any proper reference. Kindly help me with this, please.

by greatmajestics at August 21, 2014 05:26 PM

QuantOverflow

Methods for "prompt month equivalent" exposure in commodities forwards/futures markets

It is common in commodities markets to hold many positions, both long and short, across a range of contract months, beginning in the prompt month (today, September) and extending five or more years out. In general the prompt month exhibits the most volatility, and far-out months exhibit the least (among the same exact products).

This makes the total notional 'position' for a product quite misleading. For example, if today I purchase 1 September 2014 contract and sell 1 September 2019 contract, my net notional position is 0, suggesting the portfolio has no risk. In reality, if the prompt month appreciates 10%, the 2019 contract will appreciate maybe 1%, and you will have realized a significant gain.

The standard VaR calculation process captures and handles this well, but I want to be able to measure my true spot-price exposure for a product class, for the purpose of separating position limits from VaR limits.

So, the question,

What is the industry standard model for condensing a strip of forward contracts into a single exposure number "FME", such that it is reasonable to approximate PNL by taking FME * (spot price change)? Presume you are given a corr/cov matrix.

The most relevant article I can find is linked below. My calculations from that paper seem somewhat accurate, and subjectively it covers the subject quite well, but it has no citations and I doubt my peers would accept it as a source for policy.

http://kiodex.com/our_library/FME_Whitepaper.pdf
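
One plausible construction (a sketch only, not necessarily the industry standard the question asks about) scales each tenor's position by its beta to the prompt contract, computed from the given covariance matrix:

// positions(i): signed notional in tenor i; cov(i)(j): covariance of returns;
// index 0 is the prompt month.
def fme(positions: Array[Double], cov: Array[Array[Double]]): Double = {
  val promptVar = cov(0)(0)
  positions.zipWithIndex.map { case (pos, i) =>
    pos * cov(i)(0) / promptVar  // beta of tenor i to the prompt
  }.sum
}

Under this definition, the portfolio PNL for a small prompt move r is approximately FME * r, which matches the 2014-vs-2019 example above.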

by evan_irl at August 21, 2014 05:26 PM

StackOverflow

How to disable Gradle daemon in IntelliJ Idea?

I need to disable the Gradle daemon in IntelliJ IDEA, because somehow the Scala plugin is not working with the daemon (the compilation fails with a NullPointerException). I have tried to edit my IntelliJ Gradle build configurations to include the JVM system parameter "-Dorg.gradle.daemon=false": (http://i.stack.imgur.com/x4P98.png)

Also I've tried to use the "--no-daemon" flag in the same places (Script parameters and VM options), and I've tried to specify these options in the "Preferences -> Gradle" menu of IntelliJ. None of these attempts gave any result; the daemon continues to start, so I have to kill it before running/compiling a second time. (http://i.stack.imgur.com/syBgC.png)

How can I disable Gradle daemon usage in IntelliJ IDEA?

by user3895471 at August 21, 2014 05:23 PM

QuantOverflow

Modelling currency exchange rates timeseries data across re-denomination dates

I am working with data for an exotic currency, that has been re-denominated a couple of times during the twenty years of data that I have.

What is the best way of 'normalising' the data so that I can work with it, given that it contains two 'switch over' dates on which the currency was re-denominated?

by Homunculus Reticulli at August 21, 2014 05:12 PM

CompsciOverflow

Optimal displacement on a board [closed]

An $N \times M$ matrix is given where each element of the matrix is an integer greater than or equal to zero. One can move on the matrix in steps. Each step is a succession of $p$ jumps, either rightwards or downwards, i.e. either from $A[i][j]$ to $A[i+1][j]$ or from $A[i][j]$ to $A[i][j+1]$, such that $p \leq A[i_0][j_0]$, where $i_0$ and $j_0$ are the coordinates at the beginning of the step.

The problem is to give an algorithm that finds the minimum number of jumps needed to reach cell $(N,M)$ from $(0,0)$.


$N$ is number of rows and $M$ is number of columns.

$\forall i,j \; A[i][j]\geq 0$

Can someone help me with this? Thanks.

by user157920 at August 21, 2014 05:08 PM

TheoryOverflow

YouTube is broken! [Read first]

OK, so before you mark this as off-topic: this has something to do with my computer, but I can't seem to find the right topic for it. IT IS NOT a web app problem!

I can not seem to connect to YouTube at all. If I connect directly to YouTube (http://youtube.com) I get this:

Kloxo Control Panel

And if I try to connect via video directly (http://youtube.com/watch?v=), I get this:

404 Not Found. Here's how this isn't a web issue: I believe I have malware on my computer, and the question here is how do I get rid of it, or what possible issue could this be? I can verify this is not a website or wifi issue; I've tried to connect on another computer on my wifi, and it worked. It is a local machine issue. I've tried my best to get rid of the malware and have also tried disabling possibly malicious extensions. All attempts came out with the same result: no help.

by Agentleader1 at August 21, 2014 05:02 PM

StackOverflow

Are Databases and Functional Programming at odds?

I've been a web developer for some time now, and have recently started learning some functional programming. Like others, I've had significant trouble applying many of these concepts to my professional work. For me, the primary reason is that I see a conflict between FP's goal of remaining stateless and the fact that most web development work I've done has been heavily tied to databases, which are very data-centric.

One thing that made me a much more productive developer on the OOP side of things was the discovery of object-relational mappers like MyGeneration d00dads for .NET, Class::DBI for Perl, ActiveRecord for Ruby, etc. This allowed me to stay away from writing insert and select statements all day, and to focus on working with the data easily as objects. Of course, I could still write SQL queries when their power was needed, but otherwise it was abstracted nicely behind the scenes.

Now, turning to functional programming, it seems like many of the FP web frameworks, such as Links, require writing a lot of boilerplate SQL code, as in this example. Weblocks seems a little better, but it seems to use a kind of OOP model for working with data, and still requires code to be manually written for each table in your database, as in this example. I suppose you could use some code generation to write these mapping functions, but that seems decidedly un-Lisp-like.

(Note I have not looked at Weblocks or Links extremely closely, I may just be misunderstanding how they are used).

So the question is, for the database access portions (which I believe are pretty large) of web application, or other development requiring interface with a sql database we seem to be forced down one of the following paths:

  1. Don't Use Functional Programming
  2. Access Data in an annoying, un-abstracted way that involves manually writing a lot of SQL or SQL-like code ala Links
  3. Force our functional Language into a pseudo-OOP paradigm, thus removing some of the elegance and stability of true functional programming.

Clearly, none of these options seems ideal. Has anyone found a way to circumvent these issues? Is there really even an issue here?

Note: I personally am most familiar with Lisp on the FP front, so if you want to give any examples and know multiple FP languages, Lisp would probably be the preferred language of choice.

PS: For Issues specific to other aspects of web development see this question.

by Tristan Havelick at August 21, 2014 04:13 PM

StackOverflow

Type mismatch with Array of Array in Scala

I'm trying to build an array of arrays to pass as an argument to a method. The values of the inner arrays are any kind of value type (AnyVal), such as Int or Double.

The method's signature is as follows:

def plot[T <: AnyVal](config:Map[String, String], data:Array[Array[T]]): Unit = {

This is the code:

val array1 = (1 to 10).toArray
val array2 = ArrayBuffer[Int]()
array1.foreach { i =>
  array2 += (getSize(summary, i))
}
val array3 = new Array[Int](summary.getSize())

val arrays = ArrayBuffer[Array[AnyVal]](array1, array2.toArray, array3) // <-- ERROR1
Gnuplotter.plot(smap, arrays.toArray) // <-- ERROR2

However, I have two errors:

(The two compiler errors were posted as screenshots, omitted here; both are type mismatches.)

What might be wrong?
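
A likely culprit (an assumption, since the screenshots are unavailable): Scala arrays are invariant, so an Array[Int] is not an Array[AnyVal], and the buffer cannot accept array1, array2.toArray or array3 at that element type; the second error then follows from the first. One way out, assuming plot can work with Doubles, is to widen the element values up front:

val arrays: Array[Array[Double]] = Array(
  array1.map(_.toDouble),
  array2.toArray.map(_.toDouble),
  array3.map(_.toDouble)
)
Gnuplotter.plot(smap, arrays)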

by prosseek at August 21, 2014 03:51 PM

StackOverflow

What can cause the Squeryl table object to be not found?

I am encountering a compile-time error while attempting to get Squeryl example code running. The following code is based on the "My Adventures in Coding" blog post about connecting to SQL Server using Squeryl.

import org.squeryl.adapters.MSSQLServer
import org.squeryl.{ SessionFactory, Session}
import com.company.model.Consumer

class SandBox {
  def tester() = {
    val databaseConnectionUrl = "jdbc:jtds:sqlserver://myservername;DatabaseName=mydatabasename"
    val databaseUsername = "userName"
    val databasePassword = "password"

    Class.forName("net.sourceforge.jtds.jdbc.Driver")

    SessionFactory.concreteFactory = Some(()=>
      Session.create(
        java.sql.DriverManager.getConnection(databaseConnectionUrl, databaseUsername, databasePassword),
        new MSSQLServer))

    val consumers = table[Consumer]("Consumer")
  }
}

I believe I have the build.sbt file configured correctly to import the Squeryl and jTDS libraries. When running SBT after adding the dependencies, it appeared to download the libraries needed.

libraryDependencies ++= List (
  "org.squeryl" %% "squeryl" % "0.9.5-6",
  "net.sourceforge.jtds" % "jtds" % "1.2.4",
  Company.teamcityDepend("company-services-libs"),
  Company.teamcityDepend("ssolibrary")
) ::: Company.teamcityConfDepend("company-services-common", "test,gatling")

I am certain that at least some of the dependencies were successfully installed. I base this on the fact that the SessionFactory code block compiles successfully. Only the line that attempts to set up a mapping from the Consumer class to the Consumer SQL Server table fails:

val consumers = table[Consumer]("Consumer")

This line causes a compile-time error to be thrown. The compiler is not able to find the table object.

[info] Compiling 8 Scala sources to /Users/pbeacom/Company/integration/target/scala-2.10/classes...
[error] /Users/pbeacom/Company/integration/src/main/scala/com/company/code/SandBox.scala:25: not found: value table 
[error]     val consumers = table[Consumer]("Consumer")

The version of Scala in use is 2.10, and if the table line is commented out, the code compiles successfully. Use of the table method to accomplish data-model mappings is nearly ubiquitous in the Squeryl examples I've been researching online, and no one else seems to have encountered a similar problem.
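
For what it's worth, in Squeryl's published examples table(...) is a member of org.squeryl.Schema, so it is only in scope inside a Schema definition; a sketch of that shape (the object name is hypothetical, Consumer is the import from above):

import org.squeryl.Schema

object Library extends Schema {
  // table(...) resolves here because Library extends Schema.
  val consumers = table[Consumer]("Consumer")
}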

by Peter Beacom at August 21, 2014 03:45 PM

/r/compsci

What is the difference between getting an Open Source license, like MIT, and doing nothing?

I was reading up on the different open source licenses available on opensource.org and I just can't seem to answer this question. I don't see the benefit of getting a license, other than your program looking more official with all that license talk at the top of your files. Another guess I had was that people would be more inclined to work with your code if they saw it was licensed, knowing their work wouldn't go to waste because they couldn't distribute or sell it.

Can someone please help me understand this whole thing?

submitted by wastapunk
[link] [6 comments]

August 21, 2014 03:38 PM

Lobsters

How to Keep Your Neighbours in Order [Conor McBride]

In this paper McBride uses dependent types to define a set of data types whose elements are known (at compile time) to be in order. Generic programs for insertion and flattening are put together to build algorithms like quicksort and deletion from balanced 2-3 trees in a correct-by-construction way (without having to work with proofs).

It is exhilarating being drawn to one’s code by the strong currents of a good design. But that happens only in the last iteration: we are just as efficiently dashed against the rocks by a bad design, and the best tool to support recovery remains, literally, the drawing board.

Comments

by ika at August 21, 2014 03:25 PM

StackOverflow

GridFS resizing image on the fly in Scala/Java/Play Framework

Here is my code for serving an image (no resizing). I want to resize the image before serving it, so I tried putting the size in the URL, like this: /img/24x24/filename.jpg. I tried many methods before asking and none of them worked. Has anyone implemented this before? Please help. Thanks.

val gridFS = new GridFS(db, "pics")
val file = gridFS.find(BSONDocument("filename" -> filename))

serve(gridFS, file).map(_.withHeaders(CONTENT_DISPOSITION -> "inline;", CONTENT_TYPE -> "image/jpeg")) recover {
      case e => NotFound(
        ...
      )
}

This is another method I used; it also works.

val t = file.headOption.filter(_.isDefined).map(_.get).map { file =>
  val enumerateContent = gridFS.enumerate(file)
  SimpleResult(
    header = ResponseHeader(200),
    body = enumerateContent
  ).withHeaders(CONTENT_DISPOSITION -> "inline;", CONTENT_TYPE -> "image/jpeg")
}
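
A minimal server-side resize sketch using plain javax.imageio, assuming the file bytes are already in memory (resizeJpeg is a hypothetical helper; you would run it over the GridFS bytes before building the response):

import java.awt.Image
import java.awt.image.BufferedImage
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import javax.imageio.ImageIO

// Scale raw JPEG bytes to the requested width/height.
def resizeJpeg(bytes: Array[Byte], w: Int, h: Int): Array[Byte] = {
  val src = ImageIO.read(new ByteArrayInputStream(bytes))
  val dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB)
  val g = dst.createGraphics()
  g.drawImage(src.getScaledInstance(w, h, Image.SCALE_SMOOTH), 0, 0, null)
  g.dispose()
  val out = new ByteArrayOutputStream()
  ImageIO.write(dst, "jpg", out)
  out.toByteArray
}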

by garpod at August 21, 2014 03:02 PM

StackOverflow

Specifying logarithmic axis values (labels and ticks) in JFreeChart

I am struggling with LogAxis to get sensible frequency labels. E.g., using an equal-tempered scale with A4 = 440 Hz, as in this table, I want labels to appear for example at

(30 to 120 by 2).map(midicps).foreach(println)

46.249302
51.91309
58.270466
65.406395
73.4162
82.40688
92.498604
103.82618
116.54095
130.81279
146.83238
164.81378
184.99721
207.65234
233.08188
261.62558
293.66476
329.62756
369.99442
415.3047
466.16376
523.25116
587.3295
...
4698.6367
5274.0405
5919.9106
6644.8755
7458.621
8372.019

Hertz, where

def midicps(d: Double): Double = 440 * math.pow(2, (d - 69) / 12)

In other words, I have twelve divisions per octave (an octave being a doubling of the value), with the fixed reference frequency being 440.0. I happen to have a lower bound of 32.7 and an upper bound of 16700.0 for the plot.

My first attempt:

import org.jfree.chart._
val pl = new plot.XYPlot
val yaxis = new axis.LogAxis
yaxis.setLowerBound(32.7)
yaxis.setUpperBound(16.7e3)
yaxis.setBase(math.pow(2.0, 1.0/12))
yaxis.setMinorTickMarksVisible(true)
yaxis.setStandardTickUnits(axis.NumberAxis.createStandardTickUnits())
pl.setRangeAxis(yaxis)
val ch = new JFreeChart(pl)
val pn = new ChartPanel(ch)
new javax.swing.JFrame {
  getContentPane.add(pn)
  pack()
  setVisible(true)
}

This gives me labels which do not fall on any of the above raster points:

(screenshot of the resulting chart omitted)

Any ideas how to enforce my raster?

by 0__ at August 21, 2014 02:56 PM

Daniel Lemire

Expert performance and training: what we really know

Movies such as Good Will Hunting tell beautiful stories about young people able to instantly master difficult topics, without any effort on their part.

That performance is unrelated to effort is an appealing belief. Whether you perform well or poorly is not your fault. Some go further and conclude that success and skill levels are primarily about genetics. That is an even more convenient observation: the quality of your parenting or education becomes irrelevant. If kids raised in the ghetto do poorly, it is because they inherited the genes of their parents! I personally believe that poor kids tend to do poorly in school primarily because they work less at it (e.g., kids from the ghetto will tend to pass on their homework assignments for various reasons).

A recent study by Macnamara et al. suggests that practice explained less than 1% of the variance in performance within professions, and generally less than 25% of the variance in other activities.

It is one of several similar studies attempting to debunk the claim popularized by Gladwell that expert performance requires 10,000 hours of deliberate training.

Let us get one source of objection out of the way: merely practicing is insufficient to reach world-expert levels of performance. You have to practice the right way, you have to put in the mental effort, and you have to have the basic dispositions. (I can never be a star basketball player.) You also need to live in the right context. Meeting the right people at the right time can have a determining effect on your performance.

But it is easy to underestimate the value of hard work and motivation. We all know that Kenyans and Ethiopians make superb long-distance runners, right? It is all about genetics, right? Actually, though their body types predispose them to good performance, factors like high motivation and much training in the right conditions are likely much more important than any one specific gene.

Time and time again, I have heard people claim that mathematics and abstract thinking are just beyond them. I also believe these people when they point out that they have put in many hours of effort… However, in my experience, most students do not know how to study properly. You should never, ever cram the night before an exam. You should not do your homework in one pass: you should do it once, set it aside, and then revise it. You absolutely need to work hard at learning the material, forget it for a time, and then work at it again. That is how you retain the material in the long run. You also need to have multiple references, train repeatedly on many problems, and so on.

I believe that poor study habits probably explain much of the cultural differences in school results. Some cultures seem to do a lot more to show their kids how to be intellectually efficient.

I also believe that most people overestimate the amount of time and effort they put on skills they do not yet master. For example, whenever I face someone who failed to master the basics of programming, they are typically at a loss to describe the work they did before giving up. Have they been practicing programming problems every few days for months? Or did they just try for a few weeks before giving up? The latter appears much more likely as they are not able to document how they spent hundreds of hours. Where is all the software that they wrote?

Luck is certainly required to reach the highest spheres, but without practice and hard work, top level performance is unlikely. Some simple observations should convince you:

  • There are few people who make world-class contributions in several areas at once… there are few polymaths. It is virtually impossible for someone to become a world expert in several distinct activities. This indicates that much effort is required for world-class performance in any one activity. This is in contrast with a movie like Good Will Hunting, where the main character appears to have effortlessly acquired top-level skills in history, economics and mathematics.

    A superb scientist like von Neumann was able to make lasting contributions in several fields, but this tells us more about his strategies than the breadth of his knowledge:

    Von Neumann was not satisfied with seeing things quickly and clearly; he also worked very hard. His wife said “he had always done his writing at home during the night or at dawn. His capacity for work was practically unlimited.” In addition to his work at home, he worked hard at his office. He arrived early, he stayed late, and he never wasted any time. (…) He wasn’t afraid of anything. He knew a lot of mathematics, but there were also gaps in his knowledge, most notably number theory and algebraic topology. Once when he saw some of us at a blackboard staring at a rectangle that had arrows marked on each of its sides, he wanted to know what that was. “Oh just the torus, you know – the usual identification convention.” No, he didn’t know. The subject is elementary, but some of it just never crossed his path, and even though most graduate students knew about it, he didn’t. (Halmos, 1973)

  • In the arts and sciences, world experts are consistently in their 30s and 40s, or older. This suggests that about 10 years of hard work are needed to reach world-expert levels of performance. There are certainly exceptions: Einstein and Galois were in their 20s when they did their best work. However, such exceptions are very uncommon. And even Einstein, despite being probably the smartest scientist of his century, only got his PhD at 26. We know little about Galois except that he was passionate, even obsessive, about mathematics as a teenager and that he was homeschooled.
  • Even the very best improve their skills only gradually. Musicians or athletes do not suddenly become measurably better from one performance to the other. We see them improve over months. This suggests that they need to train and practice.

    When you look into the past of people who burst onto the scene, you often find that they have been training for years. In interviews with young mathematical prodigies, you typically find that they have been teaching themselves mathematics with a passion for many years.

A common counterpoint is to cite studies on identical twins showing that twins raised apart exhibit striking similarities in terms of skills. If you are doing well in school, and you have an identical twin raised apart, he is probably doing well in school. This would tend to show that skills are genetically determined. There are two key weaknesses to this point. Firstly, separated twins tend to live in similar (solidly middle class) homes. Is it any wonder that people who are genetically identical and live in similar environment end up with similar non-extraordinary abilities? Secondly, we have virtually no reported case of twins raised apart reaching world-class levels. It would be fascinating if twins, raised apart, simultaneously and independently reached Einstein-level abilities… Unfortunately, we have no such evidence.

As far as we know, if you are a world-class surgeon or programmer, you have had to work hard for many years.

by Daniel Lemire at August 21, 2014 02:54 PM

/r/emacs

Using id-utils/mkid/gid with C++

I've been a satisfied id-utils user for the past year. I was recently confronted with C++ code and appear to be running into problems. For example:

1st file: class.h
--------------------
...
class class_A {
  func_A();
  ...
};
...

2nd file: lib.cpp
--------------------
...
class_A::func_A() {
  ...
}
...

3rd file: main.cpp
---------------------
main() {
  ...
  class_A::func_A();
  ...
}

When I run mkid/gid on the function func_A, the only hit I get is the member function declaration in the 1st file. Am I missing something here? Is there a trick to get gid to work better with C++ code?

I checked the man pages and the email archives, but wasn't able to come up with anything helpful. When I run gid on func_A, I'd like to get hits for all 3 file occurrences above.

submitted by sbay
[link] [1 comment]

August 21, 2014 02:50 PM

StackOverflow

How to simplify the scala code which continually reads next page and returns a `\/` type

I'm writing some Scala code, found it a little bit complex, and am trying to make it simpler.

There is a function that reads the content from a URL, which is JSON like this:

{
    "items": ["aaa", "bbb", "ccc"],
    "next": "http://some.com/page2"
}

Then parse it to a page instance:

def loadPage(url: String): Try[Page] = ??? // implementation omitted

case class Item(content:String)
case class Page(items: List[Item], next: Option[String])

You can see the page contains some items and also a next URL. If it's the last page, there is no next field provided.

Now I want to write a function which takes a starting URL, reads that page and all the next pages, and returns a Throwable \/ List[Item]. (Here, \/ is the either type provided by scalaz.)

def readItems(startingUrl: String): Throwable \/ List[Item] = {
    ???
}

Now I have a solution which uses a recursive (not tail-recursive) function:

def readItems(startLink: String): Throwable \/ List[Item] = {

  def fetchChanges(link: String): Throwable \/ List[Item] = {
    loadPage(link) match {
      case Success(page) => page.next.fold(page.items.right[Throwable]) { nextLink =>
        fetchChanges(nextLink).map(page.items ::: _)
      }
      case Failure(NonFatal(e)) => e.left
    }
  }

  fetchChanges(startLink)
}

Then I thought it would be better to provide a tail-recursive one, to avoid stack overflow if the page chain is too long:

def readItems2(startLink: String): Throwable \/ List[Item] = {

  @tailrec
  def fetchChanges(link: String, result: Throwable \/ List[Item]): Throwable \/ List[Item] = {
    if (result.isLeft)
      result
    else
      loadPage(link) match {
        case Success(page) => page.next match {
          case Some(nextLink) => fetchChanges(nextLink, result.map(_ ::: page.items))
          case _ => result
        }
        case Failure(NonFatal(e)) => e.left
      }
  }

  fetchChanges(startLink, List.empty[Item].right)
}

You can see the code is pretty complex. Is there any way to make it a little bit simpler (e.g. by using some features of scalaz)?


Update: the return type of loadPage can be changed if needed
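
One possible simplification sketch, using the same imports as above: drive the loop off the Option[String] itself, so only the success path needs matching and the error short-circuit falls out naturally.

def readItems3(startLink: String): Throwable \/ List[Item] = {
  @tailrec
  def go(next: Option[String], acc: List[Item]): Throwable \/ List[Item] =
    next match {
      case None => acc.right
      case Some(link) =>
        loadPage(link) match {
          case Success(page)        => go(page.next, acc ::: page.items)
          case Failure(NonFatal(e)) => e.left
        }
    }
  go(Some(startLink), Nil)
}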

by Freewind at August 21, 2014 02:43 PM

comparing sbt and Gradle

I am diving into Scala and noticed sbt. I have been quite happy with Gradle in Java/Groovy projects, and I know there's a Scala plugin for Gradle.

What could be good reasons to favour sbt over Gradle in a Scala project?

by Hans Westerbeek at August 21, 2014 02:38 PM

CompsciOverflow

What's the time complexity of this algorithm for "Word Break"?

Word Break (Dynamic Programming)
Given a string s and a dictionary of words dict, add spaces in s to construct a sentence where each word is a valid dictionary word.

Return all such possible sentences.

For example, given

  • s = "catsanddog",dict = ["cat", "cats", "and", "sand", "dog"].
  • A solution is ["cats and dog", "cat sand dog"].


Question:

  • Time complexity = ?
  • Space complexity = ?


Personally I think,

  • Time complexity = O(n!), n is the length of the given string.
  • Space complexity = O(n).

Doubt:
It seems that without DP the time complexity = O(n!), but with DP, what is it?


Solution: DFS+Backtracking(Recursion) + DP:
Code: Java

public class Solution {
    public List<String> wordBreak(String s, Set<String> dict) {
        List<String> list = new ArrayList<String>();

        // Input checking.
        if (s == null || s.length() == 0 || 
            dict == null || dict.size() == 0) return list;

        int len = s.length();

        // memo[i] is recording,
        // whether we cut at index "i", can get one of the result.
        boolean memo[] = new boolean[len];
        for (int i = 0; i < len; i ++) memo[i] = true;

        StringBuilder tmpStrBuilder = new StringBuilder();
        helper(s, 0, tmpStrBuilder, dict, list, memo);

        return list;
    }

    private void helper(String s, int start, StringBuilder tmpStrBuilder,
                        Set<String> dict, List<String> list, boolean[] memo) {

        // Base case.
        if (start >= s.length()) {
            list.add(tmpStrBuilder.toString().trim());
            return;
        }

        int listSizeBeforeRecursion = 0;
        for (int i = start; i < s.length(); i ++) {
            if (memo[i] == false) continue;

            String curr = s.substring(start, i + 1);
            if (!dict.contains(curr)) continue;

            // Have a try.
            tmpStrBuilder.append(curr);
            tmpStrBuilder.append(" ");

            // Do recursion.
            listSizeBeforeRecursion = list.size();
            helper(s, i + 1, tmpStrBuilder, dict, list, memo);

            if (list.size() == listSizeBeforeRecursion) memo[i] = false;

            // Roll back.
            tmpStrBuilder.setLength(tmpStrBuilder.length() - curr.length() - 1);
        }
    }
}

by Zhaonan at August 21, 2014 02:35 PM

Is computation expression the same as monad?

I'm still learning functional programming (with F#) and I recently started reading about computation expressions. I still don't fully understand the concept, and one thing that keeps me unsure when reading all the articles regarding monads (most of them written with Haskell in mind) is the relation between computation expressions and monads.

Having written all that, here's my question (two questions actually):

Is every F# computation expression a monad? Can every monad be expressed with an F# computation expression?

I've read this post by Tomas Petricek and, if I understand it well, it states that computation expressions are more than monads, but I'm not sure if I'm interpreting this correctly.

by Grzegorz Sławecki at August 21, 2014 02:33 PM

StackOverflow

Making one Option[List[MyType]] from three different Option[List[MyType]]

I have

def searchListProducts1 = models.Products.IndivProduct.getProductsFromJsObjectList(productsTextSearchDescription)
def searchListProducts2 = models.Products.IndivProduct.getProductsFromJsObjectList(productsTextSearchName)
def searchListProducts3 = models.Products.IndivProduct.getProductsFromJsObjectList(productsTextSearchIngredients)

where each is an Option[List[MyType]].

I want to "merge" them all together (is that a fold?) so that I have just one Option[List[MyType]]

Thanks

by user3231690 at August 21, 2014 02:19 PM

QuantOverflow

multi factor equity model exposures not as expected

I'm researching an equity multi-factor model.

It contains three factors, say A, B & C. The factors are weighted as follows:

  60% * (70% A + 30% B) + 40% * C

I am running a back test on this model. When running the model I constrain the risk factors (momentum, beta, size etc.) to have limited exposure, so ideally most of the return should be explained by my model. Looking at the exposures of my factors A, B & C over the last 12 months, the exposure of B is much larger than that of A. I am trying to understand why this might be, but I'm not sure where to start.

by mHelpMe at August 21, 2014 02:18 PM

StackOverflow

sbt/ivy failing to resolve wildcard ivy dependencies on a filesystem resolver

I am using the ~/.sbt/repositories file to tell sbt 0.13.5 which repositories to retrieve from. That file contains only local and a file:// repository with a custom layout that closely resembles the standard sbt one, with the optional sbtVersion and scalaVersion fields represented.

When it comes to resolving dependencies for my project, I've noticed weird behavior:

  • Resolving exact dependencies works fine
  • latest.integration also works fine
  • Wildcard resolution of the form x.y.+ doesn't find anything, and instead seems to be searching for literal patterns. I get errors of the form:
    [warn] ==== myrepo: tried
    [warn]   file://path/to/my/repo/myorg/mypackage_2.10/[revision]/ivy-[revision].xml
    [info] Resolving myorg#mypackage_2.10;2.7.1.+ ...
    [warn]  module not found: myorg#mypackage_2.10;2.7.1.+

which as you can see, mention the repo layout pattern explicitly.

I'm mostly confused because the resolver works fine for anything but the + wildcard dependencies. I poked around the Ivy documentation to figure out whether certain resolvers (like the file:// resolver I'm using) don't implement certain kinds of dependency resolution, but that didn't seem to be the case, so I'm mostly stumped. Any idea what I can do to make it work, or what might be causing it?

by Myserious Dan at August 21, 2014 02:04 PM

Scala IDE not working properly in Eclipse Luna for Java EE

I've tried and re-tried to install the Scala IDE in several different ways in the Java EE specific version of Eclipse, but I just can't get it to work.

The Scala first-time configuration screen doesn't appear, I can't create Scala projects, and the Scala perspective is nowhere to be found...

I've used the Scala IDE before, and it always worked flawlessly...

Going to Help -> About Eclipse -> Installation Details I can see that the IDE is indeed installed, so why it doesn't work is beyond me...

Any help in resolving this issue?

by Electric Coffee at August 21, 2014 02:03 PM

TheoryOverflow

How to conduct a computational analysis of a Java program

Main task

In my bachelor degree's thesis I've developed an algorithm for recommender systems which uses personalized PageRank with some particular features as nodes. In the recommender systems field, there is the opportunity to understand how good the algorithm is using some accuracy metrics (MAE, RMSE, F-measure, etc.).

What I want to do

In my case I don't want to limit my analysis to the accuracy field; I want to extend my work with a proper discussion of the computational complexity of the developed algorithm. During my degree I've never done something like that, so I don't know how to conduct it in a formal and proper way.

What I've thought

Actually I'm calculating (using some Java methods) the time needed by each algorithm to complete the specific task. After that, I will draw a plot describing how much you need to pay in time in order to get a specific accuracy level.

Questions

  • Is there some good work from which I can take inspiration (papers, books, etc.)?
  • Can you give me some tips, or simply share your experience in the field, on how to conduct this kind of analysis in a proper way?

If there is something not clear, please leave me a comment and I'll improve my question. Thank you in advance.

by Alessandro Suglia at August 21, 2014 02:01 PM

Planet Clojure

Burlington Ruby Conference Talk: How to Consume Lots of Data

Concurrency is all the rage. When you have tons of data being shoved down your throat, you need all the help you can get. All the cool kids are turning to alternatives to try to keep up: node.js, Clojure, Erlang, Elixir, Go. Popular thinking is that Ruby is too slow and won't scale, but our favorite friend can support this very well.

In my talk at the 2014 Burlington Ruby Conference, I took a look at the actor pattern in Ruby with Celluloid and compared it to similar solutions in other languages.

Doug Alcorn - How to Consume Lots of Data - Burlington Ruby Conference 2014 from Burlington Ruby on Vimeo.

by Doug Alcorn at August 21, 2014 02:00 PM

StackOverflow

Best way to implement "zipLongest" in Scala

I need to implement a "zipLongest" function in Scala; that is, combine two sequences together as pairs, and if one is longer than the other, use a default value for the missing elements. (Unlike the standard zip method, which just truncates to the shorter sequence.)

I've implemented it directly as follows:

def zipLongest[T](xs: Seq[T], ys: Seq[T], default: T): Seq[(T, T)] = (xs, ys) match {
  case (Seq(), Seq())           => Seq()
  case (Seq(), y +: rest)       => (default, y) +: zipLongest(Seq(), rest, default)
  case (x +: rest, Seq())       => (x, default) +: zipLongest(rest, Seq(), default)
  case (x +: restX, y +: restY) => (x, y) +: zipLongest(restX, restY, default)
}

Is there a better way to do it?
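
For what it's worth, the standard library already provides this: zipAll pads the shorter side with the element you supply, so the whole function reduces to a one-liner.

def zipLongest[T](xs: Seq[T], ys: Seq[T], default: T): Seq[(T, T)] =
  xs.zipAll(ys, default, default)

zipLongest(Seq(1, 2, 3), Seq(4), 0) // Seq((1,4), (2,0), (3,0))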

by user3364825 at August 21, 2014 01:52 PM

Planet Theory

Hashing Summer School

Back in July I took part in the Hashing Summer School in Copenhagen. It was nominally set up by me, Rasmus Pagh, and Mikkel Thorup, though Mikkel was really the host organizer who put it all together.

The course materials are all online here. One thing that was a bit different is that it wasn't just lectures -- we really made it more of a "summer school" by putting together a lot of (optional) exercises and leaving time for people to work through some of them in teams. I am hoping the result is a really nice resource. There are lectures with the video online, and also the slides and exercises. Students can go through whatever parts they like on their own, or people might find the material useful in preparing their own lectures when teaching graduate-level topics in hashing.


by Michael Mitzenmacher (noreply@blogger.com) at August 21, 2014 01:45 PM

StackOverflow

installing JDK8 on Windows XP - advapi32.dll error

I downloaded JDK 8 build b121, and while trying to install it I'm getting the following error:

the procedure entry point RegDeleteKeyExA could not be located in the dynamic link library ADVAPI32.dll

The operating system is Windows XP, Version 2002 Service Pack 3, 32-bit.

by yashhy at August 21, 2014 01:44 PM

Planet Theory

Turing's Oracle

My daughter had a summer project to read and summarize some popular science articles. Having heard me talk about Alan Turing more than a few times, she picked a cover story from a recent New Scientist. The cover copy, "Turing's Oracle: Will the universe let us build the ultimate thinking machine?", sounds like an AI story, but the piece is in fact more of an attack on the Church-Turing thesis. The story is behind a paywall, but here is an excerpt:
He called it the "oracle". But in his PhD thesis of 1938, Alan Turing specified no further what shape it might take...Turing has shown with his universal machine that any regular computer would have inescapable limitations. With the oracle, he showed how you might smash through them. 
This is a fundamental misinterpretation of Turing's oracle model. Here is what Turing said in his paper Systems of Logic Based on Ordinals, Section 4.
Let us suppose we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of the oracle apart from saying it cannot be a machine. (emphasis mine)
The rest of the section defines the oracle model and basically argues that for any oracle O, the halting problem relative to O is not computable relative to O. Turing is arguing here that there is no single hardest problem, there is always something harder.

If you take O to be the usual halting problem then a Turing machine equipped with O can solve the halting problem, just by querying the oracle. But that doesn't mean that you have some machine that solves the halting problem for, as Turing has so eloquently argued in Section 9 of his On Computable Numbers, no machine can compute such an O. Turing created the oracle model, not because he thought it would lead to a process that would solve the halting problem, but because it allowed him to show there are problems even more difficult.

Turing's oracle model, like so much of his work, has played a major role in both computability and computational complexity theory. But one shouldn't twist this model to think the oracle could lead to machines that solve non-computable problems and it is sacrilege to suggest that Turing himself would think that.

by Lance Fortnow (noreply@blogger.com) at August 21, 2014 01:38 PM

StackOverflow

How do I create an explicit companion object for a case class which behaves identically to the replaced compiler provided implicit companion object?

I have a case class defined as such:

case class StreetSecondary(designator: String, value: Option[String])

I then define an explicit companion object:

object StreetSecondary {
  //empty for now
}

The act of defining the explicit companion object StreetSecondary causes the compiler-produced "implicit companion object" to be lost; i.e. replaced with no ability to access the compiler-produced version. For example, the tupled method is available on case class StreetSecondary via this implicit companion object. However, once I define the explicit companion object, the tupled method is "lost".

So, what do I need to define/add/change to the above StreetSecondary explicit companion object to regain all the functionality lost with the replacement of the compiler provided implicit companion object? And I want more than just the tupled method restored. I want all functionality (for example, including extractor/unapply) restored.

Thank you for any direction/guidance you can offer.


UPDATE 1

I have done enough searching to discover several things:

A) The explicit companion object must be defined BEFORE its case class (at least that is the case in the Eclipse Scala-IDE WorkSheet, and the code doesn't work in the IntelliJ IDE's WorkSheet regardless of which comes first).

B) There is a technical trick to force tupled to work (thank you drstevens): (StreetSecondary.apply _).tupled. While that solves the specific tupled method problem, it still doesn't accurately or completely describe what the Scala compiler is providing in the implicit companion object.

C) Finally, the explicit companion object can be defined to extend a function which matches the signature of the parameters of the primary constructor and returns an instance of the case class. It looks like this:

object StreetSecondary extends ((String, Option[String]) => StreetSecondary) {
  //empty for now
}

Again, I am still not confident this accurately or completely describes what the Scala compiler is providing in the implicit companion object.
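
For what it's worth, a minimal sketch of point C in use. Defining an explicit companion does not remove the compiler-generated apply and unapply; what is lost is only that the synthetic companion extends the function type, which is what supplies tupled (and curried). Extending it by hand restores them:

case class StreetSecondary(designator: String, value: Option[String])

object StreetSecondary extends ((String, Option[String]) => StreetSecondary) {
  // apply and unapply are still synthesized by the compiler;
  // extending the function type restores tupled and curried on the object
}

val viaTuple = StreetSecondary.tupled(("Apt", Some("101")))   // works again
val StreetSecondary(designator, value) = viaTuple             // unapply still works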

by chaotic3quilibrium at August 21, 2014 01:33 PM

Lein test failing with No such var: leiningen.util.injected/add-hook

When I run lein test I get this error in a new project, without touching tests or test-related configuration:

Exception in thread "main" java.lang.RuntimeException: No such var: leiningen.util.injected/add-hook, compiling:(NO_SOURCE_PATH:1)
    at clojure.lang.Compiler.analyze(Compiler.java:6235)
    at clojure.lang.Compiler.analyze(Compiler.java:6177)
    at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3452)
    at clojure.lang.Compiler.analyzeSeq(Compiler.java:6411)
    at clojure.lang.Compiler.analyze(Compiler.java:6216)
    at clojure.lang.Compiler.analyze(Compiler.java:6177)
    at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
    at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
    at clojure.lang.Compiler.analyze(Compiler.java:6216)
    at clojure.lang.Compiler.analyze(Compiler.java:6177)
    at clojure.lang.Compiler$IfExpr$Parser.parse(Compiler.java:2597)
    at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
    at clojure.lang.Compiler.analyze(Compiler.java:6216)
    at clojure.lang.Compiler.analyzeSeq(Compiler.java:6397)
    at clojure.lang.Compiler.analyze(Compiler.java:6216)
    at clojure.lang.Compiler.analyze(Compiler.java:6177)
    at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
    at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
    at clojure.lang.Compiler.analyze(Compiler.java:6216)
    at clojure.lang.Compiler.analyze(Compiler.java:6177)
    at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
    at clojure.lang.Compiler$TryExpr$Parser.parse(Compiler.java:2091)
    at clojure.lang.Compiler.analyzeSeq(Compiler.java:6409)
    at clojure.lang.Compiler.analyze(Compiler.java:6216)
    at clojure.lang.Compiler.analyze(Compiler.java:6177)
    at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:5572)
    at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5008)
    at clojure.lang.Compiler$FnExpr.parse(Compiler.java:3629)
    at clojure.lang.Compiler.analyzeSeq(Compiler.java:6407)
    at clojure.lang.Compiler.analyze(Compiler.java:6216)
    at clojure.lang.Compiler.eval(Compiler.java:6462)
    at clojure.lang.Compiler.eval(Compiler.java:6455)
    at clojure.lang.Compiler.eval(Compiler.java:6431)
    at clojure.core$eval.invoke(core.clj:2795)
    at clojure.main$eval_opt.invoke(main.clj:296)
    at clojure.main$initialize.invoke(main.clj:315)
    at clojure.main$null_opt.invoke(main.clj:348)
    at clojure.main$main.doInvoke(main.clj:426)
    at clojure.lang.RestFn.invoke(RestFn.java:421)
    at clojure.lang.Var.invoke(Var.java:405)
    at clojure.lang.AFn.applyToHelper(AFn.java:163)
    at clojure.lang.Var.applyTo(Var.java:518)
    at clojure.main.main(main.java:37)
Caused by: java.lang.RuntimeException: No such var: leiningen.util.injected/add-hook
    at clojure.lang.Util.runtimeException(Util.java:156)
    at clojure.lang.Compiler.resolveIn(Compiler.java:6694)
    at clojure.lang.Compiler.resolve(Compiler.java:6664)
    at clojure.lang.Compiler.analyzeSymbol(Compiler.java:6625)
    at clojure.lang.Compiler.analyze(Compiler.java:6198)
    ... 42 more

by Federico Tomassetti at August 21, 2014 01:32 PM

QuantOverflow

Using linear regression on (lagged) returns of one stock to predict returns of another

Suppose I want to build a linear regression to see if returns of one stock can predict returns of another. For example, let's say I want to see if the VIX return on day X is predictive of the S&P return on day (X + 30). How would I go about this?

The naive way would be to form pairs (VIX return on day 1, S&P return on day 31), (VIX return on day 2, S&P return on day 32), ..., (VIX return on day N, S&P return on day N + 30), and then run a standard linear regression. A t-test on the coefficients would then tell if the model has any real predictive power. But this seems wrong to me, since my points are autocorrelated, and I think the p-value from my t-test would underestimate the true p-value. (Though IIRC, the t-test would be asymptotically unbiased? Not sure.)

So what should I do? Some random thoughts I have are:

  • Take a bunch of bootstrap samples on my pairs of points, and use these to estimate the empirical distribution of my coefficients and p-values. (What kind of bootstrap do I run? And should I be running the bootstrap on the coefficient of the model, or on the p-value?)
  • Instead of taking data from consecutive days, only take data from every K days. For example, use (VIX return on day 1, S&P return on day 31), (VIX return on day 11, S&P return on day 41), etc. (It seems like this would make the dataset way too small, though.)

Are any of these thoughts valid? What are other suggestions?

by user672 at August 21, 2014 01:26 PM

StackOverflow

Error: scala: No 'scala-library*.jar' in Scala compiler library

Environment: Play 2.3.0/Scala 2.11.1/IntelliJ 13.1

I used Typesafe Activator 1.2.1 to create a new project with Scala 2.11.1. After the project was created, I ran gen-idea. The generated IDEA project fails to compile with the error:

Error: scala: No 'scala-library*.jar' in Scala compiler library in test

Am I doing something wrong? Workaround?

by jkschneider at August 21, 2014 01:19 PM

Planet Clojure

Common Lisp or Clojure Developer, Adelaide or remote

Common Lisp or Clojure Developer
A fantastic opportunity for a Common Lisp developer or Clojure developer that is fast, adaptable and can work independently or fit in well into a team

  • Experience with Common Lisp or Clojure is a MUST
  • Adelaide or the opportunity to work 100% remotely!
  • Great career opportunity – Great Salary or Contract rate

Experience with Common Lisp or Clojure is a must (could be non-commercial) as well as general knowledge of relational databases and web technologies.

This role could be 100% remote for the right person, who will join a top-class team working on a great product which could become the Common Lisp application with the largest customer base in the world. The successful applicant will join a small, focused team in maintaining and furthering the development of a leading multivariate testing platform.

Familiarity in the following areas would be considered a plus: backend web server technology, Javascript, PostgreSQL, SQL Server, statistics, distributed computing, Lispworks, any distributed version control system. A high degree of autonomy and self-motivation will be expected.

This is a great career building opportunity and salary package on offer! Although will consider contractors!

If this is of interest I’d be keen to discuss with you, please email me on stewart@totalresource.com.au or call 0061 (0)415 344 427


by Will Fitzgerald at August 21, 2014 01:19 PM

Edward Z. Yang

The fundamental problem of programming language package management

Why are there so many goddamn package managers? They sprawl across both operating systems (apt, yum, pacman, Homebrew) as well as for programming languages (Bundler, Cabal, Composer, CPAN, CRAN, CTAN, EasyInstall, Go Get, Maven, npm, NuGet, OPAM, PEAR, pip, RubyGems, etc etc etc). "It is a truth universally acknowledged that a programming language must be in want of a package manager." What is the fatal attraction of package management that makes programming language after programming language jump off this cliff? Why can't we just, you know, reuse an existing package manager?

You can probably think of a few reasons why trying to use apt to manage your Ruby gems would end in tears. "System and language package managers are completely different! Distributions are vetted, but that's completely unreasonable for most libraries tossed up on GitHub. Distributions move too slowly. Every programming language is different. The different communities don't talk to each other. Distributions install packages globally. I want control over what libraries are used." These reasons are all right, but they are missing the essence of the problem.

The fundamental problem is that programming language package management is decentralized.

This decentralization starts with the central premise of a package manager: that is, to install software and libraries that would otherwise not be locally available. Even with an idealized, centralized distribution curating the packages, there are still two parties involved: the distribution and the programmer who is building applications locally on top of these libraries. In real life, however, the library ecosystem is further fragmented, composed of packages provided by a huge variety of developers. Sure, the packages may all be uploaded and indexed in one place, but that doesn't mean that any given author knows about any other given package. And then there's what the Perl world calls DarkPAN: the uncountable lines of code which probably exist, but which we have no insight into because they are locked away on proprietary servers and source code repositories. Decentralization can only be avoided when you control absolutely all of the lines of code in your application... but in that case, you hardly need a package manager, do you? (By the way, my industry friends tell me this is basically mandatory for software projects beyond a certain size, like the Windows operating system or the Google Chrome browser.)

Decentralized systems are hard. Really, really hard. Unless you design your package manager accordingly, your developers will fall into dependency hell. Nor is there a one "right" way to solve this problem: I can identify at least three distinct approaches to the problem among the emerging generation of package managers, each of which has their benefits and downsides.

Pinned versions. Perhaps the most popular school of thought is that developers should aggressively pin package versions; this approach is advocated by Ruby's Bundler, PHP's Composer, Python's virtualenv and pip, and generally any package manager which describes itself as inspired by the Ruby/node.js communities (e.g. Java's Gradle, Rust's Cargo). Reproducibility of builds is king: these package managers solve the decentralization problem by simply pretending the ecosystem doesn't exist once you have pinned the versions. The primary benefit of this approach is that you are always in control of the code you are running. Of course, the downside of this approach is that you are always in control of the code you are running. An all-too-common occurrence is for dependencies to be pinned, and then forgotten about, even if there are important security updates to the libraries involved. Keeping bundled dependencies up-to-date requires developer cycles--cycles that more often than not are spent on other things (like new features).

A stable distribution. If bundling requires every individual application developer to spend effort keeping dependencies up-to-date and testing if they keep working with their application, we might wonder if there is a way to centralize this effort. This leads to the second school of thought: to centralize the package repository, creating a blessed distribution of packages which are known to play well together, and which will receive bug fixes and security fixes while maintaining backwards compatibility. In programming languages, this is much less common: the two I am aware of are Anaconda for Python and Stackage for Haskell. But if we look closely, this model is exactly the same as the model of most operating system distributions. As a system administrator, I often recommend my users use libraries that are provided by the operating system as much as possible. They won't take backwards incompatible changes until we do a release upgrade, and at the same time you'll still get bugfixes and security updates for your code. (You won't get the new hotness, but that's essentially contradictory with stability!)

Embracing decentralization. Up until now, both of these approaches have thrown out decentralization, requiring a central authority, either the application developer or the distribution manager, for updates. Is this throwing out the baby with the bathwater? The primary downside of centralization is the huge amount of work it takes to maintain a stable distribution or keep an individual application up-to-date. Furthermore, one might not expect the entirety of the universe to be compatible with one another, but this doesn't stop subsets of packages from being useful together. An ideal decentralized ecosystem distributes the problem of identifying what subsets of packages work across everyone participating in the system. Which brings us to the fundamental, unanswered question of programming languages package management:

How can we create a decentralized package ecosystem that works?

Here are a few things that can help:

  1. Stronger encapsulation for dependencies. One of the reasons why dependency hell is so insidious is the dependency of a package is often an inextricable part of its outwards facing API: thus, the choice of a dependency is not a local choice, but rather a global choice which affects the entire application. Of course, if a library uses some library internally, but this choice is entirely an implementation detail, this shouldn't result in any sort of global constraint. Node.js's NPM takes this choice to its logical extreme: by default, it doesn't deduplicate dependencies at all, giving each library its own copy of each of its dependencies. While I'm a little dubious about duplicating everything (it certainly occurs in the Java/Maven ecosystem), I certainly agree that keeping dependency constraints local improves composability.
  2. Advancing semantic versioning. In a decentralized system, it's especially important that library writers give accurate information, so that tools and users can make informed decisions. Wishful, invented version ranges and artistic version number bumps simply exacerbate an already hard problem (as I mentioned in my previous post). If you can enforce semantic versioning, or better yet, ditch semantic versions and record the true, type-level dependency on interfaces, our tools can make better choices. The gold standard of information in a decentralized system is, "Is package A compatible with package B", and this information is often difficult (or impossible, for dynamically typed systems) to calculate.
  3. Centralization as a special-case. The point of a decentralized system is that every participant can make policy choices which are appropriate for them. This includes maintaining their own central authority, or deferring to someone else's central authority: centralization is a special-case. If we suspect users are going to attempt to create their own, operating system style stable distributions, we need to give them the tools to do so... and make them easy to use!

For a long time, the source control management ecosystem was completely focused on centralized systems. Distributed version control systems such as Git fundamentally changed the landscape: although Git may be more difficult to use than Subversion for a non-technical user, the benefits of decentralization are diverse. The Git of package management doesn't exist yet: if someone tells you that package management is solved, just reimplement Bundler, I entreat you: think about decentralization as well!

by Edward Z. Yang at August 21, 2014 01:02 PM

CompsciOverflow

Cascading Two DFAs

How do we cascade two DFAs M1(Q1,S1,R1,F1,G1,qI1) and M2(Q2,S2,R2,F2,G2,qI2) such that the output of M1 is used as the input of M2 and the output of M2 is the output of the cascaded machine M? How can we define M in terms of M1 and M2?

I can tell that R = R2 and qI = qI1 but what about the others?

M(?,?,R2,?,?,qI1)

by b16db0 at August 21, 2014 12:53 PM

TheoryOverflow

Algorithms for online clique detection

Are there any algorithms which let you detect cliques when adding/deleting edges based on previously detected cliques? What would be the time/memory complexity of this approach?

by user27024 at August 21, 2014 12:41 PM

StackOverflow

Using scalaz-stream to calculate a digest

So I was wondering how I might use scalaz-stream to generate the digest of a file using java.security.MessageDigest?

I would like to do this using a constant memory buffer size (for example 4KB). I think I understand how to start with reading the file, but I am struggling to understand how to:

1) call digest.update(buf) for each 4KB chunk, which is effectively a side-effect on the Java MessageDigest instance and which I guess should happen inside the scalaz-stream framework.

2) finally call digest.digest() to receive back the calculated digest from within the scalaz-stream framework somehow?

I think I understand kinda how to start:

import scalaz.stream._
import java.security.MessageDigest

val f = "/a/b/myfile.bin"
val bufSize = 4096

val digest = MessageDigest.getInstance("SHA-256")

Process.constant(bufSize).toSource
  .through(io.fileChunkR(f, bufSize))

But then I am stuck! Any hints please? I guess it must also be possible to wrap the creation, update, retrieval (of the actual digest calculation) and destruction of the digest object in a scalaz-stream Sink or something, and then call .to() passing in that Sink? Sorry if I am using the wrong terminology, I am completely new to using scalaz-stream. I have been through a few of the examples but am still struggling.
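
In case it helps later readers, a rough sketch of one direction. It assumes a scalaz-stream version whose io.fileChunkR emits scodec ByteVector chunks and which ships the scalaz.stream.hash module (both are assumptions about the library version in use):

import scalaz.stream._
import scodec.bits.ByteVector

// hash.sha256 is a Process1 that consumes the chunks and emits a single
// ByteVector holding the final digest once the input is exhausted
val digest: Option[ByteVector] =
  Process.constant(bufSize).toSource
    .through(io.fileChunkR(f, bufSize))
    .pipe(hash.sha256)
    .runLast
    .run

This keeps the MessageDigest side-effects hidden inside the library's combinator instead of threading the mutable digest through your own Sink.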

by adamretter at August 21, 2014 12:35 PM

merging dictionaries in ansible

I'm currently building a role for installing PHP using ansible, and I'm having some difficulty merging dictionaries. I've tried several ways to do so, but I can't get it to work like I want it to:

# A vars file:
my_default_values:
  key: value

my_values:
  my_key: my_value


# In a playbook, I create a task to attempt merging the
# two dictionaries (which doesn't work):

- debug: msg="{{ item.key }} = {{ item.value }}"
  with_dict: my_default_values + my_values

# I have also tried:

- debug: msg="{{ item.key }} = {{ item.value }}"
  with_dict: my_default_values|union(my_values)

# I have /some/ success with using j2's update,
# but you can't use j2 syntax in "with_dict", it appears.
# This works:

- debug: msg="{{ my_default_values.update(my_values) }}"

# But this doesn't:

- debug: msg="{{ item.key }} = {{ item.value }}"
  with_dict: my_default_values.update(my_values)

Is there a way to merge two dictionaries, so I can use it with "with_dict"?

by Berry Langerak at August 21, 2014 12:35 PM

CompsciOverflow

Can coq express its own metatheory?

I'm learning about language metatheory and type systems, and am using coq to formalize my study. One of the things I'd like to do is examine type systems that include dependent types, which I understand is very involved: being able to rely on coq would be invaluable.

Since this type system feature (and other, simpler ones) brings the expressive power of my studied system closer to coq's own, I worry that I might run into a bootstrapping problem that might not reveal itself until much later. Perhaps someone here can address my fears before I set out.

Can coq express its own metatheory? If not, can it still express simpler systems that include common forms of dependent typing?

by phs at August 21, 2014 12:32 PM

StackOverflow

Templating multiple yum .repo files with Ansible template module

I am attempting to template yum .repo files. We have multiple internal and external yum repos that the various hosts we manage may or may not use.

I want to be able to specify any number of repos and what .repo file they will be templated in. It makes sense to group these repos in the same .repo file where they have a common purpose (e.g. all centos repos)

I am unable to determine how to combine ansible, yaml and j2 to achieve this. I have tried using the ansible 'with_items', 'with_subelements' and 'with_dict' unsuccessfully.

YAML data

yum_repo_files:
- centos:
  - name: base
    baseurl: http://mirror/base
  - name: updates
    baseurl: http://mirror/updates
- epel:
  - name: epel
    baseurl: http://mirror/epel

Ansible task

- name: create .repo files
  template: src=yumrepos.j2 dest="/etc/yum.repos.d/{{ item }}.repo"
  with_items: yum_repo_files

j2 template

{% for repofile in yum_repo_files.X %} {# X being the relative index for the current repofile, e.g. centos = 0 and epel = 1 #}
{% for repo in repofile %}
name={{ repo.name }}
baseurl={{ repo.baseurl }}
{% endfor %}
{% endfor %}

by mephage at August 21, 2014 12:18 PM

CompsciOverflow

What is the notation for bounding running time in worst case with concrete example resulting in that worst case running time

I know that Big O is used to bound worst-case running time. So an algorithm with running time $O(n^5)$ means its worst-case running time is at most $n^5$ asymptotically.

Similarly, one can say that for example merge sort's running time is $O(n^2)$ which is correct. But we know that there is a better bound for it: $O(n\log n)$. Technically speaking, one can say that every polytime algorithm has running time $O(2^n)$. This is correct, but not useful.

So my question is: what notation is used for a worst-case running time bound that is actually attained, i.e. such that there exists an input on which that worst-case running time occurs?

In the merge sort example, one cannot construct an input on which merge sort takes about $n^2$ comparisons, but one can construct an input that requires about $n\log n$ comparisons.

by randomA at August 21, 2014 12:15 PM

StackOverflow

Trying to use "with_items" and "when" in a Ansible playbook to clone a repo

Hello everyone and thanks for stopping by. As the title says, I'm trying to use those Ansible modules as follows. I want to clone a Wordpress repo depending on whether a variable is "yes" or "no".

This is my main Ansible playbook (using it with Vagrant through vagrant --provision). I'll provide just relevant parts.

vars:
  nginx_server_blocks:
  - { server_name: "dev.simple-site.io", document_root: "simple-site", wordpress: "no" }
  - { server_name: "dev.wordpress-site.io", document_root: "wordpress-site", wordpress: "yes" }
tasks:
 - name: clone Wordpress repo
   git: repo=git:https://github.com/WordPress/WordPress.git
        dest=/var/www/{{ item.document_root }}
   with_items: nginx_server_blocks
   when: item.wordpress == "yes"

When I run vagrant provision I get this error:

fatal: [default] => failed to parse: SUDO-SUCCESS-rtlizwskstbaxddabxlgqtxxqzambxnh
Traceback (most recent call last):
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 2119, in <module>
    main()
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 524, in main
    add_git_host_key(module, repo, accept_hostkey=module.params['accept_hostkey'])
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 1986, in add_git_host_key
fqdn = get_fqdn(module.params['repo'])
File "/home/vagrant/.ansible/tmp/ansible-tmp-1408592922.35-152092658109200/git", line 2022, in get_fqdn
if "@" in result:
TypeError: argument of type 'NoneType' is not iterable

FATAL: all hosts have already failed -- aborting

Any ideas about the error? I've googled it and read the Ansible docs about using when and with_items, but no luck.

If helps, my host machine is a mac and the guest is ubuntu 14.04 through Vagrant. Ansible was installed via pip and it's 1.7.

by Richard-MX at August 21, 2014 12:15 PM

Uniqueness of persistenceId in akka-persistence

I'm using the scala api for akka-persistence to persist a group of actor instances that are organized into a tree. Each node in the tree is a persistent actor and is named based on the path to that node from a 'root' node. The persistenceId is set to the name. For example the root node actor has persistenceId 'root'. The next node down has persistenceId 'root-europe'. Another actor might have persistenceId 'root-europe-italy'.

The state in each actor includes a list of the names of its children. E.g. the 'root' actor maintains a list of 'europe', 'asia' etc as part of its state.

I have implemented snapshotting for this system. When the root is triggered to snapshot, it does so and then tells each child to do the same.

The problem arises during snapshot recovery. When I re-create an actor with persistenceId = 'root' (by passing in the name as a constructor parameter), the SnapshotOffer event received by that actor is wrong. It is, for example, 'root-europe-italy....'. This seems like a contradiction of the contract for persistence, where the persistenceId identifies the actor state to be recovered. I got around this problem by reversing the persistenceId of node actors (e.g. 'italy-europe-root') so this seems to be something related to the way files are retrieved by the persistence module. Note that I tried other approaches first, for example I used a variety of separators between the node names, or no separator at all.

Has anyone else experienced this problem, or can an akka-persistence developer help me understand why this might have happened?

BTW: I am using the built-in file-based snapshot storage for now.

Thanks.

by Brendan at August 21, 2014 12:11 PM

/r/netsec

StackOverflow

How to print source code of "IF" condition in "THEN"

I would like to print Scala source code of IF condition while being in THEN section.

Example: IF{ 2 + 2 < 5 } THEN { println("I am in THEN because: " + sourceCodeOfCondition) }

Let's skip THEN section right now, the question is: how to get source code of block after IF?

I assume that IF should be a macro...

Note: this question is a redefined version of Macro to access source code of function at runtime, where I described that { val i = 5; List(1, 2, 3); true }.logValueImpl works for me (according to the other question Macro to access source code text at runtime).

by Artur Stanek at August 21, 2014 12:00 PM

QuantOverflow

M&A hedging an equity portfolio against an index

Quick Note

This question was already posted under the userID user8170. Reason being I could not access my account. Now I am able to login to my account I am reposting the question here and will delete it from the profile user8170 (no comments or answers were posted anyway).

Question

I am trying to run a simple back test on a M&A strategy.

The idea is to buy the target company for the length of the deal and obviously hope to see a profit. The weight given to each deal is decided by the size of the deal.

Some of the deals are part cash, part equity in my study. I have a field in my data called 'Stock Exchange Ratio - Buyer Shares' (SER). This field is defined as the number of shares being issued by the acquirer to the target.

So for example if the acquirer called ABC is buying the target company called TAR in a part cash, part stock deal and the SER is 0.8. Then investors holding TAR will receive 0.8 shares of ABC for every TAR share they hold.

So when I have deals that are not 100% cash I will get extra equity exposure (from the acquirer) that I need to hedge as I understand it.

Rather than short every acquiring company, and partly for simplicity, I am going to short the MSCI World Index. However, I do not know how to calculate how much I need to hedge my portfolio against the index. I have all the betas for the acquiring companies.

  Portfolio

  Acquirer Target  Deal Size    Weight      Stock Exchange Ratio - Buyer Shares
  ABC      DEF     $1,000m      50%         0
  MNO      LMN     $600m        30%         0.6
  GHI      QRS     $400m        20%         2.5

Update

The betas for the 3 companies above are:

ABC 0.93
MNO 1.11
GHI 1.14
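
For what it's worth, a minimal sketch of a plain beta-weighted hedge ratio. Everything below the betas is an assumption for illustration: I invent a stockFraction field for the equity-financed part of each deal (the SER mechanics are ignored), and the hedge is just the beta-weighted sum of those exposures:

// weight, acquirer beta, and an assumed equity-financed fraction per deal
case class Deal(weight: Double, beta: Double, stockFraction: Double)

val deals = List(
  Deal(0.50, 0.93, 0.0), // ABC/DEF: all cash, no acquirer equity exposure
  Deal(0.30, 1.11, 0.5), // MNO/LMN: fraction is a made-up placeholder
  Deal(0.20, 1.14, 0.5)  // GHI/QRS: fraction is a made-up placeholder
)

// fraction of portfolio value to short in the index
val hedgeRatio = deals.map(d => d.weight * d.beta * d.stockFraction).sum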

by mHelpMe at August 21, 2014 11:53 AM

StackOverflow

Merging Maps using `aggregate`

For any given collection of Map, for instance,

val in = Array( Map("a" -> 1,  "b" -> 2),
                Map("a" -> 11, "c" -> 4),
                Map("b" -> 7,  "c" -> 10))

how to use aggregate on in.par so as to merge the maps into

Map ( "a" -> 12, "b" -> 9, "c" -> 14 )

Note: Map merging has been asked multiple times, but here I am looking for a solution with aggregate on parallel collections.
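
For reference, a minimal sketch of how aggregate's two operators can be wired up here. seqop folds one element (itself a Map) into the accumulator; combop merges the partial results produced by different partitions. Because the element and accumulator types coincide, one merge function serves both roles:

def merge(acc: Map[String, Int], m: Map[String, Int]): Map[String, Int] =
  m.foldLeft(acc) { case (a, (k, v)) => a.updated(k, a.getOrElse(k, 0) + v) }

val merged = in.par.aggregate(Map.empty[String, Int])(merge, merge)
// merged == Map("a" -> 12, "b" -> 9, "c" -> 14)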

Many Thanks

by enzyme at August 21, 2014 11:35 AM

TheoryOverflow

What are some good introductory books on type theory?

I'm recently studying Haskell and programming languages. Could someone recommend some books on type theory?

by Problemaniac at August 21, 2014 11:32 AM

Fred Wilson

On SoundCloud

Today, our portfolio company SoundCloud is announcing its content partners program, called On SoundCloud.

For creators, there are three offerings, Partner, Pro, and Premier. Anyone can be a Partner. For a small monthly fee, you can upgrade to Pro. And if you are really serious, then you can become Premier and make money on SoundCloud.

For listeners, there will be two tiers. A free, advertising supported offering that values artists. As Alex Ljung, founder and CEO of SoundCloud says here:

Every time you see or hear an ad, an artist gets paid

There will also be a subscription offering that will be ad free and offer other features for listeners.

For brands, SoundCloud becomes a popular social platform where they can engage with creators and listeners. Here’s more on SoundCloud’s offerings for brands.

Here’s the thing that many people miss about SoundCloud. It’s not like iTunes, or Spotify, or Pandora. It’s a peer network with a social architecture that emphasizes engagement and sharing.

Like Twitter and Tumblr and a number of other popular social platforms, SoundCloud treats everyone as peers in its network. My profile is almost identical to an artist’s profile on SoundCloud. I can do the same things they can do and they can do the same things I can do. The same is true of a brand’s profile.

This social architecture encourages engagement, sharing, commenting, and favoriting. It’s like the artists, listeners, and brands are all hanging out together at one big party.

These social peer networks treat advertising very differently. The ads are native. On Twitter, the advertising is a Tweet. On Tumblr, the advertising is a post. On SoundCloud, the advertising is a track. You see the ads in your feed and you choose to engage with them if they are inviting. In the best case, you enjoy them so much that you favorite or reblog/retweet them. And brands can sponsor/promote tracks from other users. Think of Red Bull sponsoring and promoting artists on SoundCloud.

The New York Times has an article today about On SoundCloud.  It covers all the challenges that SoundCloud has overcome in getting to this place. It’s been a ton of work for the team at SoundCloud to get this launched, and there is certainly a lot more ahead of them as they undertake to get every artist On SoundCloud.

I am very optimistic that will happen because this network of 175mm mobile listeners all over the world connected together and sharing the audio they love with each other is too powerful to ignore.

by Fred Wilson at August 21, 2014 11:29 AM

StackOverflow

How to RegExp file with Spark?


I have UDP_file.txt containing:

2014-03-02 07:59:37;source-address=123.235.78.125 source-port=1780
2014-03-02 07:59:37;source-address=123.235.132.181 source-port=56399
2014-03-02 07:59:37;source-address=123.234.141.253 source-port=49170
2014-03-02 07:59:37;source-address=123.234.104.225 source-port=39123
2014-03-02 07:59:37;source-address=123.234.104.225 fake-port=0000

What I need to do is :

  • load file,
  • RegExp it,
  • lines than match pattern save in file 'good_records.txt',
  • lines than don't match pattern save in file 'bad_records.txt'

.

val file_in = sc.textFile("UDP_file.txt")
val FullName = """(^.{19}).+source-address=([^"]+) source-port=([^"]+)""".r

When I test pattern on one row it works:

scala> val FullName(ip,sa,sp) = "2014-03-02 07:59:37;source-address=10.114.104.225 source-port=39123"
ip: String = 2014-03-02 07:59:37
sa: String = 10.114.104.225
sp: String = 39123

or

scala> "2014-03-02 07:59:37;source-address=10.115.78.125 source-port=1780" match { case FullName(ip,sa,sp) }
(2014-03-02 07:59:37,10.115.78.125,1780)

But I have no idea how to use it on each line of a loaded file.

file_in.AndWhatNow?
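
For reference, a rough sketch of one way to finish this: pass a pattern-matching function to map, tag each line, and write the two groups out (the output paths are made-up placeholders; saveAsTextFile writes a directory of part files rather than a single file):

val tagged = file_in.map {
  case line @ FullName(_, _, _) => (true, line)  // matches the pattern
  case line                     => (false, line) // does not match
}

tagged.filter(_._1).map(_._2).saveAsTextFile("good_records")
tagged.filter(!_._1).map(_._2).saveAsTextFile("bad_records")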

Can you help? I will be grateful for any suggestions.
Pawel

by Pawel Kowalski at August 21, 2014 11:25 AM

CompsciOverflow

Deciding if a finite automaton accepts strings of every length

The question: you're given a DFA. Give an algorithm which tells you whether, for every length $n\in \mathbb{N}$, the DFA accepts some string of length $n$.

What I was doing: I have an algorithm to count the number of accepted strings of some fixed length $n$. Now let there be $k$ states. Suppose we got a positive result (i.e. the number of accepted strings is $> 0$) for all $n$ up to $k$. Then check $k+1$: if it gives a positive result, then at least one state is visited twice by that accepting path of length $k+1$. That means we get an $x$ such that all lengths $k+1+nx$ for $n\geq 0$ are accepted; if $x=1$ then we are done. If not, then check $k+2$; again we get a $y$ like that. So for all $n>k$ we are getting arithmetic progressions of acceptable lengths, but can we then say that beyond some finite point all lengths are accepted?

by James Yang at August 21, 2014 10:47 AM

UnixOverflow

How do I send a shutdown event to a QEMU guest (OpenBSD)?

I'm using virtualisation solely to install OpenBSD onto the bare hardware, and during the installation the redirection to the serial port didn't get configured, so I ended up with the system running but no way to log in and do a clean shutdown.

kvm -m 6144 -smp 4 -drive file=/dev/sda,if=ide \
    -drive file=/dev/sdb,if=scsi -drive file=/dev/sdc,if=scsi \
    -cdrom install52.iso -boot d -nographic

How can I send a shutdown event to this session? AFAIK, neither Ctrl-a x as shown here nor a pkill kvm would do a clean shutdown.

Alternatively, how can I switch from the -nographic mode into the -curses mode?

by cnst at August 21, 2014 10:47 AM

StackOverflow

Scala case class, can't override constructor parameter

I can't make to work simple stuff. Here is my case class:

 case class MyCaseClass(left:Long, right: Long = Option[Long], operator: Operator = Option[Operator]){

    def inRange(outer: Long) = outer >= left && outer <= right
  }

And I try to create it:

val instance = MyCaseClass(leftValue)

And I get a compile error: Unspecified value parameters right, operator

Why? I've tried 100500 suggestions from SO and I can't make it work. I just want to have 2 constructors for a case class: one with 3 params and one with 1 param...

UPD:

this works as Ende Neu suggested:

case class MyCaseClass(left:Long, right: Option[Long] = None, operator: Option[Operator] = None)

The problem is that I have to wrap right and operator into Option. I'm not good at the "implicit" stuff of Scala, but it looks to me that I could do the wrapping/unwrapping implicitly...? Right?

UPD: The following allows you to avoid Option wrapping for the right and operator parameters. Details are in the answer.

object MyCaseClass {
  def apply(left: Long, right: Long, operator: Operator) = new MyCaseClass(left, Option(right), Option(operator))
}
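
A quick usage sketch of that overload (the values and someOperator are made up):

val short = MyCaseClass(1L)                     // right and operator default to None
val full  = MyCaseClass(1L, 10L, someOperator)  // plain values, wrapped by the overload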

by Sergey at August 21, 2014 10:22 AM

Can I create a default OPTIONS method directive for all entry points in my route?

I don't want to explicitly write:

options { ... }

for each entry point / path in my Spray route. I'd like to write some generic code that will add OPTIONS support for all paths. It should look at the routes and extract supported methods from them.

I can't paste any code since I don't know how to approach it in Spray.

The reason I'm doing this is that I want to provide a self-discoverable API that adheres to HATEOAS principles.

by user3816466 at August 21, 2014 10:20 AM

QuantOverflow

hedging with a 3 month fx forward every month

I think this is a bit of an odd question. Let us say I want to hedge my FX exposure every month, but using 3-month forwards. How can I do that? Is it not easier just to use 1-month forwards? I recalculate my exposure every month. In other words, let us say I am a US investor but I get my profits from a euro company. So every month I calculate the expected return I might get the next month and do the hedge accordingly. This is straightforward with a one-month forward, but assuming there exist only 3-month forward contracts in the market (hypothetically), how can one do the hedging?

by lol at August 21, 2014 10:14 AM

StackOverflow

how to make 'while' return a collection? [duplicate]

I ran into a situation where I needed while to output a collection. Here is an example (reading a JDBC ResultSet):

What I would have liked

val rs: java.sql.ResultSet = ???
val cols = while (rs.next()) for (i <- 1 to numberOfColumns) yield rs.getString(i) 

What I ended up doing:

val rs: java.sql.ResultSet = ???
var cols: List[Seq[String]] = Nil  
while (rs.next()) cols ::= (for (i <- 1 to numberOfColumns) yield rs.getString(i)) 

Is there any way to make while return a List (or some other collection)?
I want to avoid using a mutable variable.
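
For reference, a minimal sketch of a mutation-free version using Iterator.continually (note it yields the rows in result-set order, whereas the ::= loop above builds the list in reverse):

val cols: List[Seq[String]] =
  Iterator.continually(rs)
    .takeWhile(_.next())                                       // advance the cursor
    .map(r => (1 to numberOfColumns).map(i => r.getString(i))) // read the current row
    .toList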

by Jus12 at August 21, 2014 10:10 AM

CompsciOverflow

How to find degree of polynomial represented as a circuit?

I know very little about arithmetic circuits, so maybe it is something well-known.

Given a small circuit over $\{1,x,-,+,*\}$ defining a one-variable polynomial. Let it additionally be known that the degree of this polynomial does not exceed $d$ and all the coefficients are small. I wonder if there exists a fast way to find the actual degree of this polynomial? Using FFT and some small field, one can do it in $O(d)$ time (up to $polylog$ factors), but that much time is sufficient to reconstruct the entire polynomial, so I hope computing the degree alone can be done more efficiently.

by ivmihajlin at August 21, 2014 10:08 AM

StackOverflow

Jumping forward with the continuation monad

It is possible to jump backward in a program with the continuation monad:

{-# LANGUAGE RecursiveDo #-}

import Control.Monad.Fix
import Control.Monad.Trans.Cont

setjmp = callCC (\c -> return (fix c))

backward = do
  l <- setjmp
  -- some code to be repeated forever
  l

But when I try to jump forward, it is not accepted by GHC:

forward = mdo
  l
  -- some dead code
  l <- setjmp  
  return ()

This does not work because there is no instance for MonadFix (ContT r m) for the continuation monad transformer ContT defined in Control.Monad.Trans.Cont. See Section 5.1 of Levent Erkok's thesis for further details.

Is there a way to encode forward jump without value recursion for the continuation monad?

Is there an alternative definition of ContT that has an instance for MonadFix (ContT r m)? There is an unpublished draft by Magnus Carlsson that makes such a proposal, but I am not sure what to make of it in my case.

by Bob at August 21, 2014 10:02 AM

StackOverflow

Issues while setting up lightweight modular staging

I'm trying to get started with the examples here, setting up my dev environment using Scala IDE (Eclipse).

So far,

  1. I have downloaded lms, built it using sbt and added the generated jar library to my eclipse project.

  2. I'm trying to write this bit of the code sample provided.

    val snippet = new DslDriver[Array[Int], Array[Int]] {
      def snippet(v: Rep[Array[Int]]) = { // Continues

However, DslDriver isn't found inside the scala.virtualization.lms package. The library is being found so it's not a problem with the build path.

  3. I have also installed the scala-virtualized plugin to my Scala IDE.

  4. Perhaps this is an eclipse issue where it can't find the necessary classes? Should I switch to coding using an editor and building using sbt?

Any help would be appreciated. Thanks in advance

by Rohit Mukherjee at August 21, 2014 09:36 AM

Is it possible to write a varnish using zeromq?

I am now working on a VOD project and I want to try to build a reverse-proxy server like Varnish. I know it is not easy at all and I am just thinking about the “feasibility”. I’ve done some research and found a powerful messaging library called “ZeroMQ”.

Certainly, ZeroMQ is very useful for writing a server, but I am not sure whether it is also useful for writing something like Varnish, or a proxy server similar to it. I noticed that there are a few functions/classes in ZeroMQ related to proxy servers, such as “zmq_proxy”, but I am not sure whether that is something I really need.

I want a proxy server that can cache the video content in memory from the root server and then send the stream back to the client. (It would be much better if the library had some built-in thread-handling classes/functions.)

Will ZeroMQ store the content in main memory? Or is there any way it can?

Or do you guys have any other powerful library for writing a Varnish-like server, maybe ACE or...? Or should I just customize Varnish (e.g. customize my own caching policy), which I think is less fun than writing my own server?

Thanks in advance.

by hclee at August 21, 2014 09:28 AM

CompsciOverflow

Check whether it is possible to turn one BST into another using only right-rotations

Given two binary search trees T1 and T2, if it is possible to obtain T2 from T1 using only right-rotations, we say that T1 can be right-converted to T2. For example, given three binary search trees T1, T2 and T3:

 Tree 1                Tree 2              Tree 3
    a                    b                   a
   /                    / \                 /
  b                    d   a               d 
 / \                      /                 \
d   c                    c                   b
                                              \
                                               c

T2 can be right-converted from T1, and T3 can be right-converted from T1. But T3 cannot be right-converted from T2, and T2 cannot be right-converted from T3.

Given two arbitrary BSTs, how can we determine whether one can be right-converted to the other?

by leonidas1573 at August 21, 2014 09:20 AM

StackOverflow

type inference is smart enough to figure out the type when the type is operated with other type

Assume this type inference code, where the element type of the List is inferred:

def doStuff[A](x: List[A]) = x // ignore the result 

doStuff(List(3)) // I don't need to specify the type Int here

However, if the type A is combined with another type parameter, type inference does not work and I have to specify the types.

def doStuff[A, B](x: List[A], f: (B, A) => B) = {

}

doStuff(List(3), (x: String, y) => x) //compilation failed, missing parameter type
doStuff[Int, String](List(3), (x, y) => x) //compilation fine.

May I know why that is?
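
For context, a sketch of the usual workaround (doStuff2 is a hypothetical curried variant): Scala infers each parameter list in turn, so splitting the arguments lets A be fixed by the first list before the function is type-checked:

def doStuff2[A, B](x: List[A])(f: (B, A) => B) = ()

doStuff2(List(3))((x: String, y) => x) // compiles: y is inferred as Int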

Many thanks in advance

by Cloud tech at August 21, 2014 09:11 AM

delta-time Versus raw-delta-time (difference?)

I'm implementing FPS-style mouselook and keyboard controls, using delta-time to multiply things, and I can choose between delta and raw delta.

What is the difference? About the non-raw delta the docs say, "Might be smoothed over n frames vs raw".

What will it do to my code/game if I choose non-smoothed over smoothed?

Since the docs say "Might be smoothed"... now that's not fun; that wording raises a bunch of questions.

I'm looking at different ways to "smooth" the transforms.

EDIT: I think the real question is whether smoothed delta is a type of calculation based on raw delta. And since I find some people saying that smoothed delta gives them weird results, would I be better off writing my own calculation using raw delta...

by jaakkoj at August 21, 2014 09:09 AM

UnixOverflow

Secure FOSS alternative to Skype on Linux & OpenBSD?

Criteria:

  • Makes audio/video calls
  • Encrypts the whole traffic (using good encryption)
  • Is cross-platform (including Windows 7, etc.)
  • Runs on modern Linux distributions (Fedora, Ubuntu, etc.)
  • Runs on OpenBSD

Does anybody know a good Free and Open-Source alternative to Skype?

by LanceBaynes at August 21, 2014 08:59 AM

StackOverflow

lein deploy clojars stuck

I am on windows and attempting to deploy to clojars. I followed the leiningen gpg guide and installed Gpg4win as suggested there; I generated using the default encryption, set passphrase, in short, followed the guide 100%. I have gpg in my path. I am able to follow the guide for generating and publishing my keys. I have copied the public key over to clojars. I have followed the leiningen deployment guide to run lein deploy clojars. However, the project just does this and seems to hang forever:

C:\project\project>lein deploy clojars
No credentials found for clojars (did you mean `lein deploy clojars`?)
See `lein help deploy` for how to configure credentials.
Username: myemail@mail.com
Password:
Wrote C:\project\project\pom.xml
Created C:\project\project\target\project-0.1.1.jar

And that's it.

by dirtymikeandtheboys at August 21, 2014 08:55 AM

StackOverflow

Read entire file in Scala?

What's a simple and canonical way to read an entire file into memory in Scala? (Ideally, with control over character encoding.)

The best I can come up with is:

scala.io.Source.fromPath("file.txt").getLines.reduceLeft(_+_)

or am I supposed to use one of Java's god-awful idioms, the best of which (without using an external library) seems to be:

import java.util.Scanner
import java.io.File
new Scanner(new File("file.txt")).useDelimiter("\\Z").next()

From reading mailing list discussions, it's not clear to me that scala.io.Source is even supposed to be the canonical I/O library. I don't understand what its intended purpose is, exactly.

... I'd like something dead-simple and easy to remember. For example, in these languages it's very hard to forget the idiom ...

Ruby    open("file.txt").read
Ruby    File.read("file.txt")
Python  open("file.txt").read()
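
For comparison, the closest Scala idiom I know of, with explicit encoding and cleanup (a sketch):

import scala.io.Source

val src = Source.fromFile("file.txt", "UTF-8")
val content = try src.mkString finally src.close()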

by Brendan OConnor at August 21, 2014 08:22 AM

TheoryOverflow

Partition planar graph into connected subgraphs of equal size

Work

Jünger, Michael, Gerhard Reinelt, and William R. Pulleyblank. "On partitioning the edges of graphs into connected subgraphs." Journal of graph theory 9.4 (1985): 539-549.

states that for 4-edge-connected graph one can partition its edges into disjoint subsets of size $r$, such that each subset form a connected subgraph.

I wonder if the same kind of statement could be formulated for a partition of the vertices. For what kinds of graphs can one find a partition of the vertices into disjoint subsets of size $\approx r$, such that each subset forms a connected subgraph (for each $r$)? I'm particularly interested in planar graphs, but would be happy with any class.

I can soften some conditions (it will still meet my needs): for what graph classes is the existence of a partition into fewer than $\frac{\alpha n}{r}$ connected subgraphs of size less than $r$ guaranteed (for some $\alpha$)?

by ivmihajlin at August 21, 2014 08:19 AM

QuantOverflow

Doubt on risk cost criterion

I want to minimize some kind of risk-sensitive cost, but I am confused about what cost criterion I should use. I am aware only of expected exponential utility. I want to know what other such measures exist in the literature, which among them is good, and in what sense one is better than another.

by malkhor at August 21, 2014 08:17 AM

StackOverflow

No operations allowed after connection closed in play framework

The code works fine, but I am noticing that sometimes it gives this error:

com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: 
No operations allowed after connection closed.

I am using a simple query, and so far the error always appears here.

def result(id: String,mark:String) = {
DB.withConnection { implicit c =>
  val result = SQL("SELECT mark  FROM `subject` WHERE id={id}").on("id" -> id).apply().head
  if (result[String]("mark").equals(mark)) {
    Map("result" -> true)
  } else {
    Map("result" -> false)
  }
 }
}

I will provide more information if needed, though I also don't see any error in this code.

Do I have to do something with this?

application.conf

contexts {
 simple-db-lookups {
    fork-join-executor {
      parallelism-factor = 10.0
    }
  }
  expensive-db-lookups {
    fork-join-executor {
      parallelism-max = 4
    }
  }
  db-write-operations {
    fork-join-executor {
      parallelism-factor = 2.0
    }
  }
  expensive-cpu-operations {
    fork-join-executor {
      parallelism-max = 2
    }
  }
}

I am using scala 2.10 with play framework 2.2

by Govind Singh Nagarkoti at August 21, 2014 08:15 AM

QuantOverflow

Testing the validity of a factor model for stock returns

Consider the following m regression equation system:

$$r^i = X^i \beta^i + \epsilon^i \;\;\; \text{for} \;i=1,2,3,..,n$$

where $r^i$ is a $(T\times 1)$ vector of the T observations of the dependent variable, $X^i$ is a $(T\times k)$ matrix of independent variables, $\beta^i$ is a $(k\times1)$ vector of the regression coefficients and $\epsilon^i$ is the vector of errors for the $T$ observations of the $i^{th}$ regression.

My question is: in order to test the validity of this model for stock returns (i.e. the inclusion of those explanatory variables) using AIC or BIC criterion, should these criterion be computed on a time-series basis (i.e. for each stock), or on a cross-sectional basis (and then averaged over time)?

by Mariam at August 21, 2014 07:48 AM

TheoryOverflow

LEXICAL ANALYSIS

Respected Sir,

Which lexical analyzer is used for compiling C programs? I think flex and lex are used to give us a view of how lexical analysis takes place. But I searched the internet and couldn't find anything about the lexical analyzer used for C.

I hope you would clarify my doubts regarding this topic.

Thanks, Justin

by JUSTIN JOHNS at August 21, 2014 07:42 AM

CompsciOverflow

OpenCV: How to focus camera only on required area [on hold]

I want to detect the face using VJ's (Viola-Jones) algorithm in OpenCV and it's working fine, but I want to focus only on the region where the face is detected, not anything else from the video stream.

by Punith at August 21, 2014 07:37 AM

StackOverflow

why case class can be used as a function in the argument

Occasionally I find an interesting feature of case classes. Here foo needs a function from 3 Ints to a case class. The code looks like this:

case class Whatever(a: Int, b: Int, c: Int)

def foo(f: (Int, Int, Int) => Whatever) = f(1,2,3).c

foo(Whatever) // compiles fine, the Scala compiler is powerful ...........

If Whatever is normal class, obviously, the compilation will fail.

Can someone explain why a case class can be used this way? I suspect the reason is the factory apply method, but I am not sure. Also, if it is a normal class, is it possible to use it this way, as with a case class?
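
For context, a sketch of what makes this work and how a normal class can imitate it: a case-class companion behaves as a (Int, Int, Int) => Whatever via its generated apply, and a hand-written companion can do the same (Normal and bar are made-up names):

class Normal(val a: Int, val b: Int, val c: Int)

object Normal extends ((Int, Int, Int) => Normal) {
  def apply(a: Int, b: Int, c: Int): Normal = new Normal(a, b, c)
}

def bar(f: (Int, Int, Int) => Normal) = f(1, 2, 3).c

bar(Normal)       // the companion is itself a Function3
bar(Normal.apply) // or eta-expand apply explicitly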

by Cloud tech at August 21, 2014 07:35 AM

CompsciOverflow

Regular expression (ab U a)* to NFA with two states (Sipser)?

In the 3rd edition of Sipser's Introduction to the Theory of Computation (example 1.56, p.68), there is a step-by-step procedure to transform (ab U a)* into a NFA. And then the text ends with: "In this example, the procedure gives an NFA with eight states, but the smallest equivalent NFA has only two states. Can you find it?" Nope. I can't. After a good deal of head scratching, I've convinced myself that it's not doable. But being a novice, I'm probably wrong. Can anyone help? Thanks!

by Garp at August 21, 2014 07:11 AM

Planet Theory

TR14-110 | Separation between Estimation and Approximation | Uriel Feige, Shlomo Jozeph

We show (under standard assumptions) that there are NP optimization problems for which estimation is easier than approximation. Namely, one can estimate the value of the optimal solution within a ratio of $\rho$, but it is difficult to find a solution whose value is within $\rho$ of optimal. As an important special case, we show that there are linear programming relaxations for which no polynomial time rounding technique matches the integrality gap of the linear program.

August 21, 2014 07:10 AM

StackOverflow

Fastest way to check list of integers against a list of Ranges in scala?

I have a list of integers and I need to find out the range each falls in. I have a list of ranges, which might be of size 2 to 15 at the maximum. Currently, for every integer, I check through the list of ranges and find its location. But this takes a lot of time, as the list of integers I need to check includes a few thousand entries.

//list of integers
val numList : List[(Int,Int)] = List((1,4),(6,20),(8,15),(9,15),(23,27),(21,25))

//list of ranges
val  rangesList:List[(Int,Int)] = List((1,5),(5,10),(15,30))

import scala.util.control.Breaks

def checkRegions(numPos: (Int, Int), posList: List[(Int, Int)]) {
  val loop = new Breaks()
  loop.breakable {
    for (va <- 0 until posList.length) {
      if (numPos._1 >= posList(va)._1 && numPos._2 <= posList(va)._2) {
        // i save "va"
        loop.break()
      }
    }
  }
}

Currently for every integer in numList I go through the rangesList to find its range and save its range location. Is there any faster/better way approach to this issue?

Update: It's actually a list of tuples that is compared against a list of ranges.
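
For what it's worth, a minimal loop-free sketch (assuming the thing to save is just the index of the matching range): indexWhere already does the scan-and-stop, returning -1 when no range matches.

def regionIndex(numPos: (Int, Int), posList: List[(Int, Int)]): Int =
  posList.indexWhere { case (lo, hi) => numPos._1 >= lo && numPos._2 <= hi }

// one pass over the inputs
val locations = numList.map(regionIndex(_, rangesList))

Since rangesList has at most 15 entries, the scan per tuple is cheap; sorting the ranges and binary-searching would only pay off for much larger range lists.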

by Balaram26 at August 21, 2014 06:57 AM

Is it good to define classes as functions, and declare parameter types with function types?

I'm working on a Scala project, and my colleague, who prefers a functional style, proposes a way to organize code: define classes as functions.

Here is a sample:

class FetchFeed extends (String => List[Feed]) {
   def apply(url:String):List[Feed] = ???
}

When another class needs this class, the dependency is declared using the type String => List[Feed]:

class MyWork(fetchFeed: String => List[Feed])

Then in some place, pass a FetchFeed to it:

val fetchFeed = new FetchFeed
val myWork = new MyWork(fetchFeed)

The pro is that we can easily mock FetchFeed by passing a function:

val myWork = new MyWork(_ => List(new Feed))

The syntax is simple and easy to read.

But the con is that, when I see the declaration of MyWork:

class MyWork(fetchFeed: String => List[Feed])

It's hard to see which class will be passed in; even the IDE can't help me. We need to search for extends (String => List[Feed]) in the codebase, or find the place where a new MyWork is initialized.

And if there is another class which extends String => List[Feed] but which is never used in MyWork, it often confuses me.

But if we declare it with the real type:

class MyWork(fetchFeed: FetchFeed)

It will be easier to jump to the declaration directly. But in this case, we can't pass functions directly; instead, we need to:

val fetchFeed = mock[FetchFeed]
fetchFeed.apply(any[String]) returns List(new Feed)

val myWork = new MyWork(fetchFeed)

I'm struggling between the two solutions. Is this a common pattern when writing code in a functional style? Are there any open-source projects that take this approach that I can get some ideas from?
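
One middle ground worth sketching (my own suggestion, not from the original code): replace the class with a type alias, which gives the dependency a searchable domain name while still accepting plain functions for mocking.

// in a package object, so the alias is visible project-wide
type FetchFeed = String => List[Feed]   // one greppable definition

class MyWork(fetchFeed: FetchFeed)      // reads like a domain type

val myWork = new MyWork(_ => List(new Feed))  // still trivially mockable

Jump-to-declaration then lands on the alias, which documents the contract, even though an alias (like the function type itself) cannot enumerate its implementors.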

by Freewind at August 21, 2014 06:55 AM

QuantOverflow

Infinite autocorrelation - Unit root?

I have a time series of gold prices, on which I want to build an ARIMA model. The series is autocorrelated, and no matter how often I difference it, it remains so.

First: data: d1gold Dickey-Fuller = -18.5829, Lag order = 19, p-value = 0.01 alternative hypothesis: stationary

Second: data: d2gold Dickey-Fuller = -32.6297, Lag order = 19, p-value = 0.01 alternative hypothesis: stationary .. and so on.

What can I do to fit the data in an ARIMA model? Data: https://drive.google.com/file/d/0B7cBu_0IHA17a1lQUlpsS1BJXzg/edit?usp=sharing

Best Regards Erik

by user9358 at August 21, 2014 06:48 AM

StackOverflow

Error using spray-aws to connect to DynamoDB under spray framework

I am writing a new server using scala + akka + spray and I need to connect to DynamoDB on AWS. I have done some research and found the lib 'spray-aws'. But when I try to use it, I get some errors. Scala version is 2.11.2; sbt version should be 0.13.1.

$ sbt
> re-start

[info] Compiling 6 Scala sources to /home/ubuntu/dc-judi-server-scala/target/scala-2.11/classes...
[error] /home/ubuntu/dc-judi-server-scala/src/main/scala/com/example/Boot.scala:13: object dynamodb is not a member of package com.sclasen.spray.aws
[error] import com.sclasen.spray.aws.dynamodb
[error]        ^
[error] /home/ubuntu/dc-judi-server-scala/src/main/scala/com/example/Boot.scala:27: not found: value DynamoDBClientProps
[error]   val props = DynamoDBClientProps("xxx", "yyy", Timeout(100 seconds), dbsystem, dbsystem)
[error]               ^
[error] /home/ubuntu/dc-judi-server-scala/src/main/scala/com/example/Boot.scala:28: not found: type DynamoDBClient
[error]   val client = new DynamoDBClient(props)
[error]                    ^
[error] three errors found
[error] (compile:compile) Compilation failed
[error] Total time: 25 s, completed Aug 21, 2014 4:30:42 AM

Attached are my build.sbt and Boot.scala. I am pretty new to this framework and don't have much experience with it. Could you please help me and give me some insight? Many thanks.

organization  := "com.example"

version       := "0.1"

scalaVersion  := "2.11.2"

scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")

libraryDependencies ++= {
  val akkaV = "2.3.5"
  val sprayV = "1.3.1"
  Seq(
    "io.spray"            %%  "spray-can"     % sprayV,
    "io.spray"            %%  "spray-routing" % sprayV,
    "io.spray"            %%  "spray-testkit" % sprayV  % "test",
    "com.typesafe.akka"   %%  "akka-actor"    % akkaV,
    "com.typesafe.akka"   %%  "akka-slf4j"    % akkaV,
    "com.typesafe.slick"  %%  "slick"         % "2.1.0",
    "com.typesafe.akka"   %%  "akka-testkit"  % akkaV   % "test",
    "org.specs2"          %%  "specs2-core"   % "2.3.11" % "test",
    "mysql"               %   "mysql-connector-java" % "5.1.32",
    "ch.qos.logback"      %   "logback-classic" % "1.1.1",
    "com.sclasen"         %   "spray-aws_2.11"  % "0.3.4"
  )
}

resolvers ++= Seq(
    "Spray repository" at "http://repo.spray.io",
    "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"
)

Revolver.settings
organization  := "com.example"

version       := "0.1"

scalaVersion  := "2.11.2"

scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")

libraryDependencies ++= {
  val akkaV = "2.3.5"
  val sprayV = "1.3.1"
  Seq(
    "io.spray"            %%  "spray-can"     % sprayV,
    "io.spray"            %%  "spray-routing" % sprayV,
    "io.spray"            %%  "spray-testkit" % sprayV  % "test",
    "com.typesafe.akka"   %%  "akka-actor"    % akkaV,
    "com.typesafe.akka"   %%  "akka-slf4j"    % akkaV,
    "com.typesafe.slick"  %%  "slick"         % "2.1.0",
    "com.typesafe.akka"   %%  "akka-testkit"  % akkaV   % "test",
    "org.specs2"          %%  "specs2-core"   % "2.3.11" % "test",
    "mysql"               %   "mysql-connector-java" % "5.1.32",
    "ch.qos.logback"      %   "logback-classic" % "1.1.1",
    "com.sclasen"         %   "spray-aws_2.11"  % "0.3.4"
  )
}

resolvers ++= Seq(
    "Spray repository" at "http://repo.spray.io",
    "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"
)

Revolver.settings

package com.example

import akka.actor.{ActorSystem, Props}
import akka.io.IO
import spray.can.Http
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import com.example.config.Configuration
import com.example.service._


import com.sclasen.spray.aws.dynamodb
import concurrent.Await
import concurrent.duration._
import com.amazonaws.services.dynamodbv2.model.ListTablesRequest

object Boot extends App with Configuration {

  // we need an ActorSystem to host our application in
  implicit val system = ActorSystem("on-spray-can")

  // A new actor system for host DB
  import com.sclasen.spray.aws._

  val dbsystem = ActorSystem("test")
  val props = DynamoDBClientProps("xxx", "yyy", Timeout(100 seconds), dbsystem, dbsystem)
  val client = new DynamoDBClient(props)
  try {
    val result = Await.result(client.sendListTables(new ListTablesRequest()), 100 seconds)
    println(result)
    result.getTableNames.size() should be >= 1
  } catch {
    case e: Exception =>
      println(e)
      e.printStackTrace()
  }

  // create and start our service actor
  val service = system.actorOf(Props[CustomerServiceActor], "demo-service")

  implicit val timeout = Timeout(5.seconds)
  // start a new HTTP server on port 80 with our service actor as the handler
  IO(Http) ? Http.Bind(service, host, port)
}

Update: I have tried to change the import to: import com.sclasen.spray.aws._

but DynamoDBClientProps and DynamoDBClient still cannot be found...

by mingchuno at August 21, 2014 06:47 AM

Set fill-column for all cider output

I would like to set fill-column value for all cider output. I'm using:

(require 'cider)
(setq cider-show-error-buffer              nil
      cider-docview-fill-column            76
      cider-stacktrace-fill-column         76
      nrepl-buffer-name-show-port          nil
      cider-repl-display-in-current-window t
      cider-repl-result-prefix             ";; => ")

But when I call meta, I get this:

user> (meta #'str)
;; => {:added "1.0", :ns #<Namespace clojure.core>, :name str, :file "clojure/core.clj", :static true, :column 1, :line 511, :tag java.lang.String, :arglists ([] [x] [x & ys]), :doc "With no args, returns the empty string. With one arg x, returns\n  x.toString().  (str nil) returns the empty string. With more than\n  one arg, returns the concatenation of the str values of the args."}

Everything in one line. I bet there is some variable cider-...-fill-column that will help me. I googled it, but found only cider-docview-fill-column and cider-stacktrace-fill-column.

by Mark at August 21, 2014 06:17 AM

QuantOverflow

Cholesky Decomposition on Correlation Matrix for Correlated Asset Paths

I found a matlab example for modelling correlated asset paths: http://www.goddardconsulting.ca/matlab-monte-carlo-assetpaths-corr.html

In this model the author uses the matlab function chol() in order to calculate the Cholesky decomposition of the correlation matrix. However, by default, chol(corr) returns the upper triangular matrix, but in my understanding the lower triangular matrix is needed for generating correlated random numbers. This can be calculated by chol(corr,'lower'): http://www.mathworks.de/de/help/matlab/ref/chol.html

Now, is this simply a small error in the code example, or did I misunderstand some theoretical basics?
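
For intuition, a minimal sketch (hand-rolling the 2x2 lower factor in Scala rather than calling chol) of why the lower triangular factor is the one applied to the iid normals: if L satisfies L L^T = correlation matrix, then L z has that correlation, whereas the upper factor U from a default chol(corr) must be transposed first.

import scala.util.Random

val rho = 0.6
val rng = new Random()
val (z1, z2) = (rng.nextGaussian(), rng.nextGaussian())

// lower-triangular Cholesky factor of [[1, rho], [rho, 1]]
val x1 = z1                                        // row (1, 0)
val x2 = rho * z1 + math.sqrt(1 - rho * rho) * z2  // row (rho, sqrt(1 - rho^2))
// (x1, x2) are standard normals with corr(x1, x2) = rho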

Best

by Clems at August 21, 2014 06:11 AM

CompsciOverflow

What is the definition of a $\Pi_1$-sentence?

What is meant when somebody says that a problem can be expressed as a $\Pi_1$-sentence? I know that for the arithmetical hierarchy, a $\Pi^0_1$-sentence is a sentence of the form $\forall n_1\forall n_2\dots\forall n_k \psi$ where $\psi$ is a formula in the language of Peano arithmetic with only bounded quantifiers. And for the analytical hierarchy, a $\Pi_1^1$-sentence is a sentence of the form $\forall X_1\forall X_2\dots\forall X_k \psi$ where $\psi$ is a formula in the language of second-order arithmetic with no set quantifiers.

I found the following definition for this notation in section "5 Proving Independence" of an article on the possibility of undecidability:

Let’s define a $\Pi_1$-sentence to be any sentence of the form, “For all $x$, $P(x)$,” where $P$ is a function that can be proven to be recursive in Peano Arithmetic, PA.

People talking about $\Pi_1$- and $\Pi_2$-sentences sometimes refer to Shoenfield's absoluteness theorem, which seems to talk about $\Pi^1_2$-sentences, i.e. refers to the analytical hierarchy. Can I deduce from this that people talking about $\Pi_1$-sentences use $\Pi_1$ as a shorthand for $\Pi^1_1$ (because $x^1=x$...)? But the quoted definition looks much more like the condition from the arithmetical hierarchy to me, even so, I'm not sure whether it is really equivalent to it.

by Thomas Klimpel at August 21, 2014 05:58 AM

Show that the language of words with even sum of positions of a letter is regular

Let $\Sigma=\{a,b\}$, and let $S(a)$ be the sum of the positions of $a$ in the string $S$. I want to prove that $$L=\{S\in \Sigma^{*} \mid S(a)\equiv 0 \pmod 2\}$$ is regular.

What I was thinking is to somehow keep track of the sum of the positions of $a$ modulo 2. For that, I was thinking of taking the set of states $\{0,1\}\times \{0,1\} \times \{0,1\}$ with starting state $(0,0,0)$, where my aim is to keep track of the sum of positions of $a$ in the first component. Starting from the initial state, after reading $x$ consecutive $b$'s the automaton goes to $(0,0,x \bmod 2)$; after then reading $y$ $a$'s it goes to $(xy+y(y+1)/2 \bmod 2,\; y \bmod 2,\; x \bmod 2)$; after reading $z$ more $b$'s it goes to $(xy+y(y+1)/2 \bmod 2,\; y \bmod 2,\; (x+y)z+(x+y)(x+y)/2 \bmod 2)$; and so on. And I set the accepting states to $\{0\}\times \{0,1\}\times \{0,1\}$. I believe it works, but I don't understand how to define the transition on each state.
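
For what it's worth, a smaller construction seems to suffice (a sketch, using 1-indexed positions): only two bits of state are needed, the parity of the sum so far and the parity of the current position, giving a 4-state DFA.

// 4-state DFA simulated with a fold: state = (sum parity, position parity)
def accepts(s: String): Boolean = {
  val (sumParity, _) = s.foldLeft((0, 1)) { case ((sum, pos), ch) =>
    val newSum = if (ch == 'a') (sum + pos) % 2 else sum
    (newSum, (pos + 1) % 2)
  }
  sumParity == 0
}

Since the reachable state space is finite (at most four pairs), this fold is exactly a DFA, which shows $L$ is regular.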

by James Yang at August 21, 2014 05:57 AM

Probabilistic hardness of approximation or solution of NP-hard optimization problems under a probabilistic generative model for input data

So in biology (DNA sequences), sequence alignment is a generalization of longest common subsequence where an alignment of two sequences is scored typically with a linear function of how many spaces are inserted into each sequence and how many times each possible pair of aligned characters appears in the alignment. Just like longest common subsequence, finding the optimal alignment of two strings under an arbitrary linear scoring scheme can be solved in quadratic time using dynamic programming. (Needleman-Wunsch algorithm). The longest subsequence problem and variants that use linear scoring schemes and ask for the optimal multiple sequence alignment are NP-hard when the number of input strings is not fixed.

However, in biology, there is a probabilistic generative model that generates related DNA sequences. Starting with an unknown root ancestor DNA sequence, bifurcations occur that create two daughter sequences (species) that are independently derived from the ancestral sequence by potentially adding some characters in random locations, deleting some characters, and changing some characters. Then the bifurcations continue with additional changes at each level until the modern day DNA sequences of extant species are obtained. Then we want to align the modern day species' sequences (e.g. find the longest common subsequence in the simplest case) without knowing the exact ancestral sequences. In this case, fossil records can help identify the bifurcation events and estimate the sequence mutation rates after each bifurcation. So a reasonable estimate of the generative model that generated the related modern day DNA sequences can sometimes be obtained.

Now, my question is, for such an NP-hard optimization problem with a well-defined probabilistic generative model that generates input data, has anyone studied the hardness of finding either an optimal or nearly optimal solution, where either the worst-case or expected running time depends on the parameters for the model that generates the input data? For example, if DNA mutation and insertion/deletion rates are very low for a particular group of species, then it should be fairly easy to get at least a nearly optimal alignment of all the DNA sequences using partial alignments and pruning and heuristics, without resorting to a full-blown exponential time solution.

by user2566092 at August 21, 2014 05:56 AM

Converting generalized NFAs to NFAs

I came across generalized nondeterministic finite automata (GNFAs) in Sipser's Introduction to the Theory of Computation. These are automata where transitions are labelled with regular expressions, rather than single symbols from the alphabet. I thought he would explain why GNFAs are allowed. I mean, an appropriate explanation would be that GNFAs are equivalent to NFAs, or GNFAs are equivalent to DFAs or some such argument. But I couldn't find any such explanation in the book.

Online, I read in this article that you can convert a GNFA to an NFA as follows:

For each transition arrow in the GNFA, we insert the complete automaton accepting the language generated by the transition arrow’s label as a “subautomaton;” this way, we can replace each regular expression by a set of states and character transitions

How is the automaton inserted?

Let's say we have a GNFA with an arrow going from state A to state B labelled with a regular expression R. To convert this GNFA to an NFA, do we get rid of that arrow, instead, take NFA N that recognizes L(R), and create an arrow from A to the start state of N labelled with the epsilon symbol, then create arrows from the accept states of N to B, each also labelled with the epsilon symbol?

Of course the accept states of N would no longer be accept states in the new machine, would they?

I know that GNFAs are equivalent to NFAs but I need a convincing proof, not just a short paragraph mentioning their equivalence.
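
To make the splice concrete, here is a minimal sketch (the types are hypothetical, and it assumes the inserted NFA's states have been renamed apart from the host's): exactly the epsilon-wiring described above, with the inserted machine's accept states demoted.

case class Nfa(states: Set[Int],
               start: Int,
               accepts: Set[Int],
               eps: Set[(Int, Int)],
               delta: Set[(Int, Char, Int)])

// replace the edge a --R--> b with an NFA n recognizing L(R)
def splice(a: Int, b: Int, n: Nfa, host: Nfa): Nfa =
  host.copy(
    states = host.states ++ n.states,
    delta  = host.delta ++ n.delta,
    eps    = host.eps ++ n.eps + (a -> n.start) ++ n.accepts.map(_ -> b)
    // host.accepts is left unchanged, so n's accepts stop being accepting
  )

Repeating this for every labelled edge, and arguing edge by edge that the language is preserved, gives the induction step of the equivalence proof.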

by user2108462 at August 21, 2014 05:54 AM

Algorithms that are similar to Dynamic TIme Warping

Dynamic time warping (DTW) is an algorithm in time series analysis for measuring similarity between two temporal sequences which may vary in time or speed. Here are some explanations of DTW:

  1. Dynamic Time Warping by Wikipedia
  2. Dynamic Time Warping by M Müller (2007)

Is there any algorithm that can replace DTW for measuring similarity between two temporal sequences which may vary in time or speed?
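
For reference when judging alternatives, a minimal sketch of the DTW recurrence itself (absolute-difference local cost, no window constraint):

// classic O(n*m) dynamic-programming DTW
def dtw(a: Array[Double], b: Array[Double]): Double = {
  val n = a.length; val m = b.length
  val d = Array.fill(n + 1, m + 1)(Double.PositiveInfinity)
  d(0)(0) = 0.0
  for (i <- 1 to n; j <- 1 to m) {
    val cost = math.abs(a(i - 1) - b(j - 1))
    d(i)(j) = cost + math.min(d(i - 1)(j), math.min(d(i)(j - 1), d(i - 1)(j - 1)))
  }
  d(n)(m)
}

Candidate replacements are usually judged on whether they keep this warping invariance while improving on the quadratic cost.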

by boogiedoll at August 21, 2014 05:46 AM

StackOverflow

Clojure: difference between how special forms, functions and macros are implemented

I have just started with Clojure. I am reading this. I did not understand the difference between how special forms are implemented and how functions and macros are implemented, where it says:

Nearly all functions and macros are implemented in Clojure source code. The differences between functions and macros are explained later. Special forms are recognized by the Clojure compiler and not implemented in Clojure source code.

Can someone explain the difference between the two things (implemented in Clojure source code vs. not implemented in Clojure source code)?

by Prathamesh Sonpatki at August 21, 2014 05:16 AM

Slick: query multiple tables/databases with getting column names

I have methods in my Play app that query database tables with over a hundred columns. I can't define a case class for each such query, because it would be just ridiculously big and would have to be changed with each alteration of the table in the database.

I'm using this approach, where result of the query looks like this:

Map(columnName1 -> columnVal1, columnName2 -> columnVal2, ...)

Example of the code:

implicit val getListStringResult = GetResult[List[Any]] (
    r => (1 to r.numColumns).map(_ => r.nextObject).toList
)

def getSomething(): Map[String, Any] = DB.withSession {
    val columns = MTable.getTables(None, None, None, None).list.filter(_.name.name == "myTable").head.getColumns.list.map(_.column) 
    val result = sql"""SELECT * FROM myTable LIMIT 1""".as[List[Any]].firstOption.map(columns zip _ toMap).get
}

This is not a problem when query only runs on a single database and single table. I need to be able to use multiple tables and databases in my query like this:

def getSomething(): Map[String, Any] = DB.withSession {

    //The line below is no longer valid because of multiple tables/databases
    val columns = MTable.getTables(None, None, None, None).list.filter(_.name.name == "table1").head.getColumns.list.map(_.column) 
    val result = sql"""
        SELECT      * 
        FROM        db1.table1
        LEFT JOIN   db2.table2 ON db2.table2.col1 = db1.table1.col1
        LIMIT       1
    """.as[List[Any]].firstOption.map(columns zip _ toMap).get

}

The same approach can no longer be used to retrieve column names. This problem doesn't exist when using something like PHP PDO or Java JDBCTemplate - these retrieve column names without any extra effort needed.

My question is: how do I achieve this with Slick?
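
One hedged possibility (an assumption on my part, not from the Slick docs): drop to the JDBC ResultSetMetaData for the join query, since column labels travel with the result set no matter how many tables or databases are involved. A sketch, assuming you can obtain the java.sql.Connection from the session:

import java.sql.Connection

def queryToMaps(conn: Connection, query: String): List[Map[String, Any]] = {
  val rs = conn.createStatement().executeQuery(query)
  val meta = rs.getMetaData
  // labels come from the result set itself, so joins and cross-database
  // queries are no different from single-table ones
  val cols = (1 to meta.getColumnCount).map(meta.getColumnLabel)
  val rows = scala.collection.mutable.ListBuffer.empty[Map[String, Any]]
  while (rs.next()) rows += cols.map(c => c -> rs.getObject(c)).toMap
  rows.toList
}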

by Caballero at August 21, 2014 05:15 AM

Planet FreeBSD

Happy 20th birthday FreeBSD ports tree!

It all started with this commit from Jordan Hubbard on August 21, 1994:

Commit my new ports make macros
Still not 100% complete yet by any means but fairly usable at this stage.

Twenty years later the ports tree is still there and actively
maintained. A video was prepared to celebrate the event and to thank
all of you who give some of their spare time and energy to the project!

by culot at August 21, 2014 05:10 AM

StackOverflow

Writing Body Parser with Security Trait for multipartFormData, Play Framework

I'm trying to upload an image at the same time as I submit the form; after some research I tried using multipartFormData to accomplish this. This is my form submission function header after following the tutorials.

def insert = withInsert(parse.multipartFormData) { username => implicit request =>

I used security traits to check for the user (withUser), login time (withAuth) and permission (withInsert):

def withUser(f: => String => Request[AnyContent] => Result) = {
        Security.Authenticated(username, onUnauthorized) { user =>
          Action(request => f(user)(request))
        }
    }

def withAuth(f: => String => Request[AnyContent] => Result) = withUser { user => request =>
    var timestamp = request.session.get("timestamp")
    timestamp.map { timestamp =>
        if(System.currentTimeMillis - timestamp.toLong < (3600*1000))
            f(user)(request)
        else
            onUnauthorized(request)
    }
    .getOrElse{
        onUnauthorized(request)
    }
}

def withInsert[A](p: BodyParser[A])(f: String => Request[A] => Result) = withAuth { username => request =>

        val permission = User.checkAuth("Page", username)

        if(permission.page_insert == 1)
            Action(p)(request => f(username)(request))
         else
            onPermissionDenied(request)
    }

def onPermissionDenied(request: RequestHeader) = Results.Redirect(routes.Page.index)

As the insert function needed a body parser, I modified withInsert to support one. But then I got a compile error on this line:

Action(p)(request => f(username)(request))

type mismatch; found : play.api.mvc.Action[A] required: play.api.mvc.Result

I'm quite lost to what is wrong here, any help is greatly appreciated.

Edit:

I've tried to do exactly what the tutorial did, abandoning the usage of withAuth on the security trait

def withInsert[A](p: BodyParser[A])(f: String => Request[A] => Result) = { 
        Security.Authenticated(username, onUnauthorized) { user =>
            val permission = User.checkAuth("Page", user)

            if(permission.page_insert == 1)
                Action(p)(request => f(user)(request))
            else
                onPermissionDenied(request)
        }
    }

This code results in another compile error on the same line, but with a different message:

not found: value request

After removing the permission checking, it returns no compile error.

def withInsert[A](p: BodyParser[A])(f: String => Request[A] => Result) = { 
        Security.Authenticated(username, onUnauthorized) { user =>
            Action(p)(request => f(user)(request))
        }
    }

But I need the program to check for permission before running functions, and not just the username (whether the current user has logged in or not). Is there a way to do this? I need a workaround so that I can apply both the permission checking and the body parser in the trait.

by zeroism at August 21, 2014 05:06 AM

/r/emacs

StackOverflow

Anorm string set from postgres ltree column

I have a table with one of the columns having ltree type, and the following code fetching data from it:

SQL("""select * from "queue"""")()
.map(
    row =>
        {
            val queue =
                Queue(
                    row[String]("path"),
                    row[String]("email_recipients"),
                    new DateTime(row[java.util.Date]("created_at")),
                    row[Boolean]("template_required")
                )
            queue
        }
).toList

which results in the following error:

RuntimeException: TypeDoesNotMatch(Cannot convert notification.en.incident_happened:class org.postgresql.util.PGobject to String for column ColumnName(queue.path,Some(path)))

queue table schema is the following:

CREATE TABLE queue
(
  id serial NOT NULL,
  template_id integer,
  template_version integer,
  path ltree NOT NULL,
  json_params text,
  email_recipients character varying(1024) NOT NULL,
  email_from character varying(128),
  email_subject character varying(512),
  created_at timestamp with time zone NOT NULL,
  sent_at timestamp with time zone,
  failed_recipients character varying(1024),
  template_required boolean NOT NULL DEFAULT true,
  attachments hstore,
  CONSTRAINT pk_queue PRIMARY KEY (id ),
  CONSTRAINT fk_queue__email_template FOREIGN KEY (template_id)
      REFERENCES email_template (id) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT
)
WITH (
  OIDS=FALSE
);
ALTER TABLE queue
  OWNER TO postgres;
GRANT ALL ON TABLE queue TO postgres;
GRANT SELECT, UPDATE, INSERT, DELETE ON TABLE queue TO writer;
GRANT SELECT ON TABLE queue TO reader;

Why is that? Isn't notification.en.incident_happened just an ordinary string? Or am I missing something?

UPD:

The question still applies, but here is a workaround:

SQL("""select id, path::varchar, email_recipients, created_at, template_required from "queue"""")()

by Zapadlo at August 21, 2014 04:59 AM

QuantOverflow

How is the price of a bond actually determined?

How is the price of a bond actually determined? Is it supply and demand that determines the price first, with the YTM then calculated on the back of this for that bond? Or is it that changes to the interest rate curve come first, the bond is then priced using the typical discounting method, and that becomes the market price?

by Papal at August 21, 2014 04:46 AM

StackOverflow

Jackson serialization - Type id handling not implemented for type T

I'm writing a simple file system model for a simulation. I am attempting to serialize the contents of my virtual hard drive at various times using Jackson's JSON serialization. It all seemed to work fine until I used a custom serializer for the files (to avoid some deep copies). Now I'm getting 'Type id handling not implemented for type DataFile' errors when I attempt to serialize.

To further complicate matters, I'm using scala, but I can use Java collections for the serialization if need be.

Can someone explain Type id handling? Do I need to put it on the interface (IDataFile in this case) or the concrete classes?

I have read this: http://programmerbruce.blogspot.com/2011/05/deserialize-json-with-jackson-into.html, though it appears that at least some of this is old. I'm using Jackson 2.4.2
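For what it's worth, a minimal sketch of the usual annotation-based setup (names assumed from the question; the exact shape depends on your class hierarchy): "type id handling" is Jackson's polymorphic type metadata, and it is normally declared on the interface so that every concrete class round-trips with a type tag.

import com.fasterxml.jackson.annotation.{JsonSubTypes, JsonTypeInfo}

// embed a "type" property in the JSON and map it back to concrete classes
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
              include = JsonTypeInfo.As.PROPERTY,
              property = "type")
@JsonSubTypes(Array(
  new JsonSubTypes.Type(value = classOf[DataFile], name = "dataFile")
))
trait IDataFile

A caveat worth checking: a custom serializer must also override serializeWithType, since its default implementation throws exactly the "Type id handling not implemented" error once polymorphic typing is in play.
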

Thanks!

by fbl at August 21, 2014 04:11 AM

StackOverflow

Suppress the printing of the data an atom holds in the REPL? (or ref, agent, ...)

The following is perfectly valid Clojure code:

(def a (atom nil))
(def b (atom a))
(reset! a b)

it is even useful in situations where back references are needed.

However, it is annoying to work with such things in the REPL: the REPL will try to print the content of such references whenever you type a or b, and will, of course, generate a stack overflow error pretty quickly.

So is there any way to control/change the printing behaviour of atoms/refs/agents in Clojure? Some kind of cycle detection would be nice, but even the complete suppression of the deref'ed content would be really useful.

by amadeoh at August 21, 2014 03:57 AM

Building a tree from Stream using Scala

I want to build a tree, read from a file, of arbitrary height in this exact format:

       1
      2 3
     4 5 6
    . . . .
   . . . . .

using the following structure

case class Tree[+T](value : T, left :Option[Tree[T]], right : Option[Tree[T]])

The challenge I am facing is that I have to read until the last line before I can build the tree, because left and right are read-only. The way I tried was:

  1. Read the first line, create a node with value (1), with left and right set to None.
  2. Read the second line, create nodes with values (2) and (3), left and right set to None. This time, a new node (1) is created with left = node(2) and right = node(3).
  3. Read the third line, create nodes with values (4), (5) and (6), with left and right set to None. Create new node(2) and node(3) with node(2) -> node(4) and node(5) and node(3) -> node(5) and node(6) and finally, node(1) -> new node(2) and node(3).
  4. Repeat until the end of the file.

At the end of it, I should have this relationship,

         1
        /  \
       2    3
      / \   / \
     4   5 5  6
    / \ /\ /\ / \
   .  .. . .. .  .

Any good advice for me? Thanks
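
For what it's worth, a bottom-up sketch (assuming the file is small enough to read all rows first, which the question already implies): fold the rows from the last line upwards, letting each value adopt the adjacent trees below it, so the shared middle children fall out naturally.

// rows(0) is the top line; node i in a row takes children i and i+1 from below
def build(rows: List[List[Int]]): Option[Tree[Int]] =
  rows.foldRight(List.empty[Tree[Int]]) { (row, below) =>
    row.zipWithIndex.map { case (v, i) =>
      Tree(v, below.lift(i), below.lift(i + 1))
    }
  }.headOption

List.lift returns an Option, so the bottom row simply gets None children.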

by thlim at August 21, 2014 03:47 AM

Issue with using wildcard parameter twice in a case class

As per the example below, I am trying to make a case class that can hold items of type SomeResult[T] without having to know what T is. This works fine in the case of Rawr, which can hold a Set of SomeResult[_]; however, when I add a second field on the same principle (i.e. a single element whose content we don't care about, and a set of elements), I get the following error:

[error] /Users/matthewdedetrich/temp/src/main/scala/Main.scala:15: type arguments [_$2] do not conform to class SomeResult's type parameter bounds [A <: T]
[error] case class Bleh(oneThing:SomeResult[_],moreThings:Set[SomeResult[_]]) // This doesn't

Here is the sample code

trait T {

}

case class First(int:Int) extends T
case class Second(int:Int) extends T


case class SomeResult[A <: T](name:String, t:A)

case class Rawr(multipleThings:Set[SomeResult[_]]) // This works

case class Bleh(oneThing:SomeResult[_],moreThings:Set[SomeResult[_]]) // This doesn't

There is a suggestion to use [+A <: T] as a type bound instead of a wildcard; however, the following code doesn't work when doing this:

val t = Set(First(3),Second(5))

def someFunc[A <: T](thing:A) = {
    thing match {
      case First(_) => SomeResult("a",First(10))
      case Second(_) => SomeResult("b",Second(15))
      case _ => throw new IllegalArgumentException("rawr")
    }
  }

  val z = t.map{
    case x:First => someFunc(First(5))
    case y:Second => someFunc(Second(5))
    case _ => throw new IllegalArgumentException("rawr")
  }

  val z2 = Rawr(z)

Which then provides the error

[error]  found   : scala.collection.immutable.Set[SomeResult[Product with Serializable with T]]
[error]  required: Set[SomeResult[T]]
[error] Note: SomeResult[Product with Serializable with T] <: SomeResult[T], but trait Set is invariant in type A.
[error] You may wish to investigate a wildcard type such as `_ <: SomeResult[T]`. (SLS 3.2.10)

Which is why I used wildcard types in the first place. Funnily enough, if you try to provide a return type for someFunc, you get the exact same problem (where the Scala compiler error suggests that you should use wildcard types).

EDIT 2: I have actually managed to get the code to compile by doing this

  def someFunc[A <: T](thing:A):SomeResult[A] = {
    thing match {
      case First(_) => SomeResult("a",First(10)).asInstanceOf[SomeResult[A]]
      case Second(_) => SomeResult("b",Second(15)).asInstanceOf[SomeResult[A]]
      case _ => throw new IllegalArgumentException("rawr")
    }
  }

  def z[A <: T]:Set[SomeResult[A]] = t.map{
    case x:First => someFunc(First(5)).asInstanceOf[SomeResult[A]]
    case y:Second => someFunc(Second(5)).asInstanceOf[SomeResult[A]]
    case _ => throw new IllegalArgumentException("rawr")
  }

I'm not sure if it's "idiomatic" or "right", but it's the only way I found to get the Product with Serializable out of the type signature. I have no idea why Scala infers this when the result type is clearly a subtype of T.
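
For what it's worth, two hedged tweaks (my own sketch, not a definitive fix) that avoid the casts: ascribe the element type of the set so the inferred LUB is T rather than Product with Serializable with T, and spell the declared bound explicitly where the bare wildcard fails the bounds check.

val t: Set[T] = Set(First(3), Second(5))   // stops the LUB from widening

def someFunc(thing: T): SomeResult[_ <: T] = thing match {
  case First(_)  => SomeResult("a", First(10))
  case Second(_) => SomeResult("b", Second(15))
  case _         => throw new IllegalArgumentException("rawr")
}

val z: Set[SomeResult[_ <: T]] = t.map(someFunc(_))

// the explicit bound also tends to satisfy the check that _$2 failed
case class Bleh(oneThing: SomeResult[_ <: T], moreThings: Set[SomeResult[_ <: T]])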

by mdedetrich at August 21, 2014 03:38 AM

How to compile a spark-cassandra programs using scala?

Lately I started learning Spark and Cassandra. I know that we can use Spark with Python, Scala and Java, and I've read the docs on this website: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/0_quick_start.md. The thing is, after I create a program named testfile.scala with the code the document shows (I don't know if I am right using .scala), I don't know how to compile it. Can anyone guide me on what to do with it? Here is testfile.scala:

import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._


val conf = new SparkConf(true).set("spark.cassandra.connection.host", "127.0.0.1")

val sc = new SparkContext("spark://127.0.0.1:7077", "test", conf)

val ssc = new StreamingContext(conf, Seconds(n))

val stream = ssc.actorStream[String](Props[SimpleStreamingActor], actorName,          StorageLevel.MEMORY_AND_DISK)

val wc = stream.flatMap(_.split("\\s+")).map(x => (x, 1)).reduceByKey(_ + _).saveToCassandra("streaming_test", "words", SomeColumns("word", "count"))

val rdd = sc.cassandraTable("test", "kv")

println(rdd.count)

println(rdd.first)

println(rdd.map(_.getInt("value")).sum)
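
A hedged sketch of one common route (the versions are illustrative assumptions, so match them to your cluster): put the statements inside an object, declare the dependencies in build.sbt, then build with sbt package and run the jar with spark-submit.

// build.sbt (versions are illustrative)
name := "testfile"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-core"                % "1.0.2" % "provided",
  "org.apache.spark"   %% "spark-streaming"           % "1.0.2" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.0.0"
)

// src/main/scala/TestFile.scala -- top-level statements must live in an object
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object TestFile extends App {
  val conf = new SparkConf(true).set("spark.cassandra.connection.host", "127.0.0.1")
  val sc   = new SparkContext("spark://127.0.0.1:7077", "test", conf)
  val rdd  = sc.cassandraTable("test", "kv")
  println(rdd.count)
}

After sbt package, something like spark-submit --class TestFile target/scala-2.10/<jar name> submits it to the cluster.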

by user2342505 at August 21, 2014 03:33 AM

Generic Spray-Client

I'm trying to create a generic HTTP client in Scala using spray. The following is my HttpClient class trying its best to be generic:

package services

import akka.actor.ActorSystem
import akka.event.Logging
import spray.client.pipelining._
import spray.http.{BasicHttpCredentials, HttpRequest}
import spray.httpx.marshalling.Marshaller
import spray.httpx.unmarshalling._

import scala.concurrent.Future
import utils.AllJsonFormats._
import models.api._

object HttpClient extends HttpClient

class HttpClient {

  implicit val system = ActorSystem("api-spray-client")
  import system.dispatcher
  val log = Logging(system, getClass)

  def httpSaveGeneric[T1:Marshaller,T2:Unmarshaller](uri: String, model: T1, username: String, password: String): Future[T2] = {
    val pipeline: HttpRequest => Future[T2] = logRequest(log) ~> sendReceive ~> logResponse(log) ~> unmarshal[T2]
    pipeline(Post(uri, model))
  }

  val genericResult = httpSaveGeneric[Space,Either[Failure,Success]](
    "http://", Space("123", IdName("456", "parent"), "my name", "short_name", Updated("", 0)), "user", "password")

}

utils.AllJsonFormats has the following declaration. It contains all the model formats. The same class is used on the "other end" i.e. I also wrote the API and used the same formatters there with spray-can and spray-json.

object AllJsonFormats
  extends DefaultJsonProtocol with SprayJsonSupport with MetaMarshallers with MetaToResponseMarshallers with NullOptions {

Of course that object has definitions for the serialization of the models.api.Space, models.api.Failure and models.api.Success.

The Space type seems fine, i.e. when I tell the generic method that it will be receiving and returning a Space, no errors. But once I bring an Either into the method call, I get the following compiler error:

could not find implicit value for evidence parameter of type spray.httpx.unmarshalling.Unmarshaller[Either[models.api.Failure,models.api.Success]].

My expectation was that the either implicit in spray.json.DefaultJsonProtocol, i.e. in spray.json.StandardFormats, would have me covered.

by atom.gregg at August 21, 2014 03:31 AM

How is val in scala different from var in java?

Anyone care to elaborate on how val in scala is different from const in java?
What are the technical differences? I believe I understand what "const" is in c++ and java. I get the feeling that "val" is somehow different and better in some sense but I just can't put my finger on it. Thanks

by Kuberan Naganathan at August 21, 2014 03:10 AM

StackOverflow

Waiting for three seconds between HTTP requests to specific URL

Using spray, I want a system that waits for some seconds between sending two HTTP requests to a specific URL, because I don't want to mess up the server's traffic with my app's automatic connections. How do you do it? I can make it work by putting the delay in every place where a pause is needed, but I figure that doesn't look clean and is hard to maintain afterwards. I would love it if this could be abstracted at the level of the ActorSystem. Thank you!
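
For what it's worth, a minimal throttling sketch built only on the Akka scheduler (the Throttler actor and its wiring are my own assumption, not a spray facility): queue the work, and release one job per interval.

import akka.actor._
import scala.concurrent.duration._

case class Job(run: () => Unit)
case object Tick

// forwards at most one queued job every `interval`
class Throttler(interval: FiniteDuration) extends Actor {
  import context.dispatcher
  private var queue = Vector.empty[Job]
  private var idle  = true

  def receive = {
    case j: Job =>
      queue :+= j
      if (idle) { idle = false; self ! Tick }
    case Tick =>
      queue match {
        case j +: rest =>
          j.run(); queue = rest
          context.system.scheduler.scheduleOnce(interval, self, Tick)
        case _ => idle = true
      }
  }
}

// assuming an implicit `system` and a spray `pipeline`/`Get` in scope
val throttler = system.actorOf(Props(new Throttler(3.seconds)))
throttler ! Job(() => pipeline(Get(url)))

Every HTTP call to that URL then goes through the one actor, so the pause lives in a single place instead of at every call site.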

by Ryoichiro Oka at August 21, 2014 03:02 AM

StackOverflow

How to find unused sbt dependencies?

My build.sbt has a lot of dependencies now. How do I know which dependencies are actually being used?

Maven seems to have dependency:analyze (http://maven.apache.org/plugins/maven-dependency-plugin/). Is there something similar for sbt?

by Akhil at August 21, 2014 02:47 AM

StackOverflow

C++ functional & generic programming [with MySQL connector example]

I am going to use the MySQL connector. They provide functions to access the result row. Some examples are getString(1), getInt(1), getDate(2). The number inside the parentheses is the index of the column in the result.

So I have to use the following code to access this example row: 'John', 'M', 34:

string name = row.getString(1);
string sex = row.getString(2);
int age = row.getInt(3);

I would like to try generic programming for various reasons (mainly for fun). But it was quite disappointing that I couldn't make it happen even after spending a lot of time.

The final result that I want:

std::tie<name, sex, age> = row.getResult<string, string, int>();

This function should call the corresponding MySQL API.

It is also good to see any answer similar to below, although the syntax is wrong.

std::tie<name, sex, age> = row.getResult([string, string, int]);

Please don't suggest using a for-loop. Let's try something more generic & functional ;-)

by HKTonyLee at August 21, 2014 02:09 AM

Wes Felter

"What was exciting about the XMPP protocol itself? Were people back then just excited to be in the..."

“What was exciting about the XMPP protocol itself? Were people back then just excited to be in the presence of vast amounts of XML? I mean, that’d explain a lot.”

- astrange looks back at Google Wave

August 21, 2014 01:56 AM

StackOverflow

why self-type class can declare class

I know Scala can only mix in traits, which makes sense for dependency injection and the cake pattern. My question is why I can still declare a class whose self type is another class rather than a trait.

Code:

class C
class D { self : C =>}

This still compiles successfully. I thought it should fail to compile, because at this point how can one instantiate D (C is a class, not a trait)?

Edit:

When trying to instantiate D:

new D with C //compilation fail class C needs to be a trait to be mixed in.

by Cloud tech at August 21, 2014 01:54 AM

Planet FreeBSD

EuroBSDCon 2014 Travel Grant Deadline Extended

The deadline for submitting your application for a Travel Grant to EuroBSDCon 2014 has been extended. Please submit your application by Friday, August 22, 2014. Find out more at: https://www.freebsdfoundation.org/announcements#eurobsdcon2014

by Anne Dickison at August 21, 2014 01:48 AM

StackOverflow

What's the type of `=> String` in scala?

In Scala, there are call-by-name parameters:

def hello(who: => String) = println("hello, " + who)

What's the type of the parameter who?

The Scala REPL shows the function as:

hello: (who: => String)Unit

Is the type still => String? Is there any name for it? Or some documentation to describe the type?

Further questions raised by answer

Question 1

(When reading the spec of §3.3.1 (MethodTypes))

A method type is the type of a method. Say I define a method hello:

def hello: String = "abc"

The type of it can be written as => String, right? Although you can see the REPL response is:

scala> def hello:String = "abc"
hello: String

If I define a method which has parameters:

def goodname(name: String): String = name + "!"

What's the type of the method? It should be similar to String => String, but it isn't, because it's a method type, and String => String is a function type.

Question 2

(When reading the spec of §3.3.1 (MethodTypes))

I can understand this as:

def goodname(name: String): String = name + "!"
def print(f: String => String) = println(f("abc"))
print(goodname)

When I call print(goodname), the type of goodname will be converted to the function type String => String, right?

But for paramless method:

def hello: String = "abc"

To what function type can it be converted? I tried:

def print(f: () => String) = println(f())

But this can't be compiled:

print(hello)

The error is:

error: type mismatch; found : String required: () => String

Could you give me an example that works?
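
For what it's worth, two variants that do compile (standard Scala, not specific to any library):

def hello: String = "abc"
def print(f: () => String) = println(f())

print(() => hello)  // wrap: a () => String that evaluates hello when applied
print(hello _)      // eta-expansion of the parameterless method

The bare print(hello) fails because hello is evaluated first, yielding a String where a () => String is required.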

Question 3

(When reading the spec of §6.26.2 (MethodConversions))

This Evaluation conversion only happens when the type is not applied to arguments. So, for this code:

def myname:String = "abc"
def print(name: => String) = println(name)
print(myname)

My question is: when I call print(myname), does a conversion (I mean the Evaluation conversion) happen? I guess, since the type of myname is just => String, it can be passed to print directly.

If the print method is changed:

def myname:String = "abc"
def print(name: String) = println(name)
print(myname)

Here the Evaluation conversion definitely happens, right? (From => String to String.)

by Freewind at August 21, 2014 01:40 AM

QuantOverflow

Volatility of Futures

Apparently:

Under a constant interest rate, the futures price is given by a deterministic time function times the asset price (I think I understand this). This means that the volatility of the futures price should be the same as that of the underlying asset price.

Not really sure how this is true. Is there any more intuitive explanation as to why this would hold?
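
One hedged way to see it (a sketch under the usual lognormal assumptions): with a constant rate $r$, the futures price is $F_t = S_t e^{r(T-t)}$. If $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$, applying Itô's lemma (here just the product rule, since $e^{r(T-t)}$ is deterministic) gives $dF_t = (\mu - r)F_t\,dt + \sigma F_t\,dW_t$. The deterministic factor only shifts the drift, so the diffusion coefficient, and hence the volatility, stays the same $\sigma$.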

by 1234 at August 21, 2014 01:40 AM

StackOverflow

Does Akka have a dedicated selector for OP_ACCEPT?

Many NIO-based frameworks use a dedicated selector for OP_ACCEPT, and other selectors for OP_WRITE and OP_READ. Does Akka do the same?

by GrapeBaBa at August 21, 2014 01:33 AM

JDT weaving is currently disabled

I just installed Eclipse standard 4.4 Luna, and after installing the Scala IDE and friends I get

JDT Weaving is currently disabled. The Scala IDE needs JDT Weaving to be active,
or it will not work correctly. 

Activate JDT Weaving and Restart Eclipse? (Highly Recommended)

[OK] [Cancel]

Does anyone know how to do this?

Now my comments on this error message

  • In general error messages that tell you what to do, but not how to do it are frustrating.
  • The [OK] button implies that the dialog will enable it for me, but it does exactly the same as clicking the [Cancel] button. Consequently, the UI design is defective.
  • The preferences dialog in Luna does not show anything under JDT or Weaving.
  • The help search in Luna for "JDT Weaving" returns too much information to offer any simple solution.
  • My search via Google turns up interesting discussion on the problem, but fails to simply state the solution, or whether there is one.

https://groups.google.com/forum/#!msg/scala-ide-user/7GdTuQHyP4Q/aiUt70lnzigJ

by Eric Kolotyluk at August 21, 2014 01:32 AM

arXiv Cryptography and Security

Simple explanation on why QKD keys have not been proved secure. (arXiv:1408.4780v1 [quant-ph])

A simple counter-example is given on the prevalent interpretation of the trace distance criterion as failure probability in quantum key distribution protocols. A summary of its ramifications is listed.

by <a href="http://arxiv.org/find/quant-ph/1/au:+Yuen_H/0/1/0/all/0/1">Horace Yuen</a> at August 21, 2014 01:30 AM

A Proposed System for Covert Communication to Distant and Broad Geographical Areas. (arXiv:1408.4751v1 [cs.CR])

A covert communication system is developed that modulates Morse code characteristics and that delivers its message economically and to geographically remote areas using radio and EchoLink. Our system allows a covert message to be sent to a receiving individual by hiding it in an existing carrier Morse code message. The carrier need not be sent directly to the receiving person, though the receiver must have access to the signal. Illustratively, we propose that our system may be used as an alternative means of implementing numbers stations.

by <a href="http://arxiv.org/find/cs/1/au:+Davis_J/0/1/0/all/0/1">Joshua Davis</a> at August 21, 2014 01:30 AM

A Covert Channel Using Named Resources. (arXiv:1408.4749v1 [cs.CR])

A network covert channel is created that uses resource names such as addresses to convey information, and that approximates typical user behavior in order to blend in with its environment. The channel correlates available resource names with a user defined code-space, and transmits its covert message by selectively accessing resources associated with the message codes. In this paper we focus on an implementation of the channel using the Hypertext Transfer Protocol (HTTP) with Uniform Resource Locators (URLs) as the message names, though the system can be used in conjunction with a variety of protocols. The covert channel does not modify expected protocol structure as might be detected by simple inspection, and our HTTP implementation emulates transaction level web user behavior in order to avoid detection by statistical or behavioral analysis.

by <a href="http://arxiv.org/find/cs/1/au:+Davis_J/0/1/0/all/0/1">Joshua Davis</a>, <a href="http://arxiv.org/find/cs/1/au:+Frost_V/0/1/0/all/0/1">Victor S. Frost</a> at August 21, 2014 01:30 AM

Directed Width Measures and Monotonicity of Directed Graph Searching. (arXiv:1408.4745v1 [cs.DM])

We consider generalisations of tree width to directed graphs, that attracted much attention in the last fifteen years. About their relative strength with respect to "bounded width in one measure implies bounded width in the other" many problems remain unsolved. Only some results separating directed width measures are known. We give an almost complete picture of this relation. For this, we consider the cops and robber games characterising DAG-width and directed tree width (up to a constant factor). For DAG-width games, it is an open question whether the robber-monotonicity cost (the difference between the minimal numbers of cops capturing the robber in the general and in the monotone case) can be bounded by any function. Examples show that this function (if it exists) is at least $f(k) > 4k/3$ (Kreutzer, Ordyniak 2008). We approach a solution by defining weak monotonicity and showing that if $k$ cops win weakly monotonically, then $O(k^2)$ cops win monotonically. It follows that bounded Kelly-width implies bounded DAG-width, which has been open since the definition of Kelly-width by Hunter and Kreutzer in 2008. For directed tree width games we show that, unexpectedly, the cop-monotonicity cost (no cop revisits any vertex) is not bounded by any function. This separates directed tree width from D-width defined by Safari in 2005, refuting his conjecture.

by <a href="http://arxiv.org/find/cs/1/au:+Kaiser_L/0/1/0/all/0/1">&#x141;ukasz Kaiser</a>, <a href="http://arxiv.org/find/cs/1/au:+Kreutzer_S/0/1/0/all/0/1">Stephan Kreutzer</a>, <a href="http://arxiv.org/find/cs/1/au:+Rabinovich_R/0/1/0/all/0/1">Roman Rabinovich</a>, <a href="http://arxiv.org/find/cs/1/au:+Siebertz_S/0/1/0/all/0/1">Sebastian Siebertz</a> at August 21, 2014 01:30 AM

High Level Hardware/Software Embedded System Design with Redsharc. (arXiv:1408.4725v1 [cs.SE])

As tools for designing multiple processor systems-on-chips (MPSoCs) continue to evolve to meet the demands of developers, there exist systematic gaps that must be bridged to provide a more cohesive hardware/software development environment. We present Redsharc to address these problems and enable: system generation, software/hardware compilation and synthesis, run-time control and execution of MPSoCs. The efforts presented in this paper extend our previous work to provide a rich API, build infrastructure, and runtime enabling developers to design a system of simultaneously executing kernels in software or hardware, that communicate seamlessly. In this work we take Redsharc further to support a broader class of applications across a larger number of devices requiring a more unified system development environment and build infrastructure. To accomplish this we leverage existing tools and extend Redsharc with build and control infrastructure to relieve the burden of system development allowing software programmers to focus their efforts on application and kernel development.

by <a href="http://arxiv.org/find/cs/1/au:+Skalicky_S/0/1/0/all/0/1">Sam Skalicky</a>, <a href="http://arxiv.org/find/cs/1/au:+Schmidt_A/0/1/0/all/0/1">Andrew G. Schmidt</a>, <a href="http://arxiv.org/find/cs/1/au:+French_M/0/1/0/all/0/1">Matthew French</a> at August 21, 2014 01:30 AM

Code Generation for High-Level Synthesis of Multiresolution Applications on FPGAs. (arXiv:1408.4721v1 [cs.CV])

Multiresolution Analysis (MRA) is a mathematical method that is based on working on a problem at different scales. One of its applications is medical imaging where processing at multiple scales, based on the concept of Gaussian and Laplacian image pyramids, is a well-known technique. It is often applied to reduce noise while preserving image detail on different levels of granularity without modifying the filter kernel. In scientific computing, multigrid methods are a popular choice, as they are asymptotically optimal solvers for elliptic Partial Differential Equations (PDEs). As such algorithms have a very high computational complexity that would overwhelm CPUs in the presence of real-time constraints, application-specific processors come into consideration for implementation. Despite of huge advancements in leveraging productivity in the respective fields, designers are still required to have detailed knowledge about coding techniques and the targeted architecture to achieve efficient solutions. Recently, the HIPAcc framework was proposed as a means for automatic code generation of image processing algorithms, based on a Domain-Specific Language (DSL). From the same code base, it is possible to generate code for efficient implementations on several accelerator technologies including different types of Graphics Processing Units (GPUs) as well as reconfigurable logic (FPGAs). In this work, we demonstrate the ability of HIPAcc to generate code for the implementation of multiresolution applications on FPGAs and embedded GPUs.

by <a href="http://arxiv.org/find/cs/1/au:+Schmid_M/0/1/0/all/0/1">Moritz Schmid</a>, <a href="http://arxiv.org/find/cs/1/au:+Reiche_O/0/1/0/all/0/1">Oliver Reiche</a>, <a href="http://arxiv.org/find/cs/1/au:+Schmitt_C/0/1/0/all/0/1">Christian Schmitt</a>, <a href="http://arxiv.org/find/cs/1/au:+Hannig_F/0/1/0/all/0/1">Frank Hannig</a>, <a href="http://arxiv.org/find/cs/1/au:+Teich_J/0/1/0/all/0/1">J&#xfc;rgen Teich</a> at August 21, 2014 01:30 AM

Making FPGAs Accessible to Scientists and Engineers as Domain Expert Software Programmers with LabVIEW. (arXiv:1408.4715v1 [cs.SE])

In this paper we present a graphical programming framework, LabVIEW, and associated language and libraries, as well as programming techniques and patterns that we have found useful in making FPGAs accessible to scientists and engineers as domain expert software programmers.

by <a href="http://arxiv.org/find/cs/1/au:+Andrade_H/0/1/0/all/0/1">Hugo A. Andrade</a>, <a href="http://arxiv.org/find/cs/1/au:+Hogg_S/0/1/0/all/0/1">Simon Hogg</a>, <a href="http://arxiv.org/find/cs/1/au:+Ahrends_S/0/1/0/all/0/1">Stephan Ahrends</a> at August 21, 2014 01:30 AM

Non-predetermined Model Theory. (arXiv:1408.4681v1 [math.LO])

This article introduces a new model theory, called non-predetermined model theory, where functions and relations need not be determined in advance; they are determined through time.

by <a href="http://arxiv.org/find/math/1/au:+Ramezanian_R/0/1/0/all/0/1">Rasoul Ramezanian</a> at August 21, 2014 01:30 AM

Incremental Cardinality Constraints for MaxSAT. (arXiv:1408.4628v1 [cs.LO])

Maximum Satisfiability (MaxSAT) is an optimization variant of the Boolean Satisfiability (SAT) problem. In general, MaxSAT algorithms perform a succession of SAT solver calls to reach an optimum solution making extensive use of cardinality constraints. Many of these algorithms are non-incremental in nature, i.e. at each iteration the formula is rebuilt and no knowledge is reused from one iteration to another. In this paper, we exploit the knowledge acquired across iterations using novel schemes to use cardinality constraints in an incremental fashion. We integrate these schemes with several MaxSAT algorithms. Our experimental results show a significant performance boost for these algorithms as compared to their non-incremental counterparts. These results suggest that incremental cardinality constraints could be beneficial for other constraint solving domains.

by <a href="http://arxiv.org/find/cs/1/au:+Martins_R/0/1/0/all/0/1">Ruben Martins</a>, <a href="http://arxiv.org/find/cs/1/au:+Joshi_S/0/1/0/all/0/1">Saurabh Joshi</a>, <a href="http://arxiv.org/find/cs/1/au:+Manquinho_V/0/1/0/all/0/1">Vasco Manquinho</a>, <a href="http://arxiv.org/find/cs/1/au:+Lynce_I/0/1/0/all/0/1">Ines Lynce</a> at August 21, 2014 01:30 AM

EURETILE D7.3 - Dynamic DAL benchmark coding, measurements on MPI version of DPSNN-STDP (distributed plastic spiking neural net) and improvements to other DAL codes. (arXiv:1408.4587v1 [cs.DC])

The EURETILE project required the selection and coding of a set of dedicated benchmarks. The project is about the software and hardware architecture of future many-tile distributed fault-tolerant systems. We focus on dynamic workloads characterised by heavy numerical processing requirements. The ambition is to identify common techniques that could be applied to both the Embedded Systems and HPC domains. This document is the first public deliverable of Work Package 7: Challenging Tiled Applications.

by <a href="http://arxiv.org/find/cs/1/au:+Paolucci_P/0/1/0/all/0/1">Pier Stanislao Paolucci</a>, <a href="http://arxiv.org/find/cs/1/au:+Bacivarov_I/0/1/0/all/0/1">Iuliana Bacivarov</a>, <a href="http://arxiv.org/find/cs/1/au:+Rai_D/0/1/0/all/0/1">Devendra Rai</a>, <a href="http://arxiv.org/find/cs/1/au:+Schor_L/0/1/0/all/0/1">Lars Schor</a>, <a href="http://arxiv.org/find/cs/1/au:+Thiele_L/0/1/0/all/0/1">Lothar Thiele</a>, <a href="http://arxiv.org/find/cs/1/au:+Yang_H/0/1/0/all/0/1">Hoeseok Yang</a>, <a href="http://arxiv.org/find/cs/1/au:+Pastorelli_E/0/1/0/all/0/1">Elena Pastorelli</a>, <a href="http://arxiv.org/find/cs/1/au:+Ammendola_R/0/1/0/all/0/1">Roberto Ammendola</a>, <a href="http://arxiv.org/find/cs/1/au:+Biagioni_A/0/1/0/all/0/1">Andrea Biagioni</a>, <a href="http://arxiv.org/find/cs/1/au:+Frezza_O/0/1/0/all/0/1">Ottorino Frezza</a>, <a href="http://arxiv.org/find/cs/1/au:+Cicero_F/0/1/0/all/0/1">Francesca Lo Cicero</a>, <a href="http://arxiv.org/find/cs/1/au:+Lonardo_A/0/1/0/all/0/1">Alessandro Lonardo</a>, <a href="http://arxiv.org/find/cs/1/au:+Simula_F/0/1/0/all/0/1">Francesco Simula</a>, <a href="http://arxiv.org/find/cs/1/au:+Tosoratto_L/0/1/0/all/0/1">Laura Tosoratto</a>, <a href="http://arxiv.org/find/cs/1/au:+Vicini_P/0/1/0/all/0/1">Piero Vicini</a> at August 21, 2014 01:30 AM

Experiments Validating the Effectiveness of Multi-point Wireless Energy Transmission with Carrier Shift Diversity. (arXiv:1408.4539v1 [cs.NI])

This paper presents a method to seamlessly extend the coverage of energy supply field for wireless sensor networks in order to free sensors from wires and batteries, where the multi-point scheme is employed to overcome path-loss attenuation, while the carrier shift diversity is introduced to mitigate the effect of interference between multiple wave sources. As we focus on the energy transmission part, sensor or communication schemes are out of scope of this paper. To verify the effectiveness of the proposed wireless energy transmission, this paper conducts indoor experiments in which we compare the power distribution and the coverage performance of different energy transmission schemes including conventional single-point, simple multi-point and our proposed multi-point scheme. To easily observe the effect of the standing-wave caused by multipath and interference between multiple wave sources, 3D measurements are performed in an empty room. The results of our experiments together with those of a simulation that assumes a similar antenna setting in free space environment show that the coverage of single-point and multi-point wireless energy transmission without carrier shift diversity are limited by path-loss, standing-wave created by multipath and interference between multiple wave sources. On the other hand, the proposed scheme can overcome power attenuation due to the path-loss as well as the effect of standing-wave created by multipath and interference between multiple wave sources.

by <a href="http://arxiv.org/find/cs/1/au:+Maehara_D/0/1/0/all/0/1">Daiki Maehara</a>, <a href="http://arxiv.org/find/cs/1/au:+Tran_G/0/1/0/all/0/1">Gia Khanh Tran</a>, <a href="http://arxiv.org/find/cs/1/au:+Sakaguchi_K/0/1/0/all/0/1">Kei Sakaguchi</a>, <a href="http://arxiv.org/find/cs/1/au:+Araki_K/0/1/0/all/0/1">Kiyomichi Araki</a>, <a href="http://arxiv.org/find/cs/1/au:+Furukawa_M/0/1/0/all/0/1">Minoru Furukawa</a> at August 21, 2014 01:30 AM

Laplace Functional Ordering of Point Processes in Large-scale Wireless Networks. (arXiv:1408.4528v1 [cs.IT])

Stochastic orders on point processes are partial orders which capture notions like being larger or more variable. Laplace functional ordering of point processes is a useful stochastic order for comparing spatial deployments of wireless networks. It is shown that the ordering of point processes is preserved under independent operations such as marking, thinning, clustering, superposition, and random translation. Laplace functional ordering can be used to establish comparisons of several performance metrics such as coverage probability, achievable rate, and resource allocation even when closed form expressions of such metrics are unavailable. Applications in several network scenarios are also provided where tradeoffs between coverage and interference as well as fairness and peakiness are studied. Monte-Carlo simulations are used to supplement our analytical results.

by Junghoon Lee, Cihan Tepedelenlioglu at August 21, 2014 01:30 AM

Monoids with tests and the algebra of possibly non-halting programs. (arXiv:1408.4498v1 [math.LO])

We study the algebraic theory of computable functions, which can be viewed as arising from possibly non-halting computer programs or algorithms, acting on some state space, equipped with operations of composition, {\em if-then-else} and {\em while-do} defined in terms of a Boolean algebra of conditions. It has previously been shown that there is no finite axiomatisation of algebras of partial functions under these operations alone, and this holds even if one restricts attention to transformations (representing halting programs) rather than partial functions, and omits {\em while-do} from the signature. In the halting case, there is a natural "fix", which is to allow composition of halting programs with conditions, and then the resulting algebras admit a finite axiomatisation. In the current setting such compositions are not possible, but by extending the notion of {\em if-then-else}, we are able to give finite axiomatisations of the resulting algebras of (partial) functions, with {\em while-do} in the signature if the state space is assumed finite. The axiomatisations are extended to consider the partial predicate of equality. All algebras considered turn out to be enrichments of the notion of a (one-sided) restriction semigroup.

by Marcel Jackson, Tim Stokes at August 21, 2014 01:30 AM

On Optimal Decision-Making in Ant Colonies. (arXiv:1408.4487v1 [cs.DC])

Colonies of ants can collectively choose the best of several nests, even when many of the active ants who organize the move visit only one site. Understanding such a behavior can help us design efficient distributed decision making algorithms. Marshall et al. propose a model for house-hunting in colonies of ant Temnothorax albipennis. Unfortunately, their model does not achieve optimal decision-making while laboratory experiments show that, in fact, colonies usually achieve optimality during the house-hunting process. In this paper, we argue that the model of Marshall et al. can achieve optimality by including nest size information in their mathematical model. We use lab results of Pratt et al. to re-define the differential equations of Marshall et al. Finally, we sketch our strategy for testing the optimality of the new model.

by Mahnush Movahedi, Mahdi Zamani at August 21, 2014 01:30 AM

Bounds for variables with few occurrences in conjunctive normal forms. (arXiv:1408.0629v1 [math.CO] CROSS LISTED)

We investigate connections between SAT (the propositional satisfiability problem) and combinatorics, around the minimum degree (occurrence) of variables in various forms of redundancy-free boolean conjunctive normal forms (clause-sets).

Lean clause-sets do not have non-trivial autarkies, that is, it is not possible to satisfy some clauses and leave the other clauses untouched. The deficiency of a clause-set is the difference of the number of clauses and the number of variables. We prove a sharp upper bound on the minimum variable degree in dependency on the deficiency. If a clause-set does not fulfil this upper bound, then it must have a non-trivial autarky; we show that the autarky-reduction (elimination of affected clauses) can be done in polynomial time, while it is open to find the autarky itself in polynomial time.

Then we investigate this upper bound for the special case of minimally unsatisfiable clause-sets. Here we show that the bound can be improved.

We consider precise relations, and the investigations have a certain number-theoretical flavour. We try to build a bridge from logic to combinatorics (especially to hypergraph colouring), and thus we discuss thoroughly the background and open problems, and provide many examples and explanations.

by Oliver Kullmann, Xishun Zhao at August 21, 2014 01:30 AM

Division by zero in non-involutive meadows. (arXiv:1406.2092v1 [math.RA] CROSS LISTED)

Meadows have been proposed as alternatives for fields with a purely equational axiomatization. At the basis of meadows lies the decision to make the multiplicative inverse operation total by imposing that the multiplicative inverse of zero is zero. Thus, the multiplicative inverse operation of a meadow is an involution. In this paper, we study `non-involutive meadows', i.e.\ variants of meadows in which the multiplicative inverse of zero is not zero, and pay special attention to non-involutive meadows in which the multiplicative inverse of zero is one.

by J. A. Bergstra, C. A. Middelburg at August 21, 2014 01:30 AM

Max-Sum Diversification, Monotone Submodular Functions and Dynamic Updates. (arXiv:1203.6397v2 [cs.DS] UPDATED)

Result diversification is an important aspect in web-based search, document summarization, facility location, portfolio management and other applications. Given a set of ranked results for a set of objects (e.g. web documents, facilities, etc.) with a distance between any pair, the goal is to select a subset $S$ satisfying the following three criteria: (a) the subset $S$ satisfies some constraint (e.g. bounded cardinality); (b) the subset contains results of high "quality"; and (c) the subset contains results that are "diverse" relative to the distance measure. The goal of result diversification is to produce a diversified subset while maintaining high quality as much as possible. We study a broad class of problems where the distances are a metric, where the constraint is given by independence in a matroid, where quality is determined by a monotone submodular function, and diversity is defined as the sum of distances between objects in $S$. Our problem is a generalization of the {\em max sum diversification} problem studied in \cite{GoSh09} which in turn is a generalization of the {\em max sum $p$-dispersion problem} studied extensively in location theory. It is NP-hard even with the triangle inequality. We propose two simple and natural algorithms: a greedy algorithm for a cardinality constraint and a local search algorithm for an arbitrary matroid constraint. We prove that both algorithms achieve constant approximation ratios.

by Allan Borodin, Aadhar Jain, Hyun Chul Lee, Yuli Ye at August 21, 2014 01:30 AM

/r/emacs

An emacs theme gallery

I can confirm it works on Firefox, Chrome, Safari and IE (10, 11).

You can find a detailed description of the project here: https://github.com/pawelbx/emacs-theme-gallery

submitted by pawelb

August 21, 2014 12:58 AM

Planet Theory

Tractable Pathfinding for the Stochastic On-Time Arrival Problem

Authors: Mehrdad Niknami, Samitha Samaranayake, Alexandre Bayen
Download: PDF
Abstract: We present a new technique for fast computation of the route that maximizes the probability of on-time arrival in stochastic networks, also known as the path-based stochastic on-time arrival (SOTA) problem. We utilize the solution to the policy-based SOTA problem, which is of pseudopolynomial time complexity in the time budget of the journey, as a heuristic for efficiently computing the optimal path. We also introduce Arc-Potentials, an extension to the Arc-Flags pre-processing algorithm, which improves the efficiency of the graph pre-processing and reduces the computation time. Finally, we present extensive numerical results demonstrating the effectiveness of our algorithm and observe that its running time when given the policy (which can be efficiently obtained using pre-processing) is almost always linear in the length of the optimal path for our test networks.

August 21, 2014 12:40 AM

StackOverflow

Find implicit value by abstract type member

With a type like trait A[T], finding an implicit in scope is simply implicitly[A[SomeType]]

Can this be done and, if so, how is this done where the type-parameter is replaced with an abstract type member, like in trait A { type T }?
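One sketch of an answer, using a structural refinement to pin down the abstract member (the instance aString below is an assumed example, not from the question):

trait A { type T }

object A {
  // a sample instance fixing T = String
  implicit val aString: A { type T = String } = new A { type T = String }
}

// summoning by refinement plays the role of implicitly[A[SomeType]]
val a = implicitly[A { type T = String }]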

by megri at August 21, 2014 12:14 AM

HN Daily

August 20, 2014

StackOverflow

lein - how to use a downloaded library

Let's say I find a cool clojure library like https://github.com/clojurewerkz/buffy

Now I want to use it. And it only lives on github.

How do I do this? I would love a full start to finish hello world example.

I've read about compiling it as a jar and using that, or using :dependencies in my project.clj but so far no examples have been complete, and I'm new.

For example in python I'd git clone the repo into the root of my working tree and any file could just say import buffy

by CornSmith at August 20, 2014 11:52 PM

Planet Theory

Simons-Berkeley Research Fellowships in Cryptography for Summer 2015

The Simons Institute for the Theory of Computing at UC Berkeley invites applications for Research Fellowships for the research program on Cryptography that will take place in Summer, 2015. These Fellowships are open to outstanding junior scientists (at most 6 years from PhD by 1 May, 2015).

Further details and application instructions can be found at simons.berkeley.edu/fellows-summer2015. General information about the Simons Institute can be found at simons.berkeley.edu, and about the Cryptography program at simons.berkeley.edu/programs/crypto2015.

Deadline for applications: 30 September, 2014.


by Guy Rothblum at August 20, 2014 11:36 PM

UnixOverflow

What are the differences between the socket polling mechanisms of kqueue and epoll?

The kqueue socket polling mechanism is used in FreeBSD and epoll in Linux. I would like to know what the differences between the two mechanisms are.

by jithinjustin at August 20, 2014 11:27 PM

StackOverflow

Converting a java.util.Set to java.util.List in Scala

While in a project that is a mix of Scala and Java, I need to convert a Java Set into a Java List while in the Scala portion of the code.

What are some efficient ways of doing this? I could potentially use JavaConverters to convert Java Set -> Scala Set -> Scala List -> Java List. Are there other options that would be more efficient?

Thanks
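For what it's worth, one sketch that avoids conversions altogether: java.util.ArrayList has a constructor taking any java.util.Collection, so the value never needs to cross into Scala collections at all.

import java.util.{ ArrayList, List => JList, Set => JSet }

// stays entirely on the Java side; a single O(n) copy, no wrappers
def setToList[A](s: JSet[A]): JList[A] = new ArrayList[A](s)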

by jinchung at August 20, 2014 11:20 PM

StackOverflow

Calculating percent difference between elements in a list with functional programming in Mathematica?

This stems from a related discussion, How to subtract specific elements in a list using functional programming in Mathematica?

How does one go about easily calculating percent differences between values in a list?

The linked question uses Differences to easily calculate absolute differences between successive elements in a list. However easy the built-in Differences function makes that particular problem, it still leaves the question as to how to perform different manipulations.

As I mentioned earlier, I am looking to now calculate percent differences. Given a list of elements, {value1, value2, ..., valueN}, how does one perform an operation like (value2-value1)/value1 to said list?

I've tried finding a way to use Slot or SlotSequence to isolate specific elements and then apply a custom function to them. Is this the most efficient way to do something like this (assuming that there is a way to isolate elements and perform operations on them)?

by Alec at August 20, 2014 11:09 PM

TheoryOverflow

Compute "must-pass" nodes between two nodes in a flow graph (a directed graph with a start vertex)

Suppose I have a flow graph, i.e., a directed graph with a start vertex. Let p and q be two different nodes of the graph; I would like to find the nodes that have to be passed through when a path goes through p and then q sequentially.

For example, in a flow graph (the original figure is omitted here), the must-pass node between 'b' and 'd' is 'c', and there are no must-pass nodes between 'a' and 'd'.

Given 2 nodes of a flow graph, is there a general algorithm that identifies must-pass nodes? Thank you.


by zell at August 20, 2014 11:02 PM

/r/compsci

Is it worth taking a lower level scientific computing class?

I'm a physics phd student with a bit of scientific computing experience. Over the next year I will be taking a computational fluids course which is relevant to my field of study. Now I'm planning on getting a graduate level minor in computer science too, and to do so I will need to take two more cs classes. Do you guys think I will learn anything that I wouldn't on my own from the introductory scientific computing classes? Or should I look towards higher level classes?

submitted by heart_of_gold1

August 20, 2014 10:57 PM

QuantOverflow

What is the effect of dividend yield being greater than the risk-free rate to American options pricing?

Even though dividends are discrete, literature often makes the assumption of continuous dividends (mostly in the case of indices but the individual stocks as well).

The dividend yield denoted by q is often considered as an adjustment to the risk free rate (i.e. r-q).

My question is, what happens to American Call options if r-q < 0? Is it now possible to exercise before maturity so it can no longer be calculated as a European option? Logic says you can early exercise but I am not sure.

Some footnote: in the discrete dividend case we know that we should only exercise American calls before maturity if the excess value of the option is less than the dividend. Otherwise the value of the American option will always be greater than the exercise price. This is mainly due to r > 0, and in the rare case of r < 0 American puts become equivalent to European puts.

by berkorbay at August 20, 2014 10:52 PM

StackOverflow

Nesting json - Play 2.3

I'm trying to nest json like this:

case class Foo(id:Int, a:String, b:String)

def barJson =
  Json.obj("hello" -> "hi")

def getFooJson =
  Json.obj {
    "foos" -> Json.arr {
      fooTable.list.map { foo =>
        Json.toJson(foo) + barJson
      }
    }
  }

But I'm getting this error:

type mismatch;
[error]  found   : play.api.libs.json.JsObject
[error]  required: String

What am I doing wrong here & how can I fix it? The result I'm going after is something like this:

"foos":[
    {
      "a":"hi", 
      "b":"bye", 
      "bar": {
        "hello": "bye"
      }
    }, 
    {
      "a":"hi2", 
      "b":"bye2", 
      "bar": {
        "hello": "bye"
      }
    }
]
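A sketch of one possible fix: Json.toJson returns a JsValue, which has no + method, so the compiler falls back to String concatenation (hence "found JsObject, required String"). Widening to JsObject and merging with ++ type-checks; this assumes a Writes[Foo] is in scope (derived here with Json.writes) and reuses the question's fooTable and barJson:

import play.api.libs.json._

implicit val fooWrites: Writes[Foo] = Json.writes[Foo]

def getFooJson: JsObject =
  Json.obj(
    "foos" -> JsArray(
      fooTable.list.map { foo =>
        // merge the serialized Foo with a nested "bar" object
        Json.toJson(foo).as[JsObject] ++ Json.obj("bar" -> barJson)
      }
    )
  )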

by goo at August 20, 2014 10:40 PM

/r/freebsd

What are the best practices for configuring /tmp on FreeBSD?

I noticed that FreeBSD 10.0 doesn't by default create a ramdisk for /tmp. Is there any reason why this is or why I should not do so?

Thanks for reading!

submitted by good_names_all_taken

August 20, 2014 10:33 PM

QuantOverflow

Approximation of different volatilities

Suppose I model the forward swap rate as lognormal:

$$dS_t = \sigma_{ln}S_tdW_t$$

On the other hand we could model it simply by a normal assumption:

$$dS_t = \sigma_{n}dW_t$$

I would like to know if there is a relationship between the volatilities $\sigma_n,\sigma_{ln}$. A friend told me that he saw the approximation

$$\sigma_n\approx \sigma_{ln}S_t$$

However, neither my friend nor I was able to come up with a justification for this approximation. So, is this a valid approximation? If so, why, and if not, how else can I relate the two volatilities?
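For what it's worth, one informal justification is to match the instantaneous variances of the two dynamics (a local argument, not a proof):

$$\mathrm{Var}(dS_t \mid \mathcal{F}_t) = \sigma_{ln}^2 S_t^2\,dt \quad \text{(lognormal)}, \qquad \mathrm{Var}(dS_t \mid \mathcal{F}_t) = \sigma_n^2\,dt \quad \text{(normal)},$$

so equating the two at time $t$ gives $\sigma_n \approx \sigma_{ln} S_t$. The approximation is only local: it degrades as $S_t$ drifts away from its current level.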

by user8 at August 20, 2014 10:22 PM

StackOverflow

Modelling producer-consumer semantics with typeclasses?

If some entities in a system can function as producers of data or events, and other entities can function as consumers, does it make sense to externalize these "orthogonal concerns" into Producer and Consumer typeclasses?

I can see that the Haskell Pipes library uses this approach, and I appreciate this question may look pretty basic to people coming from a Haskell background, but I would be interested in a Scala perspective and examples, because I haven't seen many.
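A minimal Scala sketch of what such typeclasses could look like (an illustration, not drawn from any existing library):

trait Producer[S, A] { def produce(s: S): Iterator[A] }
trait Consumer[S, A] { def consume(s: S, as: Iterator[A]): Unit }

// example instances: a list-backed source and a printing sink
case class ListSource(xs: List[Int])
case object PrintSink

implicit val listProducer: Producer[ListSource, Int] =
  new Producer[ListSource, Int] {
    def produce(s: ListSource): Iterator[Int] = s.xs.iterator
  }

implicit val printConsumer: Consumer[PrintSink.type, Int] =
  new Consumer[PrintSink.type, Int] {
    def consume(s: PrintSink.type, as: Iterator[Int]): Unit = as.foreach(println)
  }

// generic plumbing: any producer can feed any consumer of the same element type
def pipe[P, C, A](p: P, c: C)(implicit
    prod: Producer[P, A], cons: Consumer[C, A]): Unit =
  cons.consume(c, prod.produce(p))

pipe(ListSource(List(1, 2, 3)), PrintSink) // prints 1, 2, 3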

by James McCabe at August 20, 2014 10:18 PM

Java functional map() with threading

I have an array of many hundreds of thousands of elements and I need to run a time-consuming operation on each. I'm hesitant to use Executor due to the sheer number of elements. Is there any way I can do the computation on all the elements utilising multithreading without rolling my own solution?

by user3780104 at August 20, 2014 09:51 PM

StackOverflow

List of options: equivalent of sequence in Scala?

What is the equivalent of Haskell's sequence in Scala? I want to turn a list of options into an option of a list. It should come out as None if any of the options is None.

List(Some(1), None, Some(2)).???     --> None
List(Some(1), Some(2), Some(3)).???  --> Some(List(1, 2, 3))
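The 2.x standard library has no sequence, but a fold gives one directly; a sketch (Scalaz's sequence is the off-the-shelf alternative):

def sequence[A](xs: List[Option[A]]): Option[List[A]] =
  xs.foldRight(Option(List.empty[A])) { (opt, acc) =>
    for (x <- opt; rest <- acc) yield x :: rest // a None short-circuits the result
  }

sequence(List(Some(1), None, Some(2)))    // None
sequence(List(Some(1), Some(2), Some(3))) // Some(List(1, 2, 3))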

by luntain at August 20, 2014 09:05 PM

How to access command line parameters in build definition?

I'd like to be able to modify some build tasks in response to command line parameters. How do I (can I?) access command line parameters from the Build.scala file?
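sbt does not hand arbitrary command line arguments to Build.scala, so one common workaround is a JVM system property; a sketch for sbt 0.13, with the property name build.mode made up for illustration:

import sbt._
import Keys._

object MyBuild extends Build {
  // invoke as: sbt -Dbuild.mode=fast compile
  val fastMode: Boolean = sys.props.get("build.mode").exists(_ == "fast")

  lazy val root = Project("root", file(".")).settings(
    // adjust any task or setting based on the flag
    scalacOptions ++= (if (fastMode) Seq.empty else Seq("-deprecation"))
  )
}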

by blueberryfields at August 20, 2014 09:04 PM

CompsciOverflow

Proving number of internal nodes in the subtree rooted at any node x of Red Black trees

Reading Lemma 13.1 from the book Introduction to Algorithms, 3rd Edition

To prove: A red black tree with n nodes has height at most 2 lg(n+1)

First it attempts to prove: the subtree rooted at any node x contains at least $2^{bh(x)}-1$ internal nodes (with internal nodes referring to normal nodes, those with keys, and external nodes referring to null pointers pointing out of leaf nodes). It proceeds by considering x to be an internal node with positive height and two children. Then it says these exact words:

Each child has a black height of either bh(x) or bh(x)-1, depending on whether its color is red or black, respectively. Since the height of a child of x is less than the height of x itself, we can apply the inductive hypothesis to conclude that each child has at least $2^{bh(x)-1}-1$ internal nodes. Thus, the subtree rooted at x contains at least $(2^{bh(x)-1}-1)+(2^{bh(x)-1}-1)+1 = 2^{bh(x)}-1$ internal nodes.

I don't get this much, especially the first two sentences. It may be my poor maths.

by Mahesha999 at August 20, 2014 08:58 PM

AWS

New SSL Features for Amazon CloudFront - Session Tickets, OCSP Stapling, Perfect Forward Secrecy

You probably know that you can use Amazon CloudFront to distribute your content to users around the world with a high degree of security, low latency and high data transfer speed. CloudFront supports the use of secure HTTPS connections from the origin to the edge and from the edge to the client; if you enable this option data travels from the origin to your end users in a secure, encrypted form.

Today we are making some additional improvements to the performance and security of CloudFront's SSL implementation. These features are enabled automatically and work with the default CloudFront SSL certificate as well as custom (SNI and Dedicated IP) SSL certificates.

Performance Enhancements
We have improved the performance of SSL connections with the use of Session Tickets and OCSP Stapling. Both of these features are built in to the SSL protocol and you don't have to make any code or configuration changes in order to use them. In other words, you (and your users) are already benefitting from these improvements.

SSL Session Tickets - As part of the SSL handshake process, the client and the server exchange multiple packets as part of a negotiation ritual that results in agreement to use a particular encryption model (cipher) and certificate. This process entails multiple round trips and a fair amount of computation on both ends which adds some latency to the connection process. This process has to be repeated if the connection is broken. To avoid some of this rigmarole while keeping the connection secure, CloudFront now implements SSL Session Tickets. After the negotiation is complete, the SSL server creates an encrypted session ticket and returns it to the client. Later, the client can present this ticket to the server as an alternative to a full negotiation when resuming or restarting a connection. The ticket reminds the server of what they have already agreed to as part of an earlier SSL handshake.

OCSP Stapling - An SSL certificate must be validated before it can be used. The certificate authority (CA) for the certificate must be consulted in order to ensure that the certificate is legitimate and that it has not been revoked. In the absence of support for OCSP Stapling, the client (e.g. a web browser) will take care of this interaction with the CA, once again at the cost of some round trips and the associated latency they bring. CloudFront now implements OCSP Stapling. This approach moves the burden of domain name resolution (to locate the CA) and certificate validation over to CloudFront, where the results can be cached and then attached (stapled, hence the name) to one of the packets in the SSL handshake. The clients no longer need to handle the domain name resolution or certificate validation and benefits from the work done on the server.

Security Enhancements
We have added support for Perfect Forward Secrecy and newer SSL ciphers.

Perfect Forward Secrecy - This feature creates a new private key for each SSL session. In the event that a private key for a session was discovered, it could be used only to decode that session and no other, past or future.

Newer Ciphers - CloudFront now supports a set of advanced RSA-AES ciphers. The server and the client agree on a cipher automatically as part of the SSL handshake process.

Available Now
These new features are available now at no extra charge and you may already be using them today! See the CloudFront Pricing page for more information.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at August 20, 2014 08:49 PM

StackOverflow

Make S.M.A.R.T. hex dump readable

I have 3DM2 (3ware raid manager) installed on a server Running FreeBSD. In 3DM2 I can get a hex dump of harddisk s.m.a.r.t. data

(probably not needed in this question, but it looks like this:

0A 00 01 0F 00 75 63 53 FD 63 08 00 00 00 03 03 00 61 61 00 00 00 00 00 00 00 04 32 00 64 64 70 00 00 00 00 00 00 05 33 00 64 64 00 00 00 00 00

etc.)

Is there a tool that I can use to convert it to something user-readable/understandable?

by Lexib0y at August 20, 2014 08:46 PM

Why my MacVIM and terminal vi looks different?

I'm using both console and GUI Vim. I cannot understand why my GUI Vim shows a different color palette and different parentheses colors (Rainbow Parentheses plugin).

Console Vim is on the left (and it seems to be better):


by Anton Pleshivtsev at August 20, 2014 08:33 PM

DragonFly BSD Digest

New dhclient and other improvements

DragonFly’s dhclient will now retry failed interfaces and handle being re-run gracefully.  This is a blessing for anyone who has had a flaky link.  Matthew Dillon’s made two other improvements for booting that will also improve boot time when networks go missing.

by Justin Sherrill at August 20, 2014 08:32 PM

TheoryOverflow

Finding a point outside of each of a set of polygons in a bounded space

I know there are algorithms for finding a point inside a simple polygon. Given a set of polygons inside a rectangle (think a bunch of polygons on a computer screen), is there an efficient algorithm for finding a point that is inside the rectangle but not inside any of the specified polygons? (Note that these polygons don't overlap, but may share a common border (or part of a border).)

by Paul Reiners at August 20, 2014 08:31 PM

StackOverflow

wget in PATH does not work in FreeBSD


I have just installed FreeBSD 7 in my VMware. However, I found no wget in this OS, so I downloaded wget-1.15.tar.gz from a website and then installed wget on my OS.

Then I ran into the strange issue below.

# wget
wget: Command not found.
# whereis wget
wget: /usr/local/bin/wget
# env | grep PATH
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin
# ln -s /usr/local/bin/wget /bin
# whereis wget
wget: /bin/wget
# wget
wget: Command not found.
# /bin/wget
wget: missing URL
Usage: wget [OPTION]... [URL]...

Try `wget --help' for more options.
# 

Why can the system find wget when I type /bin/wget, but not when I type plain wget? As you can see, /bin is already in my PATH.
Thanks.

by Pudge at August 20, 2014 08:22 PM

CompsciOverflow

Help with linked list [migrated]

As I have had no formal education in computer science, please pardon me if my question seems trivial.

I was reading up on linked lists, and the only good source I could find was one from the Stanford CS library. I was hoping to implement what I learned from it and run it on my compiler. The program is to find the number of elements in a linked list of {1,2,3}. As I understood it, this is what I did:

#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node* next;
};

int main() {
    struct node* BuildOneTwoThree() {
        struct node* head = NULL;
        struct node* second = NULL;
        struct node* third = NULL;
        head = malloc(sizeof(struct node)); // allocate 3 nodes in the heap
        second = malloc(sizeof(struct node));
        third = malloc(sizeof(struct node));
        head->data = 1;      // setup first node
        head->next = second; // note: pointer assignment rule
        second->data = 2;    // setup second node
        second->next = third;
        third->data = 3;     // setup third link
        third->next = NULL;
        return head;

        int Length(struct node* head) {
            struct node* current = head;
            int count = 0;
            while (current != NULL) {
                count++;
                current = current->next;
            }
            printf("%d", count);
            return count;
        }
    }
    return 0;
}

It returns nothing. I don't understand where I made a mistake; what am I doing wrong?

by overflow at August 20, 2014 08:17 PM

Planet Clojure

Using A ref As A Mutable Global Flag

I have written a new, small Clojure program to compare this month’s and last month’s insurance report. This is similar to a project I did a year ago, except it involves one report our personnel department gets once a month, not two different reports. 

The program involves using Clojure’s jdbc interface, and is very much a typical database report that could have been written in Perl, or if the database had been Informix, in Informix 4GL. There’s nothing special about the program, except the code base was already in Clojure, and I wanted to keep the code base the same.

The only roadblock I ran into was setting status from the result of certain difference tests between last month and this month, like whether a record was missing this month or last month, whether the insurance product or premium had changed, or whether someone had gone from active to retired status.

I tried figuring out a way to have a let binding contain return status from these different tests, so that these status values could be written into the report. After a while, I settled on a ref and dosync to set one global flag, so that later on in the program, had there been no errors, an appropriate message could be written to the file.

I don’t know whether I crossed into the mutable dark force, but, for one, I’m not convinced that carefully used mutable variables are a bad thing, especially, if you’ve designed the rest of your program not to take these shortcuts, because of coding laziness. Can you tell I’ve absorbed guilt from Clojure’s being immutable?


by Octopusgrabbus at August 20, 2014 08:11 PM

StackOverflow

jackson-module-scala: how to read subtype?

With this jackson-module-scala wrapper

object Json {
  private val ma = new ObjectMapper with ScalaObjectMapper
  ma.registerModule(DefaultScalaModule)
  ma.setSerializationInclusion(Include.NON_NULL)

  def jRead[T: Manifest](value: String): T = ma.readValue[T](value)
  def jWrite(value: Any) = ma.writer.writeValueAsString(value)
  def jNode(value: String) = ma.readTree(value)
}

I try to read a subtype (it is just a simplified use case without real work):

object TestJTrait extends App {
  trait T1
  object TestJ { def apply[X <: T1: Manifest](s: String): X = jRead[X](s) }
  case class C1(i: Int) extends T1
  TestJ(jWrite(C1(42)))
}

But this attempt results in the error:

Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Can not instantiate abstract type [simple type, class scala.runtime.Nothing$] (need to add/enable type information?)
 at [Source: {"i":42}; line: 1, column: 2]
    at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
    at com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:73)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:124)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3051)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2160)
    at com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper$class.readValue(ScalaObjectMapper.scala:180)

Can anybody please suggest a workaround to get the intended behaviour?
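The scala.runtime.Nothing$ in the trace hints that X is being inferred as Nothing at the call site, since nothing there constrains it; one sketch of a workaround is to pass the type argument explicitly:

// pin X so the Manifest captures C1 instead of Nothing
TestJ[C1](jWrite(C1(42)))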

by Andrew Gaydenko at August 20, 2014 08:06 PM

Interaction with spawned process in Scala

I can use Python to control gnuplot to print out plots in an interactive way as follows:

p = Popen(["/usr/local/bin/gnuplot"], shell=False, stdin=PIPE, stdout=PIPE)
p.stdin.write(r'set terminal gif;')
...
out, err = p.communicate()

How can I do the same thing with Scala? I have some skeleton code, but I'm not sure exactly how to fill in the missing gaps.

val gnuplot = "/usr/local/bin/gnuplot"
val pb = Process(gnuplot)
val pio = new ProcessIO(_ => (),
                        stdout => ...,
                        _ => ())
pb.run(pio)
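One way to fill in the gaps, sketched to mirror the Python version (ProcessIO's three handlers receive the process's stdin, stdout and stderr, in that order):

import scala.sys.process._
import java.io.PrintWriter

val gnuplot = "/usr/local/bin/gnuplot"

val pio = new ProcessIO(
  stdin => {
    val w = new PrintWriter(stdin)
    w.println("set terminal gif") // the p.stdin.write(...) equivalent
    w.close()                     // closing stdin signals EOF, like communicate()
  },
  stdout => {
    scala.io.Source.fromInputStream(stdout).getLines().foreach(println)
    stdout.close()
  },
  stderr => stderr.close()
)

val proc = Process(gnuplot).run(pio)
proc.exitValue() // block until gnuplot exits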

by prosseek at August 20, 2014 08:05 PM

CompsciOverflow

Linear-time algorithm for solving a set of inequalites

We have a set $A$ of $n$ distinct numbers. We want to determine for all $x,y,z \in A$ whether the following relations hold.

$$ x+y>z\qquad x+z>y\qquad y+z>x $$

Could anyone describe an $O(n)$ algorithm?

by user3661613 at August 20, 2014 08:01 PM

StackOverflow

Ansible IP address variable - host part

I have the following problem:

I'm writing a playbook for setting an IP address on the command line in Ansible. Let's say 10.10.10.x. I need to get the last part of my public IP, let's say x.x.x.15, and assign it to the private one: 10.10.10.15. Is there a variable for this? Can I capture one? I've tried to use something like:

shell: "ip addr show | grep inet ...." 
register: host_ip

But it is not what I need. It works, but only for a limited number of servers.

The whole thing should be like that:

"shell: /dir/script --options 10.10.10.{{ var }}"

and {{ var }} should be the host part of the public IP.

by plamer at August 20, 2014 07:58 PM

for comprehension from flatMap and future to future

I would like something like runProgram2, but currently that part does not compile. Is there a way to write it somewhat like runProgram2 but so that it compiles?

package transformer

import scala.concurrent.{ExecutionContext, Promise, Future}
import ExecutionContext.Implicits.global
import java.util.concurrent.TimeUnit
import scala.concurrent.duration.Duration

object TestingForComprehensions2 {

    def main(args: Array[String]) = {
      val future1: Future[String] = runMyProgram()
      future1.onSuccess {
        case r:String =>       System.out.println("result="+r)
      }


      val future2: Future[String] = runMyProgram2()
      future2.onSuccess {
        case r:String =>       System.out.println("result="+r)
      }

      System.out.println("waiting")
      Thread.sleep(600000)
    }

    def runMyProgram() : Future[String] = {
      val future = serviceCall()
      val middle = serviceCallWrap(future)
      val future2 = middle.flatMap(serviceCall2)
      val future3 = future2.map(processAllReturnCodes)
      future3
    }

    def runMyProgram2() : Future[String] = {
      for {
        result1 <- serviceCall()
        middle = serviceCallWrap(result1)
        result2 <-  serviceCall2(middle)
      } yield processAllReturnCodes(result2)
    }

    def processAllReturnCodes(theMsg: String) : String = {
      "dean"+theMsg
    }

    def serviceCall() : Future[Int] = {
      val promise = Promise.successful(5)
      promise.future
    }

    def serviceCallWrap(f:Future[Int]) : Future[Int] = {
      f
    }

    def serviceCall2(count:Int) : Future[String] = {
      val promise = Promise.successful("hithere"+count)
      promise.future
    }

}
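The snag is the middle binding: result1 is an Int, while serviceCallWrap expects a Future[Int] (and serviceCall2 then expects an Int). One sketch that keeps the types aligned is to wrap the future itself:

def runMyProgram2() : Future[String] = {
  for {
    result1 <- serviceCallWrap(serviceCall()) // Future[Int], so result1 is an Int
    result2 <- serviceCall2(result1)          // Int in, Future[String] out
  } yield processAllReturnCodes(result2)
}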

by Dean Hiller at August 20, 2014 07:50 PM

Planet Clojure

Validateur 2.3.1 is released

TL;DR

Validateur is a functional validations library inspired by Ruby’s ActiveModel. Validateur 2.3 is a minor feature release.

Changes Between 2.2.0 and 2.3.0

unnest

unnest is a helper function useful for building UIs that validate on the fly. Here’s a basic example. Let’s write some code to render a UI off of a nested map and build up live validation for that map off of component validators. Here are the components:

(def profile-validator
  (vr/validation-set
   (vr/presence-of #{:first-name :last-name})))

(def secret-validator
  (vr/validation-set
   (vr/length-of :password :within (range 5 15))
   (vr/length-of :phone :is 10)))

And then the composed, user account validator:

(def account-validator
  (vr/compose-sets
   (vr/nested :secrets secret-validator)
   (vr/nested :profile profile-validator)))

Next are the “rendering” functions. Imagine that these are input components responsible for validating their input and displaying errors when present. Our “render” phase will just print.

(defn render-profile [profile errors]
  (prn "Profile: " profile)
  (prn "Profile Errors: " errors))

(defn render-secrets [secrets errors]
  (prn "Secrets: " secrets)
  (prn "Secret Errors: " errors))

(defn submit-button
  "Renders a submit button that can only submit when no errors are
  present."
  [errors]
  (prn "All Errors: " errors))

The render-account function renders all subcomponents, performs global validation and routes the errors and data where each needs to go:

(defn render-account
  "This function accepts an account object, validates the entire thing
  using the subvalidators defined above, then uses unnested to pull
  out specific errors for each component.

  The entire validation error map is passed into submit-button,
  which might only allow a server POST on click of the full error map
  is empty."
  [{:keys [secrets profile] :as account}]
  (let [errors (account-validator account)]
    (render-profile profile (vr/unnest :profile errors))
    (render-secrets secrets (vr/unnest :secrets errors))
    (submit-button errors)))

Let’s see this function in action. Calling render-account with an invalid map triggers a render that shows off a bunch of errors:

(render-account
   {:secrets {:password "face"
              :phone "703555555512323"}
    :profile {:first-name "Queequeg"}})


"Profile: " {:first-name "Queequeg"}
"Errors: " {[:last-name] #{"can't be blank"}}
"Secrets: " {:password "face", :phone "703555555512323"}
"Errors: " {[:phone] #{"must be 10 characters long"}, [:password] #{"must be from 5 to 14 characters long"}}
"All Errors: " {[:profile :last-name] #{"can't be blank"}, [:secrets :phone] #{"must be 10 characters long"}, [:secrets :password] #{"must be from 5 to 14 characters long"}}

Calling render-account with a valid map prints only the data:

(render-account
 {:secrets {:password "faceknuckle"
            :phone "7035555555"}
  :profile {:first-name "Queequeg"
            :last-name "Kokovoko"}})

"Profile: " {:last-name "Kokovoko", :first-name "Queequeg"}
"Errors: " {}
"Secrets: " {:password "faceknuckle", :phone "7035555555"}
"Errors: " {}
"All Errors: " {}

nest

nest is a helper function that makes it easy to validate dynamic data that’s not part of the actual map you pass into the validator. For example, say you wanted to validate all user accounts, then build up a map of userid –> validation errors:

(for [account (get-all-accounts)]
  (vr/nest (:id account)
           (account-validator account)))

{[100 :profile :first-name] "can't be blank"
 [200 :profile :last-name] "can't be blank"
 ;; etc
 }

Full Change Log

Validateur change log is available on GitHub.

Validateur is a ClojureWerkz Project

Validateur is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Monger, a Clojure MongoDB client for a more civilized age
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • Cassaforte, a Clojure Cassandra client built around CQL
  • Neocons, a client for the Neo4J REST API
  • Welle, a Riak client with batteries included
  • Quartzite, a powerful scheduling library

and several others. If you like Validateur, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

About The Author

Michael on behalf of the ClojureWerkz Team

by The ClojureWerkz Team at August 20, 2014 07:49 PM

Planet Clojure

Mailer 1.2.0 is Released

TL;DR

ClojureWerkz Mailer is an ActionMailer-inspired mailer library for Clojure.

1.2.0 is a minor feature release.

Changes Between 1.1.0 and 1.2.0

Improved Template Rendering Exceptions

Template rendering exceptions now have a better error message.

Contributed by Lei.

Mailer is a ClojureWerkz Project

Mailer is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • Cassaforte, a Clojure client for Cassandra built around CQL
  • Monger, a Clojure MongoDB client for a more civilized age
  • Welle, a Riak client with batteries included
  • Neocons, a Clojure client for the Neo4J REST API

and several others. If you like Mailer, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

About the Author

Michael on behalf of the ClojureWerkz Team.

by The ClojureWerkz Team at August 20, 2014 07:48 PM

StackOverflow

Accessing the entire Ansible Inventory from group playbook

I am trying to get a list of the IP's of all the hosts in my inventory file from a playbook that only runs in a single group.

Assume the following inventory file:

[dbservers]
db1.example.com
db2.example.com

[webservers]
www1.example.com
www2.example.com

And the playbook:

---

- hosts: dbservers
  roles:
  - dosomething

And the dosomething role:

- name: print all host ips
  template: src=hosts.j2 dest=/tmp/hosts.txt

And the hosts.j2 template:

{% for host in hostvars %}

{{ hostvars[host].ansible_eth0.ipv4.address }}

{% endfor %}

Problem:

When running this, I only ever get the dbserver ip's listed, not all ip's

Question:

How can I gain access to the entire inventory from within this playbook? Changing hosts to all in the playbook works, but then the dosomething role also runs on all hosts, which is not what I want. I only want the list on the dbservers.

Appreciated.

by Deonvdv at August 20, 2014 07:46 PM

Twitter

Fighting spam with BotMaker

Spam on Twitter is different from traditional spam primarily because of two aspects of our platform: Twitter exposes developer APIs to make it easy to interact with the platform, and real-time content is fundamental to our users' experience.

These constraints mean that spammers know (almost) everything Twitter’s anti-spam systems know through the APIs, and anti-spam systems must avoid adding latency to user-visible operations. These operating conditions are a stark contrast to the constraints placed upon more traditional systems, like email, where data is private and adding latency of tens of seconds goes unnoticed.

So, to fight spam on Twitter, we built BotMaker, a system that we designed and implemented from the ground up that forms a solid foundation for our principled defense against unsolicited content. The system handles billions of events every day in production, and we have seen a 40% reduction in key spam metrics since launching BotMaker.

In this post we introduce BotMaker and discuss our overall architecture. All of the examples in this post are used to illustrate the use of BotMaker, not actual rules running in production.

BotMaker architecture

Goals, challenges and BotMaker overview
The goal of any anti-spam system is to reduce spam that the user sees while having nearly zero false positives. Three key principles guided our design of BotMaker:

  • Prevent spam content from being created. By making it as hard as possible to create spam, we reduce the amount of spam the user sees.
  • Reduce the amount of time spam is visible on Twitter. For the spam content that does get through, we try to clean it up as soon as possible.
  • Reduce the reaction time to new spam attacks. Spam evolves constantly. Spammers respond to the system defenses and the cycle never stops. In order to be effective, we have to be able to collect data, and evaluate and deploy rules and models quickly.

BotMaker achieves these goals by receiving events from Twitter’s distributed systems, inspecting the data according to a set of rules, and then acting accordingly. BotMaker rules, or bots as they are known internally, are decomposed into two parts: conditions for deciding whether or not to act on an event, and actions that dictate what the caller should do with this particular event. For example, a simple rule for denying any Tweets that contain a spam url would be:

Condition:
HasSpamUrl(GetUrls(tweetText))

Action:
Deny()

The net effect of this rule is that BotMaker will deny any Tweets that match this condition.

The main challenges in supporting this type of system are evaluating rules with low enough latency that they can run on the write path for Twitter’s main features (i.e., Tweets, Retweets, favorites, follows and messages), supporting computationally intense machine learning based rules, and providing Twitter engineers with the ability to modify and create new rules instantaneously.

For the remainder of this blog post, we discuss how we solve these challenges.

When we run BotMaker

The ideal spam defense would detect spam at the time of creation, but in practice this is difficult due to the latency requirements of Twitter. We have a combination of systems (see figure above) that detect spam at various stages.

  1. Real time (Scarecrow): Scarecrow detects spam in real time and prevents spam content from getting into the system, and it must run with low latency. Being in the synchronous path of all actions enables Scarecrow to deny writes and to challenge suspicious actions with countermeasures like captchas.
  2. Near real time (Sniper): For the spam that gets through Scarecrow’s real time checks, Sniper continuously classifies users and content off the write path. Some machine learning models cannot be evaluated in real time due to the nature of the features that they depend on. These models get evaluated in Sniper. Since Sniper is asynchronous, we can also afford to lookup features that have high latency.
  3. Periodic jobs: Models that have to look at user behavior over extended periods of time and extract features from massive amounts of data can be run periodically in offline jobs since latency is not a constraint. While we do use offline jobs for models that need data over a large time window, doing all spam detection by periodically running offline jobs is neither scalable nor effective.

The BotMaker rule language

In addition to when BotMaker runs, we have put considerable time into designing an intuitive and powerful interface for guiding how BotMaker runs. Specifically: our BotMaker language is type safe, all data structures are immutable, all functions are pure except for a few well marked functions for storing data atomically, and our runtime supports common functional programming idioms. Some of the language highlights include:

  • Human readable syntax.
  • Functions that can be combined to compose complex derived functions.
  • New rules can be added without any code changes or recompilation.
  • Edits to production rules get deployed in seconds.

Sample bot

Here is a bot that demonstrates some of the above features. Let's say we want to get all users that are receiving blocks due to mentions that they have posted in the last 24 hours.
Here is what the rule would look like:

Condition:

  Count(
    Intersection(
      UsersBlocking(spammerId, 1day),
      UsersMentionedBy(spammerId, 1day)
    )
  ) >= 1

Actions:

  Record(spammerId)

UsersBlocking and UsersMentionedBy are functions that return lists of users, which the bot intersects and gets a count of the result. If the count is at least one, the user is recorded for analysis.

Impact and lessons learned

This figure shows the amount of spam we saw on Twitter before enabling spam checks on the write path for Twitter events. This graph spans 30 days with time on the x-axis and spam volume on the y-axis. After turning on spam checking on the write paths, we saw a 55% drop in spam on the system as a direct result of preventing spam content from being written.

BotMaker has also helped us reduce our response time to spam attacks significantly. Before BotMaker, it took hours or days to make a code change, test and deploy, whereas using BotMaker it takes minutes to react. This faster reaction time has dramatically improved developer and operational efficiency, and it has allowed us to rapidly iterate and refine our rules and models, thus reducing the amount of spam on Twitter.

Once we launched BotMaker and started using it to fight spam, we saw a 40% reduction in a metric that we use to track spam.

Conclusion
BotMaker has ushered in a new era of fighting spam at Twitter. With BotMaker, Twitter engineers now have the ability to create new models and rules quickly that can prevent spam before it even enters the system. We designed BotMaker to handle the stringent latency requirements of Twitter’s real-time products, while still supporting more computationally intensive spam rules.

BotMaker is already being used in production at Twitter as our main spam-fighting engine. Because of the success we have had handling the massive load of events, and the ease of writing new rules that hit production systems immediately, other groups at Twitter have started using BotMaker for non-spam purposes. BotMaker acts as a fundamental interposition layer in our distributed system. Moving forward, the principles learned from BotMaker can help guide the design and implementation of systems responsible for managing, maintaining and protecting the distributed systems of today and the future.

August 20, 2014 07:44 PM

StackOverflow

How do I run SBT from within Eclipse?

So far I've been running IntelliJ IDEA Community Edition for my Scala projects, but as my projects are expanding in complexity, I stumble upon more and more roadblocks with the IDE.

Like, for example, the simple fact that IDEA doesn't allow for web development or Java EE development whatsoever, which means using the Play Framework or TomEE in Community Edition leads to nothing but dead ends and frustration.

The only reason I switched to IDEA in the first place is its excellent plugin system, allowing me to run SBT seamlessly as the primary Scala compiler and library-downloading tool.

Searching around on Google, however, I can only seem to find mentions of the Eclipse plugin for sbt that makes an sbt project Eclipse-friendly, which is the exact opposite of what I'm really looking for.

I'm not willing to spend €89 per year for a student licence after all the pain it's put me through so far...

So my question is; is there a plugin for Eclipse that allows me to use SBT the same way as in IDEA? Or am I forced to go through the console?

by Electric Coffee at August 20, 2014 07:37 PM

/r/compsci

Can you recommend some books on the following topics?

Hi guys, I'm a long time lurker and 4th year compsci major. I want to ask you guys if you know any good books on the following topics:

Neural Networks

Computer Vision

Image Processing

Preferably books that help introduce the topic for someone who has very little knowledge of them. Also, what's a good resource to help learn MATLAB? I know I should have probably picked it up by now, but better late than never.

Thank you all so much!

submitted by TheCiN

August 20, 2014 07:17 PM

StackOverflow

more readable scala pattern for matching success

I find the Success case is frequently buried in a match of many errors and one success. Is there a way to write this more cleanly, such that success stands out, perhaps by having all errors in a partial function? Or perhaps there is another way to write it that is just cleaner. I am in general looking for other ideas/solutions.

results.responseCode match {
  case Success =>
    // TODO make this less smelly. can results.results be None?
    val searchResults = results.results.get.results
    SomeService.getUsersFromThriftResults(
      userRepo,
      searchResults,
      Seq(WithCounts)) map { userResults =>
      val renderableStatuses = getStatuses(searchResults, userResults.userMap)
      new JsonAction(transformedQuery, renderableStatuses)
    }
  case ErrorInvalidQuery =>
    throw new SomeBadRequestException("invalid query")
  case ErrorOverCapacity |
       ErrorTimeout =>
    throw new SomeServiceUnavailableException("service unavailable")
  //TODO: take care of these errors differently
  //          case ErrorInvalidWorkflow |
  //               ErrorBackendFailure |
  //               ErrorEventNotFound |
  //               PartialSuccess |
  case _ =>
    throw new SomeApplicationException("internal server error")
}
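One sketch along the lines the question hints at: gather all failure cases into a partial function, so the match spells out only the happy path (handleSuccess is a hypothetical helper standing in for the success branch above):

val failures: PartialFunction[Any, Nothing] = {
  case ErrorInvalidQuery =>
    throw new SomeBadRequestException("invalid query")
  case ErrorOverCapacity | ErrorTimeout =>
    throw new SomeServiceUnavailableException("service unavailable")
  case _ =>
    throw new SomeApplicationException("internal server error")
}

results.responseCode match {
  case Success => handleSuccess(results)
  case error   => failures(error)
}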

by Dean Hiller at August 20, 2014 07:13 PM

What's the type of `{ "abc" }`

Scala code:

{ "abc" }

What is its type? Is it => String, or just String?

by Freewind at August 20, 2014 07:11 PM

StackOverflow

Apply something to certain value of a hash, return the whole hash

What is the proper way of doing the following in Ruby, in a functional and immutable way?

a = { price: 100, foo: :bar, bar: :baz }

def reduced_price(row)
  row.merge(price: row[:price] / 2)
end

reduced_price(a) # => { price: 50, foo: :bar, bar: :baz }

I don't want to mutate anything, and I don't like the construction row.merge(key: row[:key]) because it repeats the :key and refers to row twice. If there were something like:

{a: 1, b: 2}.apply_to_key(:a) { |x| x * 10 } # => {a: 10, b: 2}

it would be great.

To sum up, I want a method that, when given a key, updates a single value of a hash by that key using the previous value, and then returns the whole hash.

by Hnatt at August 20, 2014 07:01 PM

Any popular and good Scala library for Apache Cassandra?

Actually, I know a good Java high-level API and ORM for Apache Cassandra: Hector. But I can't find any good native solution for Scala. Does anybody know of an active project with good quality and ORM support for SuperColumns?

by abdmob at August 20, 2014 06:55 PM

Convert expression to polish notation in Scala

I would like to convert an expression such as: a.meth(b) to a function of type (A, B) => C that performs that exact computation.

My best attempt so far was along these lines:

def polish[A, B, C](symb: String): (A, B) => C = { (a, b) =>
// reflectively check if "symb" is a method defined on a
// if so, reflectively call symb, passing b
}

And then use it like this:

def flip[A, B, C](f : (A, B) => C): (B, A) => C = {(b, a) => f(a,b)}
val op = flip(polish("::"))
def reverse[A](l: List[A]): List[A] = l reduceLeft op

As you can pretty much see, it is quite ugly and you have to do a lot of type checking "manually".

Is there an alternative?
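A sketch of the reflective body, for illustration only: symbolic Scala names such as :: are encoded in bytecode, so NameTransformer is needed for the lookup, and erasure forces the casts:

import scala.reflect.NameTransformer

def polish[A, B, C](symb: String): (A, B) => C = { (a, b) =>
  val name = NameTransformer.encode(symb) // "::" becomes "$colon$colon"
  val method = a.getClass.getMethods
    .find(m => m.getName == name && m.getParameterTypes.length == 1)
    .getOrElse(sys.error("no unary method " + symb + " on " + a.getClass))
  method.invoke(a, b.asInstanceOf[AnyRef]).asInstanceOf[C]
}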

by Radu Stoenescu at August 20, 2014 06:52 PM

for comprehension with futures in scala translation to flatMap

I have been looking at this How are Scala Futures chained together with flatMap? and the corresponding article on translating for comprehensions. I am slowly adding stuff to my for comprehension and am stuck, as I guess the code I thought it would translate to is not correct.

Here I have runProgram and runProgram2, which I thought would be equivalent, but they are not, because runProgram2 does not compile. Can someone explain the equivalent of this for comprehension...

NOTE: yes, I know that future.flatMap is typically for collapsing Future[Future[String]], but this is a trimmed-down version of my file (perhaps I trimmed it down too far).

def main(args: Array[String]) = {
  val future1: Future[String] = runMyProgram()

  //val future2: Future[String] = runMyProgram2()

}

def runMyProgram() : Future[String] = {
  val future = serviceCall()
  future.flatMap(processAllReturnCodes)
}

//    def runMyProgram2() : Future[String] = {
//      val future = serviceCall()
//      for {
//        result <-  future
//      } yield processAllReturnCodes(result)
//    }

def processAllReturnCodes(count: Int) : Future[String] = {

  val promise = Promise.successful("done")
  promise.future
}

def serviceCall() : Future[Int] = {
  val promise = Promise.successful(5)
  promise.future
}

def serviceCall2() : Future[String] = {
  val promise = Promise.successful("hithere")
  promise.future
}
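The commented version fails because yield wraps its expression in map, so yielding processAllReturnCodes(result), itself a Future[String], produces a Future[Future[String]]. A second generator desugars to a flatMap like the one in runMyProgram; a sketch:

def runMyProgram2() : Future[String] = {
  val future = serviceCall()
  for {
    count  <- future                       // Future[Int]
    result <- processAllReturnCodes(count) // flatMap collapses the nesting
  } yield result
}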

by Dean Hiller at August 20, 2014 06:49 PM

TheoryOverflow

Dynamic Programming with two optimization goals

I am working on the problem of distributed database query planning. Existing work [1] uses dynamic programming to search the potential query plan space and find the one with minimal cost. However, I am interested in an additional value (e.g. security risk) of a query plan, and would like to minimize this value also.

My current idea is to first minimize the cost goal to a certain extent, and then minimize the second goal. More specifically, we can let the user specify a Delta value, and then the dynamic programming algorithm can find a set of near optimal candidates whose costs fall in [optimal cost, optimal cost + Delta]. Then, we can minimize the second goal in this set.

My question is, is there any algorithmic work that offers similar capabilities? Or is there any method to modify a dynamic programming algorithm so that it can consider two goals simultaneously? Other relevant information will also be appreciated. Thank you!

References:

[1] Kossmann, Donald, and Konrad Stocker. "Iterative dynamic programming: a new class of query optimization algorithms." ACM Transactions on Database Systems (TODS) 25.1 (2000): 43-82.

by ZillGate at August 20, 2014 06:44 PM

Planet Theory

The spectrum of the infinite tree

The spectral norm of the infinite {d}-regular tree is {2 \sqrt {d-1}}. We will see what this means and how to prove it.

When talking about the expansion of random graphs, about the construction of Ramanujan expanders, as well as about sparsifiers, community detection, and several other problems, the number {2 \sqrt{d-1}} comes up often, where {d} is the degree of the graph, for reasons that tend to be related to properties of the infinite {d}-regular tree.

If {G} is a {d}-regular graph, {A} is its adjacency matrix and

\displaystyle  d = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq -d

are the eigenvalues of {A} in non-increasing order, then a measure of the expansion of {G} is the parameter

\displaystyle  \sigma_2 = \max_{i=2,\ldots,n} \ | \lambda_i | = \max \{ | \lambda_2 | , | \lambda_n| \}

which is the second largest singular value of {A}. One way to think about the above parameter is that the “best possible {d}-regular expander,” if we allow weights, is the graph whose adjacency matrix is {\frac dn J}, where {J} is the matrix with ones in every entry. The parameter {\sigma_2} measures the distance between {A} and {\frac dn J} according to the spectral norm. (The spectral norm is a good one to consider when talking about graphs, because it bounds the cut norm, it is efficiently computable, and so on.)

If {G} is {d}-regular and bipartite, then {\lambda_n = -d} and an appropriate measure of expansion is {\max_{i=2,\ldots,n-1} |\lambda_i|}, which is just {\lambda_2}.

Nilli proved that, in a {d}-regular graph, {\lambda_2 \geq 2 \sqrt{d-1} - O( \sqrt d / diam(G))}, where {diam(G) \geq \log_d n} is the diameter of {G}. Nilli’s construction is a variant of the way you prove that the spectral norm of the infinite tree is at least {2\sqrt {d-1}}. Lubotzky, Phillips and Sarnak call a {d}-regular graph Ramanujan if {\sigma_2 \leq 2 \sqrt {d-1}} (or if {\lambda_2 \leq 2 \sqrt{d-1}} in the case of bipartite graphs). So Ramanujan graphs are the best possible expanders from the point of view of the spectral definition.

Lubotzky, Phillips and Sarnak gave an efficient construction of an infinite family of {d}-regular Ramanujan graphs when {d-1} is prime and {d-1 \equiv 1 \pmod 4}, and this has been generalized by Morgenstern to all {d} such that {d-1} is a prime power.

Marcus, Spielman and Srivastava show the existence of infinitely many Ramanujan bipartite expanders for every degree. (For degree {d}, their construction gives graphs with any number of nodes of the form {d \cdot 2^k}.) Their proof uses the fact that the spectral norm of the infinite {d}-regular tree is at most {2 \sqrt {d-1}}.

Friedman, in an outstanding tour de force, has given a 128-page proof of a conjecture of Alon, that a random {d}-regular graph will satisfy, with high probability, {\sigma_2 \leq 2 \sqrt {d -1} + o_n(1)}. His paper is long, in part, because everything is explained very well. I am taking the analysis of the spectral norm of the infinite tree presented in this post from his paper.

(Notice that it is still an open question to construct, or even to prove the existence of, an infinite family of non-bipartite Ramanujan graphs of degree, say, {d=7}, or to show, for any degree at all (except 2!), that Ramanujan graphs of that degree exist for all numbers of vertices in a dense set of integers.)

Now let’s talk about the spectrum of the infinite tree. First of all, although finite trees are terrible expanders, it is intuitive that an infinite tree is an excellent expander. For an infinite graph {G=(V,E)} with a countable number of vertices and finite degree we can define the (non-normalized) expansion as

\displaystyle  \phi(G) = \inf_{S \ \rm finite} \ \ \frac {E(S,V-S)}{ |S|}

In a {d}-regular infinite tree, the expansion is {d-2}, because, if we take any set {S} of {n} vertices, there are at most {n-1} edges in the subgraph induced by {S} (because it's a forest with {n} vertices), and each such edge accounts for two of the {dn} edge-endpoints in {S}, so there are at least {dn - 2(n-1)} edges leaving {S}. It is easy to see that every other infinite {d}-regular graph has expansion at most {d-2}: for every {n}, we can run a DFS for {n} steps starting from an arbitrary vertex {v} until we either discover {n} vertices reachable from {v} (including {v}), or we find a connected component of size {\leq n}. In the former case, let {S} be the {n} vertices found by the DFS: the set {S} induces a connected subgraph, so there are {\geq n-1} edges inside {S} and {\leq dn - 2(n-1)} edges leaving {S}. In the latter case, the expansion is zero.

What about the spectral expansion of the infinite tree? If {G=(V,E)} is a finite {d}-regular graph, then the largest eigenvalue of its adjacency matrix is {d}, and the corresponding eigenvector is the vector {(1,\ldots,1)}. By the spectral theorem we have

\displaystyle  \sigma_2 = \max_{x \in {\mathbb R}^V \ : \ \sum_v x_v = 0} \ \ \frac { | 2 \sum_{\{ u,v\} \in E} x_u x_v | } {\sum_v x_v^2}

When we have an infinite {d}-regular graph, the all-1 vector is not an eigenvector any more (because it has infinite norm), and the relevant quantity becomes the spectral norm

\displaystyle  \sigma (G) = \sup_{x \in {\mathbb R}^V : \ \sum_v x_v^2 < \infty } \ \ \frac { | 2 \sum_{\{ u,v\} \in E} x_u x_v | } {\sum_v x_v^2}

We will not need it, but I should remark that there is a Cheeger inequality for infinite graphs, which is actually slightly easier to prove than for finite graphs.

Since we want to prove that, for the infinite {d}-regular tree {T= ( V,E)} we have {\sigma \leq 2 \sqrt { d-1}}, we need to argue that for every vector {x \in {\mathbb R}^V} such that {\sum_v x_v^2 < \infty } we have

\displaystyle  2| \sum_{\{ u,v \} \in E} x_u x_v | \leq 2 \sqrt { d -1 } \cdot \sum_v x_v^2 \ \ \ \ \ (1)

If we fix a root {r}, so that we can talk about the parent and the children of each node, and if we call {C_u} the set of children of {v}, then we want to show

\displaystyle  2 | \sum_u \sum_{v\in C_u} x_u x_v | \leq 2 \sqrt { d -1 } \cdot \sum_v x_v^2 \ \ \ \ \ (2)

Since we have an inequality with a summation on the left-hand-side and a square root on the right-hand-side, this looks like a job for Cauchy-Schwarz! The trick to get a one-line proof is to use the right way of breaking things up. One thing that comes to mind is to use {2|x_u x_v| \leq x_u^2 + x_v^2}, but this does not go anywhere: it would give an upper bound of {d \sum_v x_v^2}. This means that the bound {2|x_u x_v| \leq x_u^2 + x_v^2} is often loose, which must happen because {x_u} and {x_v} are often different in magnitude. To see why this should be the case, note that, considering that there are {d\cdot (d-1)^{t-1}} vertices at distance {t} from the root, the typical vertex {v} at distance {t} from the root satisfies {x_v^2 = O(1/(d-1)^t)}, and so we may think that, if {v} is a child of {u}, it should often be the case that {x^2_v} is about a factor of {(d-1)} smaller than {x_u^2}. If that were the case, then a tighter form of Cauchy-Schwarz would be {2 |x_u| \cdot |x_v| \leq \frac 1 {\sqrt {d-1}} x_u^2 + \sqrt{d-1} x_v^2}. Let us try that bound:

\displaystyle  2 \left| \sum_u \sum_{v\in C_u} x_u x_v \right| \leq \sum_u \sum_{v \in C_u} \frac 1 {\sqrt {d-1}} x_u^2 + \sqrt{d-1} x_v^2

\displaystyle  = \frac d {\sqrt {d-1}} x_r^2 + \sum_{v \neq r} \left( \frac 1 {\sqrt {d-1}} \cdot (d-1) + \sqrt{d-1} \right) x_v^2

\displaystyle  \leq 2 \sum_v \sqrt{d-1} x_v^2

which works! To justify the identity in the middle line, note that, in the sum, the root appears {d} times as a parent and never as a child, and every other vertex appears once as a child and {d-1} times as a parent.

This proves that the spectral norm of the infinite {d}-regular tree is at most {2 \sqrt {d-1}}. To prove that it is at least this much, for every {\epsilon >0} we must find a vector {x} such that

\displaystyle  \left| \sum_u \sum_{v\in C_u} x_u x_v \right| \geq ( 2 \sqrt { d -1 } -\epsilon) \cdot \sum_v x_v^2

For such vectors, the Cauchy-Schwarz argument above is nearly tight, which means that, if {u} is the parent of {v}, then {|x_u|} needs to be about {\sqrt {d-1}} times larger than {|x_v|}. So let us start from this condition: we pick a vertex {r} to be the root and we set {x_r = 1}; if {v} is a child of {r} we set {x_v = 1/\sqrt d}, and if {v} is at distance {t>1} from the root we set {x_v = 1/ ( \sqrt { d \cdot (d-1)^{t-1}} )}. This means that if we sum {x_v^2} over all the vertices at distance {t} from the root we get {1}, for all {t}, which means that {\sum_v x_v^2 = \infty}. So let us cut off the construction at some distance {k}, so that {x} is defined as follows:

  • {x_r = 1}
  • {x_v = \frac 1 {\sqrt{ d (d-1)^{t-1}}}} if {v} is at distance {t} from {r} and {1\leq t \leq k-1}
  • {x_v = 0} if {v} is at distance {\geq k} from {r}.

We immediately see

\displaystyle  \sum_v x_v^2 = k

Then we do the calculations and we see that

\displaystyle  \sum_u \sum_{v\in C_u} x_u x_v = \sqrt d + \sum_{t=1}^{k-2} d \cdot (d-1)^{t-1} \cdot \frac {1}{\sqrt{d \cdot (d-1)^{t-1}}} \cdot \frac {1}{\sqrt{d \cdot (d-1)^{t}}}

\displaystyle  = \sqrt d + (k-2) \sqrt {d-1 } \geq (k-1) \sqrt {d-1}

so, for every {k}, we can construct a vector {x} such that

\displaystyle  2 \sum_u \sum_{v\in C_u} | x_u x_v | \geq 2 \cdot \frac {k-1}{k} \cdot \sqrt{d-1} \sum_v x_v^2

and we are done!
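As a small numeric sanity check (mine, not from the post): for the truncated vector {x} above, whose value depends only on the distance {t} from the root, the Rayleigh quotient can be computed level by level and watched approaching {2 \sqrt{d-1}} as the cutoff {k} grows.

def rayleigh(d: Int, k: Int): Double = {
  // x(t): 1 at the root, 1/sqrt(d*(d-1)^(t-1)) for 1 <= t <= k-1, then 0
  def x(t: Int): Double =
    if (t == 0) 1.0
    else if (t <= k - 1) 1.0 / math.sqrt(d * math.pow(d - 1.0, t - 1))
    else 0.0
  // number of vertices at distance t from the root
  def count(t: Int): Double =
    if (t == 0) 1.0 else d * math.pow(d - 1.0, t - 1)
  val norm = (0 to k - 1).map(t => count(t) * x(t) * x(t)).sum  // equals k
  // each vertex at level t+1 has exactly one parent, so there are
  // count(t+1) edges between levels t and t+1
  val cross = (0 to k - 2).map(t => count(t + 1) * x(t) * x(t + 1)).sum
  2 * cross / norm
}

// rayleigh(3, 1000) ≈ 2.826, approaching 2 * sqrt(2) ≈ 2.828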


by luca at August 20, 2014 06:43 PM

/r/netsec

StackOverflow

Why does activator/sbt add Scala version to pure Java library dependencies?

I'm using Typesafe Activator's last version (1.2.8 with Scala 2.11.x).

When I add a dependency "org.apache.httpcomponents" %% "httpclient" % "4.4-alpha1" to my project in build.sbt, something like this:

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.4",
  "com.typesafe.akka" %% "akka-testkit" % "2.3.4",
  "org.scalatest" %% "scalatest" % "2.1.6" % "test",
  "junit" % "junit" % "4.11" % "test",
  "com.novocode" % "junit-interface" % "0.10" % "test",
  "org.apache.httpcomponents" %% "httpclient" % "4.4-alpha1" // my added dependency
)

... and try to update the project ( in activator's cli ) I get an error:

[error] (*:update) sbt.ResolveException: unresolved dependency: org.apache.httpcomponents#httpclient_2.11;4.4-alpha1: not found

I know Scala versions aren't binary compatible, but I am trying to get a pure-Java library, org.apache.httpcomponents#httpclient! Why does activator append "_2.11" to the end of the artifactId and build wrong URLs? How can I resolve this?
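For reference, a single % is the fix here: %% tells sbt to append the Scala binary version (the _2.11 suffix) to the artifact id, which only makes sense for Scala libraries, while pure-Java artifacts are published without a suffix.

libraryDependencies += "org.apache.httpcomponents" % "httpclient" % "4.4-alpha1"  // single %, no _2.11 suffix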

by Reza Same'e at August 20, 2014 06:22 PM

In Scala Timestamp of now plus 5 minutes

In Scala, how would I make a Timestamp for a time that is 5 minutes in the future? I'm using the java.util.Date class.
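A minimal sketch, assuming java.sql.Timestamp is the Timestamp meant: add five minutes' worth of milliseconds to the current java.util.Date.

import java.util.Date
import java.sql.Timestamp

val now = new Date()
val inFiveMinutes = new Timestamp(now.getTime + 5 * 60 * 1000L)  // +300,000 ms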

by trevorgrayson at August 20, 2014 06:21 PM

/r/clojure

/r/compsci

What are the real world negative implications of P=NP?

Although the question has been asked many times before, it seems that everyone glosses over the negative implications of P=NP. My question is, if P=NP, how would that negatively affect banking, economies, governments, etc...? Would it spark WW3? If anybody had the capability to produce human level (and beyond) AIs how would people use this technology?

edit: Let me expand. I am assuming there is a proof that P=NP and an efficient low degree polynomial-time algorithm to solve any NP problem in polynomial time.

The inherent problem I want to address with this question is the balance between the enormous positive impacts and the immediate negative shock to the system, due to the collapse of RSA encryption, and the immediate power grab that would naturally ensue. How would we deal with these?

Could the immediate shock from a P=NP proof be the solution to the Fermi Paradox? Civilizations tends to develop technology for security and privacy, and then they prove P=NP, and the power grab results in a world wide nuclear war that destroys life on the planet? If you think about it on a grand scale, a global nuclear war as the "crescendo" and "grand finale" to evolution of life would be quite fitting in some sense. Am I being too negative?

(I would, personally, like to hear from doomers and people willing to seriously consider the negative impacts, rather than people saying it will all work out and everything will be okay. The point is, what if it isn't? And what is the real likelihood of such a scenario if we do not wear rose colored glasses? How can we make sure it does turn out to be okay for humanity?)

edit2:

Furthermore, I am assuming the hierarchy collapses in the sense that P=NP=co-NP=NP-complete, and EXP=NEXP=co-NEXP=NEXP-complete, etc.... That is, putting it in layman's terms, if the solution to a problem can be "quickly" verified by a computer, the computer can also solve that problem "quickly".

submitted by ThunderRedr
[link] [32 comments]

August 20, 2014 05:59 PM

Planet Emacsen

StackOverflow

Detect duplicate POSTs

I'm working on a Play Framework 2 based prototype where one of the requirements is to filter duplicate POST requests from clients. It's a prototype and I'm trying to hack one out quick and dirty. I have the following data structure and code to keep track of duplicates.

public class DuplicityService {

    private static ConcurrentHashMap<String,List<String>> keyIps = new ConcurrentHashMap<String,List<String>>();

    public static boolean isDuplicate(String key,String ip){
        List<String> value = keyIps.get(key);
        return value != null && value.contains(ip);
    }

    public static void remove(String key){
        keyIps.remove(key);
    }

    public static void add(String key,String ip){
        List<String> value = keyIps.get(key);
        if(value == null){
            value = new ArrayList<String>();

        }
        value.add(ip);
        keyIps.put(key, value);
    }
}

I use this in my Controllers as such

def submitResponse(qkey: String) = CorsAction(parse.json) { req =>
  val json = req.body
  val key = json.\("Key").as[String]
  ....
  if (DuplicityService.isDuplicate(key, req.remoteAddress)) {
    ...
    BadRequest("Duplicate Response " + key)
  } else {
    ...
    DuplicityService.add(key, req.remoteAddress)

    Ok(json)
    ...
  }
}

And remove the key from the concurrent hashmap in a separate controller method

def publish(key: String) = Authenticated{

    ...
    DuplicityService.remove(key)
    ...
  }

Now the problem is that while testing manually on my local machine it works fine. I'm able to correctly identify duplicate post requests from same IP address.

However, on heroku, this doesn't work. I'm able to make duplicate post requests from same client.

I have a basic instance of a Heroku play 2 server.

Any pointers or help will be much appreciated.

PS: Without using a database, are there better ways to do what I'm attempting?

Thanks
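A hedged sketch of the same registry in Scala, using a concurrent TrieMap to make the check-then-act update safe. Note that any in-process map is per-JVM: on Heroku, each dyno (and each dyno restart) gets its own empty map, which alone can explain duplicates slipping through; a shared store (Redis, Memcached, a database) is needed for cross-dyno deduplication.

import scala.collection.concurrent.TrieMap

object DuplicityService {
  private val keyIps = TrieMap.empty[String, Set[String]]

  def isDuplicate(key: String, ip: String): Boolean =
    keyIps.get(key).exists(_.contains(ip))

  def add(key: String, ip: String): Unit = {
    keyIps.putIfAbsent(key, Set.empty)
    var done = false
    while (!done) {                       // retry loop: atomic read-modify-write
      val old = keyIps(key)
      done = keyIps.replace(key, old, old + ip)
    }
  }

  def remove(key: String): Unit = keyIps.remove(key)
}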

by smk at August 20, 2014 05:53 PM

How to run spark scala program

I wrote a Scala program for Spark. However, I am not certain how to compile and run it from the UNIX command line. Should I include a jar file before edits?

I am a newbie, please help.
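A minimal sketch of a build.sbt for a Spark job (versions are illustrative for mid-2014; adjust to your Spark distribution). After running sbt package, the resulting jar under target/scala-2.10/ can be submitted with the bin/spark-submit script that ships with Spark.

name := "my-spark-job"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.2" % "provided"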

by user2806611 at August 20, 2014 05:52 PM

swagger-codegen install on mac

I am trying desperately to get swagger-codegen working on my MacBook Pro with OS X Mountain Lion.

Upgraded Java to 1.7.

java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

Installed Homebrew.

Uninstalled Mono and Xamarin per "brew doctor".

Installed Xcode command line tools per "brew doctor".

Executed "brew update"

Installed scala "brew install scala".

Installed sbt "brew install sbt"

Executed "sbt"

[info] Loading project definition from /Users/jecz/Apache/swagger-codegen-master/project
[info] Set current project to swagger-codegen (in build file:/Users/jecz/Apache/swagger-codegen-master/)

Executed "./sbt assembly"

Detected sbt version 0.13.0
[info] Loading project definition from /Users/jecz/Apache/swagger-codegen-master/project
[info] Set current project to swagger-codegen (in build file:/Users/jecz/Apache/swagger-codegen-master/)
[warn] Credentials file /Users/jecz/.ivy2/.credentials does not exist
[info] ResourceExtractorTest:
[info] ResourceExtractor
[info] - should get 3 apis from a resource listing
[info] ApiExtractorTest:
[info] ApiExtractor
[info] - should verify the deserialization of the store api
[info] ResourceListingValidationTest:
[info] - should not have base path
[info] - should fail resource listing without apiVersion
[info] - should fail with missing paths in a ResourceListing
[info] ApiListingReferenceValidationTest:
[info] - should deserialize an ApiListingReference
[info] - should serialize an ApiListingReference
[info] ApiDescriptionValidationTest:
[info] - should fail to deserialize an ApiDescription with path, method, return type
[info] OperationValidationTest:
[info] - should fail to deserialize an Operation with missing param type
[info] - should serialize an operation
[info] ResponseMessageValidationTest:
[info] - should deserialize an ResponseMessage
[info] - should serialize an operation
[info] ParameterValidationTest:
[info] - should deserialize another param
[info] - should deserialize a parameter
[info] - should serialize a parameter
[info] ModelValidationTest:
[info] - should deserialize a model
[info] - should serialize a model
[info] ModelRefValidationTest:
[info] - should deserialize a model ref
[info] - should serialize a model ref
[info] ModelPropertyValidationTest:
[info] - should deserialize a model property with allowable values and ref
[info] - should serialize a model property with allowable values and ref
[info] - should deserialize a model property with allowable values
[info] - should serialize a model property with allowable values
[info] - should deserialize a model property
[info] - should serialize a model property
[info] AllowableValuesValidationTest:
[info] - should deserialize allowable value list
[info] - should serialize allowable values list
[info] - should deserialize allowable values range
[info] - should serialize allowable values range
[info] ResourceListingSerializersTest:
[info] - should deserialize an ResourceListing with no apis
[info] - should serialize an ApiListingReference with no apis
[info] - should deserialize an ResourceListing
[info] - should serialize an ApiListingReference
[info] ApiListingReferenceSerializersTest:
[info] - should deserialize an ApiListingReference
[info] - should serialize an ApiListingReference
[info] ApiDescriptionSerializersTest:
[info] - should deserialize an ApiDescription with no ops
[info] - should serialize an ApiDescription with no operations
[info] - should deserialize an ApiDescription
[info] - should serialize an ApiDescription
[info] OperationSerializersTest:
[info] - should deserialize an Operation
[info] - should serialize an operation
[info] ErrorResponseSerializersTest:
[info] - should deserialize an ResponseResponse
[info] - should serialize an operation
[info] ParameterSerializersTest:
[info] - should deserialize another param
[info] - should deserialize a parameter
[info] - should serialize a parameter
[info] ModelSerializationTest:
[info] - should deserialize a model
[info] - should serialize a model
[info] ModelRefSerializationTest:
[info] - should deserialize a model ref
[info] - should serialize a model ref
[info] ModelPropertySerializationTest:
[info] - should deserialize a model property with allowable values and ref
[info] - should serialize a model property with allowable values and ref
[info] - should deserialize a model property with allowable values
[info] - should serialize a model property with allowable values
[info] - should deserialize a model property
[info] - should serialize a model property
[info] AllowableValuesSerializersTest:
[info] - should deserialize allowable value list
[info] - should serialize allowable values list
[info] - should deserialize allowable values range
[info] - should serialize allowable values range
[info] ResourceListingValidationTest:
[info] - should fail resource listing without base path
[info] - should fail resource listing without apiVersion
[info] - should fail with missing paths in a ResourceListing
[info] ApiListingReferenceValidationTest:
[info] - should deserialize an ApiListingReference
[info] - should serialize an ApiListingReference
[info] ApiDescriptionValidationTest:
[info] - should fail to deserialize an ApiDescription with path, method, return type
[info] OperationValidationTest:
[info] - should fail to deserialize an Operation with missing param type
[info] - should serialize an operation
[info] ResponseMessageValidationTest:
[info] - should deserialize an ResponseMessage
[info] - should serialize an operation
[info] ParameterValidationTest:
[info] - should deserialize another param
[info] - should deserialize a parameter
[info] - should serialize a parameter
[info] ModelValidationTest:
[info] - should deserialize a model
[info] - should serialize a model
[info] ModelRefValidationTest:
[info] - should deserialize a model ref
[info] - should serialize a model ref
[info] ModelPropertyValidationTest:
[info] - should deserialize a model property with allowable values and ref
[info] - should serialize a model property with allowable values and ref
[info] - should deserialize a model property with allowable values
[info] - should serialize a model property with allowable values
[info] - should deserialize a model property
[info] - should serialize a model property
[info] AllowableValuesValidationTest:
[info] - should deserialize allowable value list
[info] - should serialize allowable values list
[info] - should deserialize allowable values range
[info] - should serialize allowable values range
[info] SwaggerModelTest:
[info] Swagger Model
[info] - should deserialize ResourceListing
[info] - should deserialize ApiListing
[info] - should deserialize ApiListing with AllowableValues
[info] - should maintain model property order when deserializing
[info] - should deserialize models
[info] ResourceExtractorTest:
[info] ResourceExtractor
[info] - should get 3 apis from a resource listing
[info] ApiExtractorTest:
[info] ApiExtractor
[info] - should verify the deserialization of the store api
09:49:13,840 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
09:49:13,840 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
09:49:13,840 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/jecz/Apache/swagger-codegen-master/target/scala-2.9.1/classes/logback.xml]
09:49:13,991 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
09:49:14,000 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
09:49:14,012 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
09:49:14,114 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - This appender no longer admits a layout as a sub-component, set an encoder instead.
09:49:14,114 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - To ensure compatibility, wrapping your layout in LayoutWrappingEncoder.
09:49:14,114 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - See also http://logback.qos.ch/codes.html#layoutInsteadOfEncoder for details
09:49:14,116 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.wordnik] to DEBUG
09:49:14,116 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to ERROR
09:49:14,116 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
09:49:14,117 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
09:49:14,119 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@7b1b9e3a - Registering current configuration as safe fallback point

[info] BasicScalaGeneratorTest:
[info] BasicScalaGenerator
[info] - should process a response declaration
[info] - should process a string response
[info] - should process a string array
[info] - should process an unmapped response type
[info] - should get the invoker package
[info] - should get the api package
[info] - should get the model package
[info] - should convert to a declared type
[info] - should convert a string a declaration
[info] - should honor the import mapping
[info] - should quote a reserved var name
[info] - should create a declaration with a List of strings
[info] - should create a declaration with a List of ints
[info] - should create a declaration with a List of floats
[info] - should create a declaration with a List of doubles
[info] - should create a declaration with a List of complex objects
[info] - should verify an api map with query params
[info] - should verify an api map with query params with default values
[info] - should create an api file
[info] BasicGeneratorTest:
[info] BasicGenerator
[info] - should get operations
[info] - should verify ops are grouped by path correctly
[info] - should create a model map
[info] - should create a model file
[info] BasicJavaGeneratorTest:
[info] BasicJavaGenerator
[info] - should process a response declaration
[info] - should process a string response
[info] - should process a string array
[info] - should process an upper-case string array
[info] - should process an unmapped response type
[info] - should get the invoker package
[info] - should get the api package
[info] - should get the model package
[info] - should convert to a declared type
[info] - should convert a string a declaration
[info] - should honor the import mapping
[info] - should quote a reserved var name
[info] - should create a declaration with a List of strings
[info] - should create a declaration with a List of ints
[info] - should create a declaration with a List of floats
[info] - should create a declaration with a List of doubles
[info] - should create a declaration with a List of complex objects
[info] CodegenConfigTest:
[info] PathUtil
[info] - should convert an api name
[info] - should convert a path
[info] CodegenConfig
[info] - should process a response declaration
[info] - should process an unchanged response
[info] - should process an mapped response type
[info] - should get the invoker package
[info] - should get the api package
[info] - should get the model package
[info] - should convert to a declared type
[info] - should honor the import mapping
[info] - should quote a reserved var name
[info] CoreUtilsTest:
[info] CoreUtils
[info] - should verify models are extracted
[info] - should verify operation names
[info] - should find required models
[info] - should find required models from a nested list
[info] PathUtilTest:
[info] PathUtil
[info] - should convert an api name
[info] - should convert a path
[info] - should get determine the basePath implicitly
[info] ResourceListingSerializersTest:
[info] - should deserialize an ResourceListing with no apis
[info] - should serialize an ApiListingReference with no apis
[info] - should deserialize an ResourceListing
[info] - should serialize an ApiListingReference
[info] ApiListingReferenceSerializersTest:
[info] - should deserialize an ApiListingReference
[info] - should serialize an ApiListingReference
[info] ApiDescriptionSerializersTest:
[info] - should deserialize an ApiDescription with no ops
[info] - should serialize an ApiDescription with no operations
[info] - should deserialize an ApiDescription
[info] - should serialize an ApiDescription
[info] OperationSerializersTest:
[info] - should deserialize an Operation
[info] - should deserialize an Operation with an array property
[info] - should serialize an operation
[info] - should deserialize an Operation with array
[info] ErrorResponseSerializersTest:
[info] - should deserialize an Response
[info] - should serialize an operation
[info] ParameterSerializersTest:
[info] - should deserialize another param
[info] - should deserialize a parameter
[info] - should serialize a parameter
[info] ModelSerializationTest:
[info] - should deserialize a model
[info] - should deserialize a model with a set
[info] - should serialize a model
[info] ModelRefSerializationTest:
[info] - should deserialize a model ref
[info] - should serialize a model ref
[info] ModelPropertySerializationTest:
[info] - should deserialize a model property with allowable values and ref
[info] - should serialize a model property with allowable values and ref
[info] - should deserialize a model property with allowable values
[info] - should serialize a model property with allowable values
[info] - should deserialize a model property
[info] - should serialize a model property
[info] - should extract model properties
[info] - should extract model properties with arrays
[info] AllowableValuesSerializersTest:
[info] - should deserialize allowable value list
[info] - should serialize allowable values list
[info] - should deserialize allowable values range
[info] - should serialize allowable values range
[info] CoreUtilsTest:
[info] CoreUtils
[info] - should verify models are extracted
[info] - should verify operation names
[info] - should find required models
[info] - should find required models from a nested list
[info] BasicCSharpGeneratorTest:
[info] BasicCSharpGenerator
[info] - should perserve the name date
[info] - should process a string array
[info] Passed: Total 194, Failed 0, Errors 0, Passed 194
[info] Including from cache: json4s-ast_2.9.1-3.2.5.jar
[info] Including from cache: jackson-annotations-2.2.2.jar
[info] Including from cache: jackson-core-2.2.2.jar
[info] Including from cache: json4s-core_2.9.1-3.2.5.jar
[info] Including from cache: jackson-databind-2.2.2.jar
[info] Including from cache: paranamer-2.5.6.jar
[info] Including from cache: commons-io-2.3.jar
[info] Including from cache: scala-inflector_2.9.1-1.3.5.jar
[info] Including from cache: json4s-jackson_2.9.1-3.2.5.jar
[info] Including from cache: mockito-all-1.9.0.jar
[info] Including from cache: scallop_2.9.1-0.9.4.jar
[info] Including from cache: scalate-core_2.9-1.6.1.jar
[info] Including from cache: scalate-util_2.9-1.6.1.jar
[info] Including from cache: scala-compiler-2.9.1.jar
[info] Including from cache: scala-library-2.9.1.jar
[info] Including from cache: scalap-2.9.1.jar
[info] Including from cache: slf4j-api-1.6.1.jar
[warn] Merging 'META-INF/MANIFEST.MF' with strategy 'discard'
[warn] Strategy 'discard' was applied to a file
[info] Checking every *.class/*.jar file's SHA-1.
[info] Assembly up to date: /Users/jecz/Apache/swagger-codegen-master/target/scala-2.9.1/swagger-codegen.jar
[success] Total time: 26 s, completed Nov 1, 2013 9:49:34 AM

Executed "./bin/scala-petstore.sh"

 Please set scalaVersion := "2.10.3" in build.sbt and run ./sbt assembly
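The script's own message names the fix; the corresponding one-line change in the build.sbt at the repository root would be:

scalaVersion := "2.10.3"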

by jecz at August 20, 2014 05:44 PM

/r/netsec

Dave Winer

QuantOverflow

multiperiod optimization using R

I'm interested in multistage optimization problems. Are there any good R packages around to solve such problems over time? I'm not at all an expert, so maybe someone knows a good paper / lecture notes to start with? I know classical optimization (linear optimization, convex optimization, etc.) but I've never had to deal with optimization over time. Any references, theoretical or regarding the implementation, are very welcome. I know that this is a very general question, but this is due to my (not yet) attained knowledge. If further clarification is needed I'm happy to share this. Many thanks in advance.

EDIT

Let's take for example the following paper, there we have a optimization problem of the form:

$$\max \sum_{i=1}^{n+1}r^L_ix_i^L$$

such that

$$ x^l_i=r^{l-1}_i x_i^{l-1}-y_i^l+z^l_i,\hspace{2pt} i=1,\dots n,\ l=1,\dots,L$$ $$ x^l_{n+1}=r^{l-1}_{n+1} x_{n+1}^{l-1}+\sum_{i=1}^n(1-\mu^l_i)y_i^l-\sum_{i=1}^n(1+\nu_i^l)z^l_i$$ $$y^l_i\ge 0,\hspace{2pt} i=1,\dots n,\ l=1,\dots,L$$ $$x^l_i\ge 0,\hspace{2pt} i=1,\dots n,\ l=1,\dots,L$$ $$z^l_i\ge 0,\hspace{2pt} i=1,\dots n,\ l=1,\dots,L$$

where $x_i^l$ is the value (in dollars) of asset $i$ at time $l$ (the second constraint is for the cash position $x^l_{n+1}$), $r_i^l$ is the asset return, and $y^l_i$ and $z^l_i$ are the amounts of the asset sold and bought. $\mu^l_i$ and $\nu_i^l$ also have an economic interpretation, but are not that important for the question. Assuming everything is deterministic, we can solve this problem using interior-point / simplex methods, since it is a "simple" LP. However, the theory I'm looking for should give me ideas on whether it is optimal to solve, at every time $l$, the subproblem (maximize $\sum_{i=1}^{n+1}r^l_ix^l_i$ under the corresponding constraints) or whether this is not a good idea. I have heard / read that one could solve this kind of problem using stochastic programming, but still I'm interested in knowing how to subdivide (if possible) such problems.

by user8 at August 20, 2014 05:18 PM

TheoryOverflow

sporadic server, deferrable server: how to fix period and capacity in practice

I have two periodic tasks T1(c1=1, p1=4), T2(c2=2, p2=8), and 3 aperiodic tasks T3(r3=1, c3=3), T4(r4=7, c4=2), T5(r5=11, c5=5).

Using the rate-monotonic method, and with just that information available, how can I fix the period and capacity in practice for the two server methods: sporadic server and deferrable server?

by Makouda at August 20, 2014 05:13 PM

StackOverflow

how to cache the results of an sbt TaskKey?

I have an expensive task that I need to reference in my tests

lazy val exampleSources = TaskKey[Seq[File]]("exampleSources", "for use in tests")

exampleSources := (updateClassifiers in Test).value.select(
  artifact = artifactFilter(classifier = "sources")
)

(and then I can pass exampleSources.value as a parameter to my forked tests)

However, every time I run a test, this task is called, and so updateClassifiers (expensive) is called. I'm happy to cache the value on the first call and then use it for the rest of the session.

Without writing the cache myself, is there any way to do this using built-in sbt objects?
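Not a built-in cache, but a hedged sketch of a pragmatic session-level memo (sbt 0.13): keep the result in mutable state held by the build JVM and use Def.taskDyn so updateClassifiers is only declared as a dependency on a cache miss.

import java.util.concurrent.atomic.AtomicReference

val exampleSourcesCache = new AtomicReference[Option[Seq[File]]](None)

exampleSources := Def.taskDyn {
  exampleSourcesCache.get match {
    case Some(files) =>
      Def.task(files)  // cache hit: updateClassifiers never runs
    case None =>
      Def.task {
        val files = (updateClassifiers in Test).value.select(
          artifact = artifactFilter(classifier = "sources"))
        exampleSourcesCache.set(Some(files))
        files
      }
  }
}.value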

by fommil at August 20, 2014 05:10 PM

Scalatra: using defaults in operation within route code?

I have the following code:

val find =
  (apiOperation[InventoryResponse]("find")
    produces "application/json"
    summary "Main Inventory Search Endpoint"
    notes "Provides main inventory search capabilities"
    parameter queryParam[String]("serverName").description("Calling Server Name").optional.defaultValue("anonymous")
    parameter queryParam[String]("serverIP").description("Calling Server IP Address").optional.defaultValue("127.0.0.1"))

And in my route code:

get("/find", operation(find)) {

  val requestServer = new RequestServer(
    name = params.getOrElse("serverName", "anonymous"),
    caller_ip = params.getOrElse("serverIP", "127.0.0.1")
  )

  // More stuff
}

You'll see that I duplicate the default values in both places. I'd love to not do that, of course. Is there a way to use the first code block's defaults in my route? Is there code that can be built off of find to access default values?
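A minimal sketch of one way to avoid the duplication (names are illustrative, not Scalatra API): hoist the defaults into shared constants and reference them from both the apiOperation and the route.

object FindDefaults {
  val ServerName = "anonymous"
  val ServerIP   = "127.0.0.1"
}

// in the operation definition:
//   queryParam[String]("serverName").optional.defaultValue(FindDefaults.ServerName)
// in the route:
//   name = params.getOrElse("serverName", FindDefaults.ServerName)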

by Christopher Ambler at August 20, 2014 05:08 PM

SBT plugin: How to list files output by incremental recompilation

I am writing a plugin for SBT that requires a list of the class files generated by the last run of the Scala compiler.

This list of class files is then passed into a program that performs some bytecode transformations. Since this transformation process can be slow, I only want the class files written by the last run of the Scala compiler (i.e. those that were modified), not all class files in the output directory.

How can I obtain a list of the files last generated by the compile task?
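A hedged sketch (sbt 0.13): the incremental compiler returns an Analysis value from the compile task, and its relations record the class files ("products") it tracks. Narrowing to "written by the last run" is an extra step, e.g. comparing lastModified against a timestamp recorded before compiling. The key name is illustrative.

val freshClassFiles = taskKey[Seq[File]]("class files known to the last compile")

freshClassFiles := {
  val analysis = (compile in Compile).value
  // all products currently tracked; filter by File#lastModified to
  // approximate "only those written by the most recent run"
  analysis.relations.allProducts.toSeq
}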

by Andrew Bate at August 20, 2014 05:08 PM

DataTau

StackOverflow

Macro to access source code of function at runtime

Using Scala macros I would like to get access to source code of function f.

Here is a simplified example of my problem:

def logFImplementation(f: => Boolean) {
    val sourceCodeOfF: String = ... // <-- how to get source code of f??

    println("Scala source of f=" + sourceCodeOfF)
}

so for:

logFImplementation { val i = 5; List(1, 2, 3); true }

it should print:

Scala source of f=val i: Int = 5; immutable.this.List.apply[Int](1, 2, 3); true

(Right now I tested Macro to access source code text at runtime and it works great for { val i = 5; List(1, 2, 3); true }.logValueImpl but for f.logValueImpl it just prints f.)

Thanks for any advice.

by Artur Stanek at August 20, 2014 04:58 PM

ansible - read inventory hosts and variables to group_vars/all file

I have a simple question that has kept me stuck for a long time. I have a very banal inventory file with hosts and variables:

[lb]
10.112.84.122

[tomcat]
10.112.84.124

[jboss5]
10.112.84.122

...

[tests:children]
lb
tomcat
jboss5

[default:children]
tests

[tests:vars]
data_base_user=NETWIN-4.3
data_base_password=NETWIN
data_base_encrypted_password=
data_base_host=10.112.69.48
data_base_port=1521
data_base_service=ssdenwdb
data_base_url=jdbc:oracle:thin:@10.112.69.48:1521/ssdenwdb

The problem is that i need to access all these hosts and variables, in the inventory file, from the group_vars/all file.

I've tried the following ways to access the host IP:

{{ lb }}
"{{ hostvars[lb] }}"
"{{ hostvars['lb'] }}"
{{ hostvars[lb] }}

To access a host variable I tried:

"{{ hostvars[tests].['data_base_host'] }}"

All of them are wrong!!! Can anyone help me find the best practice to access hosts and variables, not from a playbook but from a variables file?

Edit:

OK, let's clarify.

Problem: use a host declared in the inventory file in a variables file, let's say group_vars/all.

Example: I have a db host with IP:10.112.83.37

inventory file:

[db]
10.112.83.37

In the group_vars/all file I want to use that IP to build a variable.

group_vars/all

data_base_url=jdbc:oracle:thin:@{{ db }}:1521/ssdenwdb

In a template I use the variable built in the group_vars/all file:

template file:

oracle_url = {{ data_base_url }}

The problem is that the {{ db }} variable in the group_vars/all file is not replaced by the db host IP. The user can only edit the inventory file.

by user3332697 at August 20, 2014 04:28 PM

Lobsters

QuantOverflow

multi factor equity model exposures not as expected

I'm researching an equity multi factor model.

It contains three factors, say A, B & C. The factors are weighted as such,

  75% × (60% A + 40% B) + 25% × C

The common factors, such as book-to-price and momentum, are all constrained and should not contribute to the total return, or only marginally. Ideally the return should all come from the model specified above.

Looking at the monthly returns over the past year, factor B has a much larger exposure than factor A, which I can't understand. Where would be the best place to start examining why factor B has a much larger exposure than factor A?

by mHelpMe at August 20, 2014 04:20 PM

StackOverflow

"eldoc error: (void-function -cons-to-list)" instead of documentation in minibuffer

I'm trying to get good Clojure IDE for purposes of learning. I'm using:

  • Clojure 1.6.0;
  • Emacs 24.3.1;
  • Leiningen 2.4.2;
  • Cider 0.7.0.

Everything is fine, except for a little but quite disappointing detail: when I enter the name of a function in the REPL, the minibuffer does not provide information about the invocation parameters, but shows the following message:

eldoc error: (void-function -cons-to-list)

This is very frustrating, because I do not really know the language and I would like to see some hints. If you experienced something like this and solved the problem, please share your knowledge.

by Mark at August 20, 2014 04:07 PM

What can I use scala.Singleton for?

To clarify: I am NOT asking what I can use a Singleton design pattern for. The question is about largely undocumented trait provided in scala.

What is this trait for? The only concrete use case I could find so far was to limit a trait to objects only, as seen in this question: Restricting a trait to objects?

This question sheds some light on the issue Is scala.Singleton pure compiler fiction?, but clearly there was another use case as well!

Is there some obvious use that I can't think of, or is it just mainly compiler magicks?

by Zavior at August 20, 2014 04:06 PM

Check if string is neither empty nor space in shell script

I am trying to run the following shell script, which is supposed to check whether a string is neither space nor empty. However, I am getting the same output for all 3 of the strings mentioned. I have tried using the "[[" syntax as well, but to no avail.

Here is my code:

str="Hello World"
str2=" "
str3=""

if [ ! -z "$str" -a "$str"!=" " ]; then
        echo "Str is not null or space"
fi

if [ ! -z "$str2" -a "$str2"!=" " ]; then
        echo "Str2 is not null or space"
fi

if [ ! -z "$str3" -a "$str3"!=" " ]; then
        echo "Str3 is not null or space"
fi

I am getting the following output:

# ./checkCond.sh 
Str is not null or space
Str2 is not null or space

by Shubhanshu Mishra at August 20, 2014 04:06 PM

Halfbakery

/r/compsci

/r/clojure

High Scalability

Part 2: The Cloud Does Equal High performance

This a guest post by Anshu Prateek, Tech Lead, DevOps at Aerospike and Rajkumar Iyer, Member of the Technical Staff at Aerospike.

In our first post we busted the myth that cloud != high performance and outlined the steps to 1 Million TPS (100% reads in RAM) on 1 Amazon EC2 instance for just $1.68/hr. In this post we evaluate the performance of 4 Amazon instances when running a 4 node Aerospike cluster in RAM with 5 different read/write workloads and show that the r3.2xlarge instance delivers the best price/performance.

Several reports have already documented the performance of distributed NoSQL databases on virtual and bare metal cloud infrastructures:

by Todd Hoff at August 20, 2014 03:57 PM

StackOverflow

Exception when trying to refresh Clojure code in cider

I am using Clojure in Emacs with CIDER and the CIDER REPL (0.7.0). This is pretty fine, but whenever I run cider-refresh (or hit C-c C-x), I get an exception:

ClassNotFoundException clojure.tools.namespace.repl  java.net.URLClassLoader$1.run (URLClassLoader.java:372)

1. Unhandled java.lang.ClassNotFoundException
   clojure.tools.namespace.repl

           URLClassLoader.java:  372  java.net.URLClassLoader$1/run
           URLClassLoader.java:  361  java.net.URLClassLoader$1/run
         AccessController.java:   -2  java.security.AccessController/doPrivileged
           URLClassLoader.java:  360  java.net.URLClassLoader/findClass
       DynamicClassLoader.java:   61  clojure.lang.DynamicClassLoader/findClass
              ClassLoader.java:  424  java.lang.ClassLoader/loadClass
              ClassLoader.java:  357  java.lang.ClassLoader/loadClass
                    Class.java:   -2  java.lang.Class/forName0
                    Class.java:  340  java.lang.Class/forName
                       RT.java: 2065  clojure.lang.RT/classForName
                 Compiler.java:  978  clojure.lang.Compiler$HostExpr/maybeClass
                 Compiler.java:  756  clojure.lang.Compiler$HostExpr/access$400
                 Compiler.java: 6583  clojure.lang.Compiler/macroexpand1
                 Compiler.java: 6613  clojure.lang.Compiler/macroexpand
                 Compiler.java: 6687  clojure.lang.Compiler/eval
                 Compiler.java: 6666  clojure.lang.Compiler/eval
                      core.clj: 2927  clojure.core/eval
                      main.clj:  239  clojure.main/repl/read-eval-print/fn
                      main.clj:  239  clojure.main/repl/read-eval-print
                      main.clj:  257  clojure.main/repl/fn
                      main.clj:  257  clojure.main/repl
                   RestFn.java: 1096  clojure.lang.RestFn/invoke
        interruptible_eval.clj:   56  clojure.tools.nrepl.middleware.interruptible-eval/evaluate/fn
                      AFn.java:  152  clojure.lang.AFn/applyToHelper
                      AFn.java:  144  clojure.lang.AFn/applyTo
                      core.clj:  624  clojure.core/apply
                      core.clj: 1862  clojure.core/with-bindings*
                   RestFn.java:  425  clojure.lang.RestFn/invoke
        interruptible_eval.clj:   41  clojure.tools.nrepl.middleware.interruptible-eval/evaluate
        interruptible_eval.clj:  171  clojure.tools.nrepl.middleware.interruptible-eval/interruptible-eval/fn/fn
                      core.clj: 2402  clojure.core/comp/fn
        interruptible_eval.clj:  138  clojure.tools.nrepl.middleware.interruptible-eval/run-next/fn
                      AFn.java:   22  clojure.lang.AFn/run
       ThreadPoolExecutor.java: 1142  java.util.concurrent.ThreadPoolExecutor/runWorker
       ThreadPoolExecutor.java:  617  java.util.concurrent.ThreadPoolExecutor$Worker/run
                   Thread.java:  745  java.lang.Thread/run

What is the reason for this, and how can I fix it?

by Arne at August 20, 2014 03:53 PM

Using Free with a non-functor in Scalaz

In the "FP in Scala" book there's this approach for using an ADT S as an abstract instruction set like

sealed trait Console[_]
case class PrintLine(msg: String) extends Console[Unit]
case object ReadLine extends Console[String]

and composing them with a Free[S, A] where S would later be translated to an IO monad. Can this be done with Scalaz's Free type? It seems that all run methods require a functor instance for S.
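It can, via Coyoneda, which gives a free functor over any S[_]; a hedged sketch against the Scalaz 7.x-era API (newer Scalaz bundles this very pattern as Free.liftFC / FreeC, so method names may differ between versions):

import scalaz.{Coyoneda, Free}

sealed trait Console[A]
case class PrintLine(msg: String) extends Console[Unit]
case object ReadLine extends Console[String]

type ConsoleC[A] = Coyoneda[Console, A]  // Coyoneda[Console, ?] is a Functor

def liftConsole[A](c: Console[A]): Free[ConsoleC, A] =
  Free.liftF[ConsoleC, A](Coyoneda.lift(c))

val program: Free[ConsoleC, Unit] =
  for {
    name <- liftConsole(ReadLine)
    _    <- liftConsole(PrintLine("hello, " + name))
  } yield ()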

by estolua at August 20, 2014 03:52 PM

CompsciOverflow

SLR(1) and LALR(1) in given grammar [on hold]

In fact, I ran into a multiple-choice question in a recent exam in a compiler course. Suppose T1 and T2 are created with SLR(1) and LALR(1) for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?

a) just T1 has meaning for G.

b) T1 and T2 has not any difference.

c) total Number of non-error element in T1 is lower than T2

d) total Number of error element in T1 is lower than T2

My solution:

If the grammar is SLR(1), it is also LALR(1), and the LALR(1) table has the same size as the SLR(1) table, so (b) is correct.

by user3661613 at August 20, 2014 03:48 PM

Grammars: is there some connection between non-terminals $S$ and $S'$?

Given a grammar such as the following, does $S'$ have some special meaning or does it just denote another non-terminal like $B$, $A$, $P$, $Q$ etc.?

$$\begin{align*} S &\to aBS'\\ B &\to b\\ S'&\to bA \end{align*}$$

by Plengo at August 20, 2014 03:41 PM

/r/compsci

Ask CompSci: How do you stay up to date on the latest research/happenings in the comp sci world?

So I just got done reading a few Simon Singh books, and I noticed that everyone he talks about got ideas from reading a paper that landed on their desk written by someone else in the field. So my question is, where do you go to get this info? Where do you go to keep tabs on what's happening in comp sci?

submitted by Killobyte
[link] [3 comments]

August 20, 2014 03:33 PM

/r/netsec

/r/scala

Lobsters

QuantOverflow

How to design a custom equity backtester?

I was thinking about writing my own backtester and I realize I have to make some assumptions. So I was hoping I could post what I am planning on doing and hopefully some of you can give me some ideas on how to make it better (I'm sure there is a lot that can be improved).

First of all, my strategy involves holding stocks, usually for a few days; I am not doing (probably any) intra-day trading.

So here is what I was thinking. First, I would buy some minute OHLC stock quotes covering the stocks I am interested in (thinking about buying some from pitrading.com, is their quality acceptable?). Then if the algorithm triggers a buy or sell at some bar, I would "execute" the order using the high or low of the very next bar (attempting to be as pessimistic as possible here). One thing I am curious about is bid/ask, so I was thinking about maybe adding/subtracting a few cents to take this into account when buying/selling. I would just see what these values have been recently (difference between bid/ask and quote for some recent data on these stocks and then just use these numbers as I wouldn't be backtesting that far back). I would assume that I can buy/sell all I want then at that price.

Lastly I would include the cost of commission in the trade. I would neglect any effect my trade would have on the market. Is there any rough guideline using volume to estimate how much you would have to buy/sell to have an effect?

I would also simulate stop-loss sell orders and they, too, would be executed at the next bar low after the price passed the threshold.

That's it, it will be pretty simple to implement. I am just hoping to make it conservative so it can give me some insight into how well my program works.

Any thoughts or criticisms about this program would be greatly appreciated. I am new at this and I am sure there are some details I am missing.

by user667 at August 20, 2014 03:16 PM

Lambda the Ultimate Forum

Function Types and Dylan 2016

Function Types and Dylan 2016

Moving towards Dylan 2016, the Dylan community would like to address some weaknesses in the language specification and what can be readily expressed in Dylan code. In this post, we'll look at function types as well as provide a brief introduction to some details of the type system implementation within the Open Dylan compiler.

One of the big holes in the Dylan type system is the inability to specify function types. What this means is that you can only say that a value is of type <function> and can't indicate anything about the desired signature, types of arguments, return values, etc. This is unfortunate for a number of reasons:

  • Poor static type safety. The compiler can verify very little involving a function value. It can't warn when the wrong number of arguments or the wrong types of arguments are passed.
  • Less clear interfaces. The type signature of a function must be documented clearly rather than being expressed clearly within the code.
  • Optimization is more difficult. Since the compiler can't perform as many checks at compile time, more checks need to be performed at run-time, which limits the amount of optimization that can be performed by the compiler and restricts the generated code to using slower paths for function invocation.

In addition, function types may allow us to improve type inference. This is something that people have long wanted to have in the Dylan language.

August 20, 2014 03:16 PM

StackOverflow

How do I create a case-insensitive lexer

I'm trying to create a SQL lexer (well, a full parser but you have to start somewhere) and I'm not sure how to proceed. I want to write something like this:

def nextToken(input: List[Char]) = input match {
  case 'S'::'E'::'L'::'E'::'C'::'T'::tail => (SELECT, tail)
  case _ => ??? // etc.
}

But SQL is case-insensitive. I could uppercase all the characters in the input, but that would also uppercase string literals. What I really need is a way to do case-insensitive comparisons, and then be left with the correct tail (the remainder List[Char] after matching a token). Is there a way to do this easily in Scala 2.10.x?
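A minimal sketch of one way to do it, keeping the List[Char] representation (SELECT is the token type from the question): compare a case-folded prefix against the keyword and hand back the untouched remainder on a match.

def matchKeyword(kw: String, input: List[Char]): Option[List[Char]] = {
  val (prefix, tail) = input.splitAt(kw.length)
  if (prefix.length == kw.length && prefix.map(_.toUpper) == kw.toList) Some(tail)
  else None
}

def nextToken(input: List[Char]) =
  matchKeyword("SELECT", input) match {
    case Some(tail) => (SELECT, tail)
    case None       => ??? // other tokens, string literals, error handling
  }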

by kelloti at August 20, 2014 03:07 PM

Scala API: view IndexedSeq[T] as Map[Int, T]

Simply speaking, is there anything in the Scala collections library that provides a map-like view of an indexed sequence, using indices as keys?

I have the following trait (the limit of 16 elems is intended and enforced by an external API):

trait Container[T >: Null]
{
    private val ElemsLimit = 16 // block's meta is 4-bit
    private var table: Seq[T] = null

    protected def register(elems: (Int, T)*)(implicit manifest: Manifest[T]) =
    {
        if (table != null)
            throw new IllegalStateException("Already initialized")
        val array = Array.fill[T](ElemsLimit)(null)
        elems foreach { el => array(el._1) = el._2 }
        table = array
    }

    def elem(idx: Int) = table(idx)
    def allElems = table.zipWithIndex.filter(_._1 != null) // some mapView instead of zipWithIndex; the filter must inspect the element, not the pair
}

I know that I can construct an immutable map, and frankly speaking it will work just fine for my purposes. I can also write a MapView for this myself. Though I'm really interested in whether there's an existing solution somewhere. Or, maybe, there's an array-backed immutable map which I missed.

Thanks.
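For what it's worth, with at most 16 slots a materialised immutable Map is cheap; a drop-in sketch for allElems inside the trait, which also repairs the null filtering:

def allElems: Map[Int, T] =
  table.zipWithIndex.collect { case (el, idx) if el != null => idx -> el }.toMap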

by Target-san at August 20, 2014 03:06 PM

Is it possible to have macro annotation parameters (and how to get them)?

I have some data source that requires wrapping operations in transactions, which have 2 possible outcomes: success and failure. This approach introduces quite a lot of boilerplate code. What I'd like to do is something like this (the same remains true for failures (something like @txFailure maybe)):

@txSuccess(dataSource)
def writeData(data: Data*) {
  dataSource.write(data)
}

Where @txSuccess is a macro annotation that, after processing will result in this:

def writeData(data: Data*) {
  val tx = dataSource.openTransaction()

  dataSource.write(data)

  tx.success()
  tx.close()
}

As you can see, this approach can prove quite useful, since in this example 75% of code can be eliminated due to it being boilerplate.

Is that possible? If yes, can you give me a nudge in the right direction? If no, what can you recommend in order to achieve something like that?
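Yes, macro annotations can take parameters; a hedged sketch of the usual way to read them (macro paradise with Scala 2.11-style APIs): inside the annotation macro, c.prefix.tree holds the new txSuccess(dataSource) construction, which can be pattern-matched.

import scala.reflect.macros.whitebox.Context

def txSuccessImpl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
  import c.universe._
  val dataSource: Tree = c.prefix.tree match {
    case q"new txSuccess($ds)" => ds
    case _ => c.abort(c.enclosingPosition, "expected @txSuccess(dataSource)")
  }
  // ...rewrite the annotated method here, wrapping its body with
  // `val tx = $dataSource.openTransaction()` / `tx.success(); tx.close()`
  annottees.head
}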

by cdshines at August 20, 2014 02:55 PM

/r/netsec

TheoryOverflow

Is this NP-Hard or does a known optimal polynomial time solution exist?

Suppose we have 10 items with the following costs

Items: {1,2,3,4,5,6,7,8,9,10}

Cost: {2,5,1,1,5,1,1,3,4,10}

and 3 customers

{A,B,C}.

Each customer has a requirement for a set of items. He will either buy all the items in the set or none. There's just one copy of each item. For example, if

A requires {1,2,4}, Total money earned = 2+5+1= 8

B requires {2,5,10,3}, Total money earned = 5+5+10+1 = 21

C requires {3,6,7,8,9}, Total money earned = 1+1+1+3+4 = 10

So, if we sell A his items, B won't purchase from us because we don't have item 2 with us anymore. We wish to earn maximum money. By selling B, we can't sell to A and C. So, if we sell A and C, we earn 18. But just by selling B, we earn more, i.e., 21.

We thought of a bitmasking solution, which is exponential, though, and only feasible for a small set of items, and of other heuristic solutions, which gave us non-optimal answers. But after multiple tries we couldn't really come up with any fast optimal solution.

We were wondering if this is a known problem, or similar to any problem? Or is this problem NP Hard and thus a polynomial optimal solution doesn't exist and we're trying to achieve something that's not possible?

Also, does the problem change if all the items cost the same?

Thanks a lot.
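For concreteness, a sketch of the exponential bitmasking baseline mentioned above: enumerate every subset of customers, reject those whose item sets overlap (tracked as a bitmask over items), and keep the best revenue.

def bestRevenue(cost: Map[Int, Int], demands: Seq[Set[Int]]): Int =
  (0 until (1 << demands.size)).map { sel =>
    val chosen = demands.indices.filter(c => ((sel >> c) & 1) == 1)
    val masks  = chosen.map(c => demands(c).foldLeft(0)((m, i) => m | (1 << i)))
    val union  = masks.foldLeft(0)(_ | _)
    if (masks.map(Integer.bitCount).sum == Integer.bitCount(union))
      chosen.map(c => demands(c).toSeq.map(cost).sum).sum  // disjoint sets
    else 0                                                 // some item clashes
  }.max

// With the numbers above this returns 21 (sell only to B):
// bestRevenue(
//   Map(1 -> 2, 2 -> 5, 3 -> 1, 4 -> 1, 5 -> 5, 6 -> 1, 7 -> 1, 8 -> 3, 9 -> 4, 10 -> 10),
//   Seq(Set(1, 2, 4), Set(2, 5, 10, 3), Set(3, 6, 7, 8, 9)))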

by Hoxeni at August 20, 2014 02:50 PM

/r/clojure

StackOverflow

Use `this` in a generated macro method

This is a follow-up on my previous question.

I would like something like the code below to work: I want a macro to generate a method that I can then call:

case class Cat()

test[Cat].method(1)

Where the implementation of the generated method itself is using a macro (a "vampire" method):

// macro call
def test[T] = macro testImpl[T]

// macro implementation
def testImpl[T : c.WeakTypeTag](c: Context): c.Expr[Any] = {
  import c.universe._
  val className = newTypeName("Test")

  // IS IT POSSIBLE TO CALL `otherMethod` HERE?
  val bodyInstance = q"(p: Int) => otherMethod(p * 2)"

  c.Expr { q"""
    class $className  {
      protected val aValue = 1

      @body($bodyInstance)
      def method(p: Int): Int = macro methodImpl[Int]

      def otherMethod(p: Int): Int = p
    }
    new $className {}
  """}
}

// method implementation
def methodImpl[F](c: Context)(p: c.Expr[F]): c.Expr[F] = {
  import c.universe._

  val field = c.macroApplication.symbol
  val bodyAnnotation = field.annotations.filter(_.tpe <:< typeOf[body]).head
  c.Expr(q"${bodyAnnotation.scalaArgs.head}.apply(${p.tree.duplicate})")
}

This code fails to compile with:

[error] no-symbol does not have an owner
last tree to typer: This(anonymous class $anonfun)
[error]               symbol: anonymous class $anonfun (flags: final <synthetic>)
[error]    symbol definition: final class $anonfun extends AbstractFunction1$mcII$sp with Serializable
[error]                  tpe: examples.MacroMatcherSpec.Test.$anonfun.type
[error]        symbol owners: anonymous class $anonfun -> value <local Test> -> class Test -> method e1 -> class MacroMatcherSpec -> package examples
[error]       context owners: value $outer -> anonymous class $anonfun -> value <local Test> -> class Test -> method e1 -> class MacroMatcherSpec -> package examples
[error]
[error] == Enclosing template or block ==
[error]
[error] DefDef( // val $outer(): Test.this.type
[error]   <method> <synthetic> <stable> <expandedname>
[error]   "examples$MacroMatcherSpec$Test$$anonfun$$$outer"
[error]   []
[error]   List(Nil)
[error]   <tpt> // tree.tpe=Any
[error]   $anonfun.this."$outer " // private[this] val $outer: Test.this.type,    tree.tpe=Test.this.type
[error] )

I am really bad at deciphering what this means but I suspect that it is related to the fact that I can't reference this.otherMethod in the body of the vampire method. Is there a way to do that?

If this works, my next step will be to have this kind of implementation for otherMethod:

def otherMethod(p: Int) = new $className { 
  override protected val aValue = p 
}

by Eric at August 20, 2014 02:40 PM

Install Apache Spark on Windows 8

Can some kind, generous person please post a step-by-step guide to install and run Apache Spark on Windows 8? And be very detailed with each step, please.

I am a business analyst and have no command line experience. But I was able to install and run several Spark jobs on my wife's Mac at the scala> command prompt -- I can easily use SBT, but I could not get everything configured correctly on Windows 8. (Yes I will buy a Mac next time).

Thank you.

p.s. I tried to follow the video guide at: https://spark.apache.org/screencasts/1-first-steps-with-spark.html but that is for Mac.

STEP BY STEP GUIDE: It may be very helpful for other users as well if somebody can post a step-by-step guide, since instructions like "run blah blah" are hard to follow. I need each exact step, please.

I am not posting the steps I have taken and the errors I get, because what I really want is a step-by-step guide starting from the very beginning. Thank you.

by user3439308 at August 20, 2014 02:39 PM

StackOverflow

What's the right ZMQ pattern?

I would like to implement a system where:

  • there is one server
  • there are many clients
  • the clients send requests to the server.

Obviously, the REQ/REP pattern would be the right one to use. But:

  • I want the clients to be able to send multiple requests, without waiting for the response.
  • I want the server to process multiple requests in parallel.

So as far as I know, the correct pattern for this would be DEALER/ROUTER, is this correct? Or is there a better approach?

The client should be able to send many requests and should receive the corresponding responses asynchronously.

Thanks in advance

by user2297996 at August 20, 2014 02:31 PM

StackOverflow

Using Scalaz stream, how to convert A => Task[B] to Process1[A,B]

I am encoding a http request to a remote server as a function which takes an id and yields a Task[JValue].

I would like to convert that function into a Process1, to simplify my program (By simplify, i mean use Processes as building blocks as much as possible)

I would like to convert the function

    reqDoc(id:A):Task[B]

(where A is the type of the Id, and B is the type of the response) into

    reqDoc:Process1[A,B]
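
A hedged observation, assuming scalaz-stream 0.x and the A and B above: a Process1 is a pure transducer and cannot run Tasks, so the usual building block for an effectful per-element step is a Channel. Since Channel[Task, A, B] is just Process[Task, A => Task[B]], lifting reqDoc is a one-liner:

import scalaz.concurrent.Task
import scalaz.stream._

// reqDoc lifted to a channel; `through` then feeds each id to it in turn
val reqChannel: Channel[Task, A, B] = Process.constant(reqDoc _)

// given ids: Process[Task, A]
// val responses: Process[Task, B] = ids through reqChannel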

by Atle at August 20, 2014 02:27 PM

What is Hindley-Milner?

I encountered the term Hindley-Milner, and I'm not sure I grasp what it means.

I've read the following posts:

But there is no single entry for this term on Wikipedia, which usually offers me a concise explanation.
(Note: one has now been added.)

What is it?
What languages and tools implement or use it?
Would you please offer a concise answer?

by yehnan at August 20, 2014 02:26 PM

Ways for heartbeat message

I am trying to set up a heartbeat over a network, i.e. having an actor send a message to the network at a fixed period of time. I would like to know if you have any better solution than the one I used below, as I feel it is pretty ugly, considering synchronisation constraints.

import akka.actor._
import akka.actor.Actor
import akka.actor.Props
import akka.actor.ScalaActorRef
import akka.pattern.gracefulStop
import akka.util._
import java.util.Calendar
import java.util.concurrent._
import java.text.SimpleDateFormat
import scala.Array._
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

sealed trait Message
case class Information() extends Message // does this really need to be here?
case class StartMessage() extends Message
case class HeartbeatMessage() extends Message
case class StopMessage() extends Message
case class FrequencyChangeMessage(
    f: Int
) extends Message

class Gps extends Actor {
    override def preStart() {
        val child = context.actorOf(Props(new Cadencer(500)), name = "cadencer")
    }
    def receive = {
        case "beat" =>
            //TODO
        case _      =>
            println("gps: wut?")
    }
}

class Cadencer(initialPeriod: Int) extends Actor {
    var period: Int = initialPeriod
    var stop: Boolean = false
    override def preStart() {
        context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
    }
    def receive = {
        case StartMessage =>
            stop = false
            context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
        case HeartbeatMessage =>
            if (!stop) {
                context.system.scheduler.scheduleOnce(0 milliseconds, context.parent, "beat")
                context.system.scheduler.scheduleOnce(period milliseconds, self, HeartbeatMessage)
            }
        case StopMessage =>
            stop = true
        case FrequencyChangeMessage(f) =>
            period = f
        case _  =>
            println("wut?\n")
            //throw exception
    }
}

object main extends App {
    val system = akka.actor.ActorSystem("mySystem")
    val gps = system.actorOf(Props[Gps], name = "gps")
}

What I called a cadencer here sends a HeartbeatMessage to a target actor and to itself; to itself so as to trigger the next send after the given amount of time, and the process keeps going like that until a StopMessage flips stop to true. Good?

Is having a separate actor for this even efficient, rather than keeping it inside a bigger one?
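
For comparison, a hedged sketch of an alternative: Akka's scheduler can already repeat a message at a fixed interval, which removes the self-sent HeartbeatMessage loop entirely (this fragment assumes it runs inside the actor, with the same imports as above):

import scala.concurrent.duration._
import context.dispatcher

// schedule(initialDelay, interval, receiver, message) returns a Cancellable
val beat = context.system.scheduler.schedule(
    0 milliseconds, period milliseconds, context.parent, "beat")

// beat.cancel() then plays the role of StopMessage; to change the frequency,
// cancel and schedule again with the new period.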

by wipman at August 20, 2014 02:21 PM

Restricting a trait to objects?

Is there a way to restrict a trait so that it can only be mixed into objects? E.g.

trait OnlyForObjects {
  this: ... =>
}

object Foo extends OnlyForObjects  // --> OK

class Bar extends OnlyForObjects   // --> compile error
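
One hedged possibility is a Singleton self-type: for an object, the self type Foo.type is a singleton type, while a plain class fails the conformance check (with the caveat that Singleton self-types have some known corner cases):

trait OnlyForObjects {
  this: Singleton =>
}

object Foo extends OnlyForObjects  // OK
// class Bar extends OnlyForObjects
// --> error: illegal inheritance; self-type Bar does not conform to
//     OnlyForObjects's selftype OnlyForObjects with Singleton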

by Jo Kade at August 20, 2014 02:09 PM

StackOverflow

How to convert a generic HList to a List

I have these:

trait A[T]
class X
class Y

object B {
   def method[H <: HList](h: H) = h.toList[A[_]]
}

Parameter h of method will always be a HList of A[T], like new A[X] :: new A[Y] :: HNil.

I would like to convert the HList to a List[A[_]].

How can I get this with generic code, given that trait HList doesn't have a toList method?
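
A hedged sketch using shapeless 2.x: the ops.hlist.ToTraversable type class witnesses that every element of H can be viewed as an A[_], and it is what backs toList:

import shapeless._
import shapeless.ops.hlist.ToTraversable

object B {
  def method[H <: HList](h: H)(
      implicit toTrav: ToTraversable.Aux[H, List, A[_]]): List[A[_]] =
    h.toList
}

// B.method(new A[X] {} :: new A[Y] {} :: HNil)  // List[A[_]] with two elements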

by david.perez at August 20, 2014 02:03 PM

How do I submit a form for a model that contains a list of other models with Salat & Play framework?

I have a model. It contains a list of another model:

case class Account(
  _id: ObjectId = new ObjectId,
  name: String,
  campaigns: List[Campaign]
)

case class Campaign(
  _id: ObjectId = new ObjectId,
  name: String
)

I have a form and action for display and creating new Accounts:

  val accountForm = Form(
    mapping(
      "id" -> ignored(new ObjectId),
      "name" -> nonEmptyText,
      "campaigns" -> list(
        mapping(
          "id" -> ignored(new ObjectId),
          "name" -> nonEmptyText
        )(Campaign.apply)(Campaign.unapply)
      )
    )(Account.apply)(Account.unapply)
  )

  def accounts = Action {
    Ok(views.html.accounts(AccountObject.all(), accountForm, CampaignObject.all()))
  }

  def newAccount = Action {
    implicit request =>    
    accountForm.bindFromRequest.fold(
      formWithErrors => BadRequest(views.html.accounts(AccountObject.all(), formWithErrors, CampaignObject.all())),
      account => {
        AccountObject.create(account)
        Redirect(routes.AccountController.accounts)
      }
    )
  }

Finally, here is my view for Accounts:

@(accounts: List[models.mongodb.Account], account_form: Form[models.mongodb.Account], campaign_list: List[models.mongodb.Campaign])

@import helper._
@args(args: (Symbol, Any)*) = @{
    args
}
@main("Account List") {
    <h1>@accounts.size Account(s)</h1>
    <ul>
    @accounts.map { account =>
        <li>
            @account.name
        </li>
    }
    </ul>
    <h2>Add a New Account</h2>
    @form(routes.AccountController.newAccount()) {
        <fieldset>
            @inputText(account_form("name"), '_label -> "Account Name")
            @select(
                account_form("campaigns"),
                options(campaign_list.map(x => x.name):List[String]),
                args(
                    'class -> "chosen-select",
                    'multiple -> "multiple",
                    Symbol("data-placeholder") -> "Add campaigns",
                    'style -> "width:350px;"
                ): _*
            )
            <input type="submit" value="Create">
        </fieldset>
    }
}

The problem is when I submit this form, it submits it with a list of strings for the campaigns field. This gives me a 400 error when I post the form submission.

I would like to either submit the form with a list of campaigns instead of strings, or have the form submit a list of strings and then process the strings into a list of campaigns in my controller. Which way would be better and how would I do it? Thanks!

by Di Zou at August 20, 2014 01:59 PM

Joda Time: how to parse time and set default date as today's date

I have to parse dates in formats:

HH:mm 
dd MMM 
dd MMM yyyy

I've managed to handle the last two of them:

val dateParsers = Array(
    DateTimeFormat.forPattern("dd MMM").getParser,
    DateTimeFormat.forPattern("dd MMM yyyy").getParser,
    ISODateTimeFormat.timeParser().getParser
)

val formatter = new DateTimeFormatterBuilder().append(null, dateParsers).toFormatter.withZoneUTC
DateTime.parse(updatedString, formatter.withDefaultYear(currentYear).withLocale(ruLocale))

Everything is OK with dd MMM and dd MMM yyyy, but when I try to parse a time like 05:40 I get the date 01-01-1970 instead of today's date. What is the simplest way to set the default date to today's date in the parser?
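
A hedged workaround for the time-only case: parse the time of day on its own as a LocalTime and pin it to today's date, instead of asking DateTime.parse to invent the missing fields:

import org.joda.time.LocalTime
import org.joda.time.format.DateTimeFormat

val timeOnly = DateTimeFormat.forPattern("HH:mm")
// toDateTimeToday combines the parsed time with the current date
val parsed = LocalTime.parse("05:40", timeOnly).toDateTimeToday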

by Chelovek Chelovechnii at August 20, 2014 01:57 PM

TheoryOverflow

LALR(1) and SLR(1) tricky question [on hold]

In fact I ran into a multiple-choice question in a recent exam in a compiler course. Suppose tables T1 and T2 are created with SLR(1) and LALR(1), respectively, for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?

a) Just T1 has meaning for G.

b) T1 and T2 have no differences.

c) The total number of non-error elements in T1 is lower than in T2.

d) The total number of error elements in T1 is lower than in T2 (because of less space in the table).

My solution:

If the grammar is SLR(1) then it is also LALR(1), and the LALR(1) table has the same size as the SLR(1) table, so (b) is correct.

by Moukh Sohanio at August 20, 2014 01:45 PM

UnixOverflow

USB audio only outputs white noise

I am running OpenBSD/i386 5.1 on a 5 year old laptop. The speakers and headphone port work, but the headphone port is a little loose so I am trying to install an external USB sound card (Fiio E17 USB DAC). No problems using it on Windows.

The device is detected and I created a node for it in /dev with sh /dev/MAKEDEV audio1, then linked the rest of the devices to point to the new sound card. So far so good, I am able to run cat /dev/urandom > /dev/audio and I hear white noise. However, I am not able to run any other audio through it.

My tail /var/log/messages after plugging the device in:

Aug 30 10:03:55 s96j /bsd: uhidev0 at uhub1
Aug 30 10:03:55 s96j /bsd:  port 1 configuration 1 interface 0 "FiiO FiiO USB DAC-E17" rev 1.10/0.01 addr 2
Aug 30 10:03:55 s96j /bsd: uhidev0: iclass 3/0
Aug 30 10:03:55 s96j /bsd: uhid0 at uhidev0: input=18, output=27, feature=0
Aug 30 10:03:55 s96j /bsd: uaudio0 at uhub1
Aug 30 10:03:55 s96j /bsd:  port 1 configuration 1 interface 1 "FiiO FiiO USB DAC-E17" rev 1.10/0.01 addr 2
Aug 30 10:03:56 s96j /bsd: uaudio0: ignored setting with type 8193 format
Aug 30 10:03:56 s96j /bsd: uaudio0: audio rev 1.00, 2 mixer controls
Aug 30 10:03:56 s96j /bsd: audio1 at uaudio0

My list of relevant devices from /dev:

lrwxr-xr-x  1 root  wheel         6 Aug 30 09:44 audio -> audio1
crw-rw-rw-  1 root  wheel   42, 128 Aug 30 10:07 audio0
crw-rw-rw-  1 root  wheel   42, 129 Aug 30 10:15 audio1
crw-rw-rw-  1 root  wheel   42, 130 Aug 30 06:40 audio2
lrwxr-xr-x  1 root  wheel         9 Aug 30 09:44 audioctl -> audioctl1
crw-rw-rw-  1 root  wheel   42, 192 Aug 30 06:40 audioctl0
crw-rw-rw-  1 root  wheel   42, 193 Aug 30 09:44 audioctl1
crw-rw-rw-  1 root  wheel   42, 194 Aug 30 06:40 audioctl2
lrwxr-xr-x  1 root  wheel         6 Aug 30 09:45 mixer -> mixer1
crw-rw-rw-  1 root  wheel   42,  16 Aug 30 06:40 mixer0
crw-rw-rw-  1 root  wheel   42,  17 Aug 30 09:44 mixer1
crw-rw-rw-  1 root  wheel   42,  18 Aug 30 06:40 mixer2
lrwxr-xr-x  1 root  wheel         6 Aug 30 09:45 sound -> sound1
crw-rw-rw-  1 root  wheel   42,   0 Aug 30 06:40 sound0
crw-rw-rw-  1 root  wheel   42,   1 Aug 30 09:44 sound1
crw-rw-rw-  1 root  wheel   42,   2 Aug 30 06:40 sound2

A simple test from the FAQ to determine if data is passing over the device:

# cat > /dev/audio < /dev/zero &
[1] 21098
# audioctl play.{seek,samples,errors}
play.seek=61712
play.samples=1146080
play.errors=0
# audioctl play.{seek,samples,errors}
play.seek=52896
play.samples=1542800
play.errors=0
# audioctl play.{seek,samples,errors}
play.seek=61712
play.samples=1957152
play.errors=0

My audioctl -a:

name=USB audio
version=
config=uaudio
encodings=slinear_le:16:2:1,slinear_le:24:3:1
properties=independent
full_duplex=0
fullduplex=0
blocksize=8816
hiwat=7
lowat=1
output_muted=0
monitor_gain=0
mode=
play.rate=44100
play.sample_rate=44100
play.channels=2
play.precision=16
play.bps=2
play.msb=1
play.encoding=slinear_le
play.gain=127
play.balance=32
play.port=0x0
play.avail_ports=0x0
play.seek=8816
play.samples=131988
play.eof=0
play.pause=0
play.error=1
play.waiting=0
play.open=0
play.active=0
play.buffer_size=65536
play.block_size=8816
play.errors=2267
record.rate=44100
record.sample_rate=44100
record.channels=2
record.precision=16
record.bps=2
record.msb=1
record.encoding=slinear_le
record.gain=127
record.balance=32
record.port=0x0
record.avail_ports=0x0
record.seek=0
record.samples=0
record.eof=0
record.pause=0
record.error=0
record.waiting=0
record.open=0
record.active=0
record.buffer_size=65536
record.block_size=8816
record.errors=0

And lastly, my mixerctl -a:

outputs.aux.mute=off
outputs.aux=255,255

Again I am able to cat /dev/urandom > /dev/audio and get white noise, but nothing else I've tried lets me output other sounds or music. I also tried cat sample.au > /dev/audio but that was silent as well.

Any suggestions or help would be greatly appreciated! Worst case, hopefully someone can use the steps I outlined here to troubleshoot their own sound devices.

by ssh2ksh at August 20, 2014 01:42 PM

StackOverflow

Save Gatling results as a JSON or xml file

Recently I have started using Gatling, but to integrate Gatling with Jenkins I need the output in JSON or XML format. How can I achieve this?

by swapnil at August 20, 2014 01:33 PM

Scala Dependency injection

I'm not asking for opinion here but facts.

I'm trying to pick a new DI library. I have had some experience with Guice. Overall I would say that one advantage of it is that when you need to integrate Scala with Java, Guice does the job. So for interoperability it's a clear plus.

If we put aside this interoperability issue, can anyone, give me a brief comparison between

scaldi, Guice, MacWire?

I'm still new at understanding scaldi. One thing that I found surprising is the idea of having to pass your injector around through an implicit parameter. I almost never did that in Guice. I either wire up everything in my main, or use assisted injection, hence passing the factories to the classes that need some specific instance.

If someone could further elaborate on that design choice i would appreciate.

Many thanks,

-M-

Here is something strange I found with MacWire:

trait Interf {
  def name:String
}

class InterfImpl(val name:String) extends Interf

trait AModule {

  import com.softwaremill.macwire.MacwireMacros._


   //lazy val aName: String = "aName"
   lazy val theInterf: Interf = wire[InterfImpl]

}

object injector extends AModule

println(injector.theInterf.name)

I get a strange value. I don't know what MacWire is doing at that level. I thought it would produce a compile error or something, since I did not give any String value.

by MaatDeamon at August 20, 2014 01:26 PM

mongodb database with scala play 2.0 tutorial

Is there a tutorial how I can use mongodb database with scala play 2.0?

On the official website (playframework.org) there seems to be only the SQL example.

by play at August 20, 2014 01:12 PM

scala REPL - scala is not installed

I have installed sbt as told in following statements:

  1. Download sbt from here: http://scalasbt.artifactoryonline.com/scalasbt/sbt-native-packages/org/scala-sbt/sbt/0.12.4/sbt.tgz
  2. Unpack the archive to a directory of your choice
  3. Add the bin/ directory to the PATH environment variable. Open the file ~/.bashrc in an editor (create it if it doesn’t exist) and add the following line export PATH=/PATH/TO/YOUR/sbt/bin:$PATH

But when I type scala in a terminal, it says scala is not installed! Though sbt -h works fine. How do I resolve the issue?
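
For what it's worth: sbt bundles its own Scala compiler, so running sbt console from a project directory drops you into a Scala REPL without a separate Scala installation. The bare scala command only exists once the Scala distribution itself is installed and its bin directory is on the PATH — installing sbt alone does not provide it.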

by Ru11 at August 20, 2014 01:08 PM

leiningen - how to add dependencies for local jars?

I want to use leiningen to build and develop my clojure project. Is there a way to modify project.clj to tell it to pick some jars from local directories?

I have some proprietary jars that cannot be uploaded to public repos.

Also, can leiningen be used to maintain a "lib" directory for clojure projects? If a bunch of my clojure projects share the same jars, I don't want to maintain a separate copy for each of them.

Thanks

by signalseeker at August 20, 2014 12:51 PM

Planet Emacsen

Irreal: A Followup on Leaving Gmail

In my post about Chen Bin’s guide to using Gnus with Gmail, I mentioned that in my own quest to move my email operations to Emacs, I was looking at three packages: mew, mu4e, and gnus. In the comments, I got a couple more recommendations. David recommended Wanderlust as a mature and full featured solution. Sam recommended that I look at Notmuch. Both useful additions to my list and I’m glad to have them even though they complicate my decision making.

Sam also provided a link to a post by the invaluable Christopher Wellons that compares Notmuch and mu4e. Wellons’ post is interesting because it’s principally about moving off of Gmail and onto his own server that he would access using an Emacs-based email client. I found this particularly interesting because that’s my end goal: no email middlemen that offer the NSA and others easy access to my email.

If you’re OK with Gmail but would just like to compose messages in Emacs, Artur Malabarba has got you covered with his gmail-message-mode that lets you hot key from your browser to Emacs when you want to compose an email. Malabarba’s got it working with Chrome, Firefox, and Conkeror. He uses Markdown to compose messages but it could probably be patched to use Org-mode fairly easily. In any event if you’re interested in integrating Gmail and Emacs, give Malabarba’s post a look.

by jcs at August 20, 2014 12:30 PM

StackOverflow

What's wrong with my implementation of the memoize function?

Theoretically, memoization applied to a referentially transparent function like Fibonacci should speed things up considerably.

Function.prototype.memoize = function () {
  var cache = {},
      slice = Array.prototype.slice,
      originalFunction = this;
  return function () {
    var key = slice.call(arguments);
    if (key in cache) {
      return cache[key];
    } else {
      return (cache[key] = originalFunction.apply(this, key));
    }
  };
};

var Y = function (f) {
  return function (x) {
    return f(Y(f))(x);
  };
};

var almostFibonacci = function (f) {
  return function (n) {
    return n == 0 || n == 1 ? n : f(n - 1) + f(n - 2);
  };
};

console.log(Y(almostFibonacci).memoize()(50));

The above implementation does not. Computing the 50th Fibonacci number, which ought to be quite doable with a memoized version, seems impossible in this case. The scratchpad in Firefox keeps dying when I try.

I have a feeling that maybe the closure isn't working. Any idea what I'm doing wrong?

http://jsfiddle.net/x18ewhsu/
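
One hedged diagnosis: memoize wraps only the outermost call, while the recursive calls inside almostFibonacci go through Y(f), which rebuilds fresh, unmemoized closures at every step — so the call tree stays exponential (and this Y adds its own overhead on top). The closure itself is fine; for the cache ever to be hit, the memoized function has to be the one fed back into the fixed point, i.e. memoize inside Y rather than around its result.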

by Siddharth at August 20, 2014 12:28 PM

StackOverflow

Multiplying numbers on horizontal, vertial, and diagonal lines

I'm currently working on a Project Euler problem (www.projecteuler.net) for fun but have hit a stumbling block. One of the problems provides a 20x20 grid of numbers and asks for the greatest product of 4 numbers on a straight line. This line can be either horizontal, vertical, or diagonal.

Using a procedural language I'd have no problem solving this, but part of my motivation for doing these problems in the first place is to gain more experience and learn more Haskell.
As of right now I'm reading in the grid and converting it to a list of list of ints, eg -- [[Int]]. This makes the horizontal multiplication trivial, and by transposing this grid the vertical also becomes trivial.

The diagonal is what is giving me trouble. I've thought of a few ways where I could use explicit array slicing or indexing to get a solution, but it seems overly complicated and hacky. I believe there is probably an elegant, functional solution here, and I'd love to hear what others can come up with.

by untwisted at August 20, 2014 12:21 PM

QuantOverflow

How can theta be so large on this option?

The AAPL Sep 95 put currently has a theta of -.21. The put midpoint is .84, and .84/.21 = 4 days. However, the put has nearly a month before expiration, at which time it will be zero. Not 4 days from now.

What am I doing wrong or missing in the above calculation?
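
A hedged note on the calculation: theta is the instantaneous rate of time decay right now, not a constant slope to expiration. It is recomputed continuously as time passes and the underlying moves, so premium divided by today's theta does not give the option's remaining life; the premium decays along a curve whose slope changes daily and reaches zero only at expiry.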

by 4thSpace at August 20, 2014 12:06 PM

StackOverflow

Spark throwing Out of Memory error

I have a single test node with 8 GB RAM on which I am loading barely 10 MB of data (from CSV files) into Cassandra (on the same node itself). I'm trying to process this data using Spark (running on the same node).

Please note that for SPARK_MEM I'm allocating 1 GB of RAM, and the same for SPARK_WORKER_MEMORY. The allocation of any extra amount of memory results in Spark throwing a "Check if all workers are registered and have sufficient memory" error, which is more often than not indicative of Spark trying to look for extra memory (as per the SPARK_MEM and SPARK_WORKER_MEMORY properties) and coming up short.

When I try to load and process all data in the Cassandra table using the Spark context object, I get an error during processing. So, I'm trying to use a looping mechanism to read chunks of data at a time from one table, process them, and put them in another table.

My source code has the following structure

var data=sc.cassandraTable("keyspacename","tablename").where("value=?",1)
data.map(x=>tranformFunction(x)).saveToCassandra("keyspacename","tablename")

for(i<-2 to 50000){
    data=sc.cassandraTable("keyspacename","tablename").where("value=?",i)
    data.map(x=>tranformFunction(x)).saveToCassandra("keyspacename","tablename")    
}

Now, this works for a while, for around 200 loops, and then this throws an error: java.lang.OutOfMemoryError: unable to create a new native thread.

I've got two questions:

Is this the right way to deal with data?
How can processing just 10 MB of data do this to a cluster?

by Mkl Rjv at August 20, 2014 12:06 PM

CompsciOverflow

How to simulate a die given a fair coin

Suppose that you're given a fair coin and you would like to simulate the probability distribution of repeatedly flipping a fair (six-sided) die. My initial idea is that we need to choose appropriate integers $k,m$, such that $2^k = 6m$. So after flipping the coin $k$ times, we map the number encoded by the k-length bitstring to outputs of the die by dividing the range $[0,2^k-1]$ into 6 intervals each of length $m$. However, this is not possible, since $2^k$ has two as its only prime factor but the prime factors of $6m$ include three. There should be some other simple way of doing this, right?
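
Yes — the standard trick is rejection sampling: flip the coin three times to get a uniform number in [0, 8), keep it if it lands in [0, 6), and retry otherwise. A minimal Scala sketch, assuming the coin is the only source of randomness (simulated here with Random.nextBoolean):

import scala.util.Random

def coin(): Int = if (Random.nextBoolean()) 1 else 0

def die(): Int = {
  val x = 4 * coin() + 2 * coin() + coin()  // uniform in 0..7
  if (x < 6) x + 1 else die()               // reject 6 and 7, retry
}

Each attempt succeeds with probability 6/8, so the expected number of flips per roll is 3 · (8/6) = 4; the price of avoiding the impossible exact scheme is that termination is only probabilistic.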

by probability_guy at August 20, 2014 11:52 AM

QuantOverflow

Stock Price Evolution Equation Clarifications [on hold]

What is the dimension of Δt? Is it second [s]? What are the expected return and the expected volatility? How can I calculate them for any given stock? Should I use the return and volatility values for the moment in which I need to simulate the price? Is ε a random variable | ε ∈ [0,1) with an expected value of 1/2?

S(Δt) = S(0) · (1 + μ·Δt + σ·ε·√Δt)

  • S(0): The stock price today.
  • S(Δt): The stock price at a (small) time into the future.
  • Δt: A small increment of time.
  • μ: The expected return.
  • σ: The expected volatility.
  • ε: A (random) number sampled from a standard normal distribution.

by FormlessCloud at August 20, 2014 11:37 AM

/r/compsci

Solving bridge and torch puzzle with dynamic programming

I'm trying to solve a bridge and torch like problem with dynamic programming. More about this problem can be found on wikipedia (http://en.wikipedia.org/wiki/Bridge_and_torch_problem). The story goes like this:

Four people come to a river in the night. There is a narrow bridge, but it can only hold two people at a time. They have one torch and, because it's night, the torch has to be used when crossing the bridge. Person A can cross the bridge in one minute, B in two minutes, C in five minutes, and D in eight minutes. When two people cross the bridge together, they must move at the slower person's pace. The question is, can they all get across the bridge in 15 minutes or less?

Now I've managed to solve the problem using a graph, but I don't see how I can solve this type of problem using dynamic programming. How do you split up the problem into subproblems? And how do the solutions of the subproblems lead to the optimal solution of the whole problem? What are the stages and states?

Does somebody know how to solve this using DP? And maybe tell me how to solve this puzzle with Java?
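
One hedged formulation (sketched in Scala rather than Java): the state is the set of people still on the near side plus the torch's side; a subproblem is the minimum remaining time from a state, and memoizing over the 2^n · 2 states gives the DP:

object BridgeTorch {
  val times = Array(1, 2, 5, 8)   // crossing times for A, B, C, D
  val n = times.length
  val memo = collection.mutable.Map.empty[(Int, Boolean), Int]

  // mask: bit i set iff person i is still on the starting side
  def solve(mask: Int, torchHere: Boolean): Int =
    if (mask == 0) 0
    else memo.getOrElseUpdate((mask, torchHere), {
      var best = Int.MaxValue / 2
      if (torchHere) {
        // send one person (i == j) or two people across
        for (i <- 0 until n if (mask & (1 << i)) != 0;
             j <- i until n if (mask & (1 << j)) != 0)
          best = best min (math.max(times(i), times(j)) +
            solve(mask & ~(1 << i) & ~(1 << j), torchHere = false))
      } else {
        // someone on the far side brings the torch back
        for (i <- 0 until n if (mask & (1 << i)) == 0)
          best = best min (times(i) + solve(mask | (1 << i), torchHere = true))
      }
      best
    })

  def main(args: Array[String]): Unit =
    println(solve((1 << n) - 1, torchHere = true))  // 15
}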

submitted by Lesso_

August 20, 2014 11:31 AM

StackOverflow

How to write it with for-comprehension instead of nested flatMap calls?

I am trying to translate examples from this article to Scala.

So I defined a monadic class Parser with success as return function.

class Parser[A](val run: String => List[(A, String)]) {
  def apply(s: String) = run(s)
  def flatMap[B](f: A => Parser[B]): Parser[B] = {
    val runB = {s: String => for((r, rest) <- run(s); rb <- f(r)(rest)) yield rb}
    new Parser(runB)
  }
}

def success[A](a:A):Parser[A] = {
  val run = {s:String => List((a, s))}
  new Parser(run)
}

I defined also a new parser item to return the 1st character of the input.

def item(): Parser[Char] = {
  val run = {s: String => if (s.isEmpty) Nil else List((s.head, s.tail))}
  new Parser(run)
}

Now I am defining a new parser item12: Parser[(Char, Char)] to return a pair of 1st and 2nd characters

 def item12():Parser[(Char, Char)] = 
   item().flatMap(a => (item().flatMap(b => success(a, b))))

I would like to write it with a for-comprehension instead of nested flatMap calls. I understand that I need to define a map method for the Parser. How would you do that?
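
A minimal sketch of the missing piece: map only has to transform the parsed value and leave the remaining input untouched, and the yield in a for-comprehension desugars to exactly that final map call:

// inside class Parser[A]
def map[B](f: A => B): Parser[B] =
  new Parser(s => run(s) map { case (a, rest) => (f(a), rest) })

// item12 then reads:
def item12: Parser[(Char, Char)] =
  for (a <- item(); b <- item()) yield (a, b)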

by Michael at August 20, 2014 11:08 AM

Akka Dead Letters with Ask Pattern

I apologize in advance if this seems at all confusing, as I'm dumping quite a bit here. Basically, I have a small service grabbing some Json, parsing and extracting it to case class(es), then writing it to a database. This service needs to run on a schedule, which is being handled well by an Akka scheduler.

My database doesn't like it when Slick tries to ask for a new AutoInc id at the same time, so I built in an Await.result to block that from happening. All of this works quite well, but my issue starts here: there are 7 of these services running, so I would like to block each one using a similar Await.result system.

Every time I try to send the end time of the request back as a response (at the end of the else block), it gets sent to dead letters instead of to the Distributor. Basically: why does sender ! time go to dead letters and not to Distributor? This is a long question for a simple problem, but that's how development goes...

ClickActor.scala

    import java.text.SimpleDateFormat
    import java.util.Date
    import Message._
    import akka.actor.{Actor, ActorLogging, Props}
    import akka.util.Timeout
    import com.typesafe.config.ConfigFactory
    import net.liftweb.json._
    import spray.client.pipelining._
    import spray.http.{BasicHttpCredentials, HttpRequest, HttpResponse, Uri}
    import akka.pattern.ask
    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration._

case class ClickData(recipient : String, geolocation : Geolocation, tags : Array[String],
                     url : String, timestamp : Double, campaigns : Array[String],
                     `user-variables` : JObject, ip : String,
                     `client-info` : ClientInfo, message : ClickedMessage, event : String)
  case class Geolocation(city : String, region : String, country : String)
  case class ClientInfo(`client-name`: String, `client-os`: String, `user-agent`: String,
                      `device-type`: String, `client-type`: String)
  case class ClickedMessage(headers : ClickHeaders)
    case class ClickHeaders(`message-id` : String)

class ClickActor extends Actor with ActorLogging{

  implicit val formats = DefaultFormats
  implicit val timeout = new Timeout(3 minutes)
  import context.dispatcher

  val con = ConfigFactory.load("connection.conf")
  val countries = ConfigFactory.load("country.conf")
  val regions = ConfigFactory.load("region.conf")

  val df = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss -0000")
  var time = System.currentTimeMillis()
  var begin = new Date(time - (12 hours).toMillis)
  var end = new Date(time)

  val pipeline : HttpRequest => Future[HttpResponse] = (
    addCredentials(BasicHttpCredentials("api", con.getString("mailgun.key")))
      ~> sendReceive
    )

  def get(lastrun : Long): Future[String] = {

    if(lastrun != 0) {
      begin = new Date(lastrun)
      end = new Date(time)
    }

    val uri = Uri(con.getString("mailgun.uri")) withQuery("begin" -> df.format(begin), "end" -> df.format(end),
      "ascending" -> "yes", "limit" -> "100", "pretty" -> "yes", "event" -> "clicked")
    val request = Get(uri)
    val futureResponse = pipeline(request)
    return futureResponse.map(_.entity.asString)
  }

  def receive = {
    case lastrun : Long => {
      val start = System.currentTimeMillis()
      val responseFuture = get(lastrun)
      responseFuture.onSuccess {
        case payload: String => val json = parse(payload)
          //println(pretty(render(json)))
          val elements = (json \\ "items").children
          if (elements.length == 0) {
            log.info("[ClickActor: " + this.hashCode() + "] did not find new events between " +
              begin.toString + " and " + end.toString)
            sender ! time
            context.stop(self)
          }
          else {
            for (item <- elements) {
              val data = item.extract[ClickData]
              var tags = ""
              if (data.tags.length != 0) {
                for (tag <- data.tags)
                  tags += (tag + ", ")
              }
              var campaigns = ""
              if (data.campaigns.length != 0) {
                for (campaign <- data.campaigns)
                  campaigns += (campaign + ", ")
              }
              val timestamp = (data.timestamp * 1000).toLong
              val msg = new ClickMessage(
                data.recipient, data.geolocation.city,
                regions.getString(data.geolocation.country + "." + data.geolocation.region),
                countries.getString(data.geolocation.country), tags, data.url, timestamp,
                campaigns, data.ip, data.`client-info`.`client-name`,
                data.`client-info`.`client-os`, data.`client-info`.`user-agent`,
                data.`client-info`.`device-type`, data.`client-info`.`client-type`,
                data.message.headers.`message-id`, data.event, compactRender(item))
              val csqla = context.actorOf(Props[ClickSQLActor])
              val future = csqla.ask(msg)
              val result = Await.result(future, timeout.duration).asInstanceOf[Int]
              if (result == 1) {
                log.error("[ClickSQLActor: " + csqla.hashCode() + "] shutting down due to lack of system environment variables")
                context.stop(csqla)
              }
              else if(result == 0) {
                log.info("[ClickSQLActor: " + csqla.hashCode() + "] successfully wrote to the DB")
              }
            }
            sender ! time
            log.info("[ClickActor: " + this.hashCode() + "] processed |" + elements.length + "| new events in " +
              (System.currentTimeMillis() - start) + " ms")
          }
      }
    }
  }
}

Distributor.scala

import akka.actor.{Props, ActorSystem}
import akka.event.Logging
import akka.util.Timeout
import akka.pattern.ask
import scala.concurrent.duration._
import scala.concurrent.Await

class Distributor {

  implicit val timeout = new Timeout(10 minutes)
  var lastClick : Long = 0

  def distribute(system : ActorSystem) = {
    val log = Logging(system, getClass)

    val clickFuture = (system.actorOf(Props[ClickActor]) ? lastClick)
    lastClick = Await.result(clickFuture, timeout.duration).asInstanceOf[Long]
    log.info(lastClick.toString)

    //repeat process with other events (open, unsub, etc)
  }
}
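
One hedged diagnosis of the dead letters: sender is a method, re-evaluated whenever it is called, and inside the onSuccess callback it runs after the actor has moved on (or, with context.stop(self), after it is gone), so it no longer refers to Distributor's ask. Capturing it before going asynchronous avoids that — a sketch, assuming the rest of receive is unchanged:

def receive = {
  case lastrun: Long =>
    val start = System.currentTimeMillis()
    val replyTo = sender()        // capture while still processing the message
    val responseFuture = get(lastrun)
    responseFuture.onSuccess {
      case payload: String =>
        // ... existing parsing and persistence logic ...
        replyTo ! time            // safe: uses the captured ActorRef
    }
}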

by Brian Bagdasarian at August 20, 2014 11:04 AM

QuantOverflow

Probability of stock closing over a certain price

A stock has beta of 2.0 and stock specific daily volatility of 0.02. Suppose that yesterday's closing price was 100 and today the market goes up by 1%. What's the probability of today's closing price being at least 103?
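
A hedged sketch of the intended arithmetic, assuming a one-factor market model with normally distributed stock-specific returns: the expected move is beta × market = 2.0 × 1% = 2%, so the expected close is 102. Reaching 103 requires a further 1% of stock-specific return, i.e. z = 0.01 / 0.02 = 0.5 standard deviations, giving P(close ≥ 103) = 1 − Φ(0.5) ≈ 0.31.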

by Ginger at August 20, 2014 10:50 AM

StackOverflow

Playframework unusual CPU load

Recently we started using the Play Framework and are seeing some unusual activity in CPU load.

Machine details and other configurations:

32G Machine
12  Cores
PlayFramework 2.2.0
java -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ReservedCodeCacheSize=128m
java applications are running within a docker container(Docker version 0.8.0).

There are 6 Play servers running behind nginx.

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31752 root      20   0 7876m 1.2g  14m S  716  3.8 150:55.28 java
26282 root      20   0 7862m 1.2g  14m S   48  3.8 310:51.65 java
56449 root      20   0 7789m 389m  13m S    2  1.2   0:33.10 java
40006 root      20   0 7863m 1.2g  14m S    2  3.8  17:56.41 java
42896 root      20   0 7830m 1.2g  14m S    1  3.8  15:10.30 java
52119 root      20   0 7792m 1.2g  14m S    1  3.7   8:48.38 java

The request rate is at max 100Req/s.

Has anyone faced similar issues before? Please let me know.

by Shrikar at August 20, 2014 10:49 AM

Detecting the index in a string that is not printable character with Scala

I have a method that detects the indices of characters in a string that are not printable, as follows.

def isPrintable(v:Char) = v >= 0x20 && v <= 0x7E
val ba = List[Byte](33,33,0,0,0)
ba.zipWithIndex.filter { v => !isPrintable(v._1.toChar) } map {v => v._2}
> res115: List[Int] = List(2, 3, 4)

Each element of the result list is an index of a non-printable character, but I wonder if there is a simpler way to do this.
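
One slightly tighter version, assuming the same isPrintable, lets collect do the filter and the projection in one pass:

ba.zipWithIndex.collect { case (b, i) if !isPrintable(b.toChar) => i }
> res: List[Int] = List(2, 3, 4)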

by prosseek at August 20, 2014 10:44 AM

Fred Wilson

Fifty Three

Another year, another birthday.

For the past fifteen years, I’ve been spending my birthday on the beach with my family. That seems like the ideal way to do it. I hope that tradition lasts as long as I do.

The weather has been spectacular on the east end of long island this week and we spent most of yesterday afternoon on a boat in Sag Harbor.

Today, I plan to do some yoga, play some golf with my son, and have a family dinner tonight.

I don’t really enjoy receiving presents. The best present for me is to be somewhere awesome surrounded by my family. I’ve already received that present.

But if you feel that you must send me something, please make a small donation to CSNYC here. I would appreciate that very much.

by Fred Wilson at August 20, 2014 10:24 AM

Undeadly

Google offers 5 EuroBSDCon 2014 travel grants for female computer scientists

Via the EuroBSDCon 2014 organizers comes the news that Google will be sponsoring 5 female computer scientists to attend the EuroBSDCon 2014 conference. The announcement follows:

Google EMEA Women in Tech Conference and Travel grants for female computer scientists

As part of Google's ongoing commitment to encourage women to excel in computing and technology, Google is pleased to offer Women in Tech Travel and Conference Grants to attend the EuroBSDcon 2014 conference.

5 grants are offered, which include:

  • Free registration for the conference
  • Up to 1000 EUR towards travel costs (to be paid after the conference)

Read more...

August 20, 2014 10:09 AM

StackOverflow

Play framework- design suggestion for validation

I need to validate whether a certain newly added entity has already been added. I believe the standard way to do it is to add constraints. What I'm looking for, though, is to also tell the user what possible matches exist. Usually this would be a globalError, but given that globalError is the first argument of verifying, which is usually constant text, it wouldn't serve my purpose because I would like it to be a custom value computed from the return value of the validator.

Should I instead leave out the validator, allow the binding to succeed, and then, in the Right branch of the returned Either, first create the new element and, upon finding that potential similarities exist, redirect the user? I feel like this shouldn't be the controller's job: that is too much design logic inside the controller, and it should technically all live inside the model itself. The model should only tell the controller that something is wrong, and the 'type of wrongness', and the controller can take appropriate action. It shouldn't have to contain code to determine whether anything is wrong.

Can someone please suggest what should be a good way to do it?

by user247077 at August 20, 2014 10:02 AM

QuantOverflow

ADR vs Foreign Stock Price Arbitrageurs

So I am sure you all know about the whole Argentina default that has been in the papers lately; no need to delve into it. This so-called "technical" default has led to some interesting investment opportunities (Soros Doubles YPF Stake) and a conundrum that I cannot answer.

The telecommunication company Telecom Argentina has ADRs listed on the NYSE. They are also locally traded on the Buenos Aires Stock Exchange. The ADR price, denominated in USD, is currently ~19 USD. The locally traded stock is worth ~45 ARS (~5 USD) last I checked on Bloomberg at work today, creating a 14 USD discrepancy.

There might be some voting-right differences, but the shares are pretty much at parity, and certainly not enough to explain a 14 USD difference. Many of you might argue that the spread is due to currency, but that is not the case: the ARS is pegged to the USD at ~8.20 ARS per USD. Country risk premium? Maybe so, but wouldn't that be priced mostly into the currency/bonds, which would then be reflected in the stock? To give you an idea, the stock trades at 12X earnings on the Buenos Aires exchange but at 8X here on the NYSE; something is not adding up.

Aren't there quant shops that do cross-country arbitrage between ADRs and their respective domiciled stocks? And why aren't they arbitraging this spread? There is clearly something wrong.


by jessica at August 20, 2014 09:37 AM

StackOverflow

Put Clojure Ring middlewares in correct order

I'm having trouble with my Clojure server middlewares. My app has the following requirements:

  • Some routes should be accessible with no problems. Others require basic authentication, so I'd like to have an authentication function that sits in front of all the handler functions and makes sure the request is verified. I've been using the ring-basic-authentication handler for this, especially the instructions on how to separate your public and private routes.

  • However, I'd also like the params sent in the Authorization: header to be available in the route controller. For this I've been using Compojure's site function in compojure.handler, which puts variables in the :params dictionary of the request (see for example Missing form parameters in Compojure POST request)

However I can't seem to get both 401 authorization and the parameters to work at the same time. If I try this:

; this is a stripped down sample case:

(defn authenticated?
  "authenticate the request"
  [service-name token]
  (:valid (model/valid-service-and-token service-name token)))

(defroutes token-routes
  (POST "/api/:service-name/phone" request (add-phone request)))

(defroutes public-routes
  controller/routes
  ; match anything in the static dir at resources/public
  (route/resources "/"))

(defroutes authviasms-handler
  public-routes
  (auth/wrap-basic-authentication 
             controller/token-routes authenticated?))

;handler is compojure.handler
(def application (handler/site authviasms-handler))

(defn start [port]
  (ring/run-jetty (var application) {:port (or port 8000) :join? false}))

the authorization variables are accessible in the authenticated? function, but not in the routes.

Obviously, this isn't a very general example, but I feel like I'm really spinning my wheels and just making changes at random to the middleware order and hoping things work. I'd appreciate some help both for my specific example, and learning more about how to wrap middlewares to make things execute correctly.

Thanks, Kevin

by Kevin Burke at August 20, 2014 09:36 AM

Installation of cider-nrepl

I've installed CIDER 0.7.0 and now when I start it inside of Emacs (via M-x cider-jack-in RET), I get the following warning:

WARNING: CIDER's version (0.7.0) does not match cider-nrepl's version (not installed)

I've downloaded cider-nrepl and found out that it consists of Clojure code, not Emacs Lisp code. Since I started exploring the Clojure world just today, and there are no installation instructions on the project page, could you tell me how I can install cider-nrepl?

by Mark at August 20, 2014 09:31 AM

CompsciOverflow

What is regular about regular languages? [duplicate]

This question already has an answer here:

I am new to automata theory. I am well aware of the definition of a regular language in automata theory, that is, "a language is called a regular language if some finite automaton recognizes/accepts it" [MS]. However, I'm confused about why such a language (a set of strings) is called regular.

  1. What is regular about a regular language?

  2. Is there any relation between a regular set in automata theory and a regular set in mathematics?

by Sanjay Singh at August 20, 2014 09:30 AM

TheoryOverflow

exact cover set problem

I am searching for a heuristic algorithm for a weighted exact cover problem, shown here: http://en.wikipedia.org/wiki/Exact_cover

In my research I only found algorithms which calculate all solutions without any cost function, like http://en.wikipedia.org/wiki/Knuth%27s_Algorithm_X

Do you know of an algorithm for that?

My problem has a universe of size 200 and about 500,000 subsets, so it is not possible to calculate all solutions.

--------------EDIT--------------

Example:

Universe = {1, 2, 3, 4, 5, 6, 7}

Sets = [
  {1,2,3,4}, cost 10
  {5,6,7},   cost 20
  {5},       cost 5
  {6,7},     cost 10
]

In this example there are two possible solutions: {1,2,3,4} and {5,6,7} with cost 30, or {1,2,3,4}, {5} and {6,7} with cost 25.

by Hunk at August 20, 2014 09:21 AM

StackOverflow

Creating custom ScalaFX controls

What exactly is the right way to create a custom ScalaFX control? I'm coming from Swing and Scala Swing, where custom components are simply created by extending Component or Panel. But when I try to extend ScalaFX's Control, I can't extend it without a JavaFX Control delegate. Should I just create custom ScalaFX components by extending the base JavFX classes instead of the ScalaFX classes?

by tempestfire2002 at August 20, 2014 09:21 AM

Return second string if first is empty?

Here is an idiom I find myself writing.

def chooseName(nameFinder: NameFinder) = {
  if(nameFinder.getReliableName.isEmpty) nameFinder.getSecondBestChoice
  else nameFinder.getReliableName
}

In order to avoid calling getReliableName() twice on nameFinder, I add code that makes my method look less elegant.

def chooseName(nameFinder: NameFinder) = {
  val reliableName = nameFinder.getReliableName()
  val secondBestChoice = nameFinder.getSecondBestChoice()
  if(reliableName.isEmpty) secondBestChoice
  else reliableName
}

This feels dirty because I am creating an unnecessary amount of state using the vals for no reason other than to prevent a duplicate method call. Scala has taught me that whenever I feel dirty there is almost always a better way.

Is there a more elegant way to write this?

Here are two Strings; return whichever isn't empty, favoring the first.
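
One minimal sketch: wrap the first candidate in an Option so that filter plus getOrElse express "the first unless it's empty", calling each getter once:

def chooseName(nameFinder: NameFinder): String =
  Some(nameFinder.getReliableName)
    .filter(_.nonEmpty)
    .getOrElse(nameFinder.getSecondBestChoice)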

by Cory Klein at August 20, 2014 09:00 AM

Compile error when adding List as parameter

This code:

package neuralnetwork

object hopfield {
  println("Welcome to the Scala worksheet")

  object Neuron {
    def apply() = new Neuron(0, 0, false, Nil, "")
    def apply(l : List[Neuron]) = new Neuron(0, 0, false, l, "")
  }

  case class Neuron(w: Double, tH: Double, var fired: Boolean, in: List[Neuron], id: String)

  val n2 = Neuron
  val n3 = Neuron
  val n4 = Neuron
  val l = List(n2,n3,n4)
  val n1 = Neuron(l)



}

causes a compile error:

type mismatch; found : List[neuralnetwork.hopfield.Neuron.type] required: List[neuralnetwork.hopfield.Neuron]

at the line: val n1 = Neuron(l)

Why does this occur? What is incorrect about the implementation that prevents the List from being used?
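
One hedged reading of the error: val n2 = Neuron binds the companion object itself (whose type is Neuron.type), not an instance, so l is a List[Neuron.type]. Invoking apply produces actual instances:

val n2 = Neuron()          // Neuron.apply(): a real Neuron instance
val n3 = Neuron()
val n4 = Neuron()
val l  = List(n2, n3, n4)  // now List[Neuron]
val n1 = Neuron(l)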

by blue-sky at August 20, 2014 08:58 AM

/r/compsci

Linux Virus

With the increasing influence of Linux as one of the most widely used operating systems supporting infrastructure and the economy, how is it that Linux developers have solved the problems of malware and virus attacks so elegantly?

submitted by DKBOS

August 20, 2014 08:58 AM

StackOverflow

Is PartialFunction orElse looser on its type bounds than it should be?

Let's define a PartialFunction[String, String] and a PartialFunction[Any, String]

Now, given the definition of orElse

def orElse[A1 <: A, B1 >: B](that: PartialFunction[A1, B1]): PartialFunction[A1, B1] 

I would expect not to be able to compose the two, since

AString
A1Any

and therefore the bound A1 <: A (i.e. Any <: String) doesn't hold.

Unexpectedly, I can compose them and obtain a PartialFunction[String, String] defined on the whole String domain. Here's an example:

val a: PartialFunction[String, String] = { case "someString" => "some other string" }
// a: PartialFunction[String,String] = <function1>

val b: PartialFunction[Any, String] = { case _ => "default" }
// b: PartialFunction[Any,String] = <function1>

val c = a orElse b
// c: PartialFunction[String,String] = <function1>

c("someString")
// res4: String = some other string

c("foo")
// res5: String = default

c(42)
// error: type mismatch;
//   found   : Int(42)
//   required: String

Moreover, if I explicitly provide the orElse type parameters

a orElse[Any, String] b
// error: type arguments [Any,String] do not conform to method orElse's type parameter bounds [A1 <: String,B1 >: String]

the compiler finally shows some sense.

Is there any type system sorcery I'm missing that causes b to be a valid argument for orElse? In other words, how come that A1 is inferred as String?

If the compiler infers A1 from b then it must be Any, so where else does the inference chain that leads to String start?


Update

After playing with the REPL I noticed that orElse returns an intersection type A with A1 when the types don't match. Example:

val a: PartialFunction[String, String] = { case "someString" => "some other string" }
// a: PartialFunction[String,String] = <function1>

val b: PartialFunction[Int, Int] = { case 42 => 32 }
// b: PartialFunction[Int,Int] = <function1>

a orElse b
// res0: PartialFunction[String with Int, Any] = <function1>

Since (String with Int) <:< String this works, even though the resulting function is practically unusable. I also suspect that String with Any is unified into String, given that

import reflect.runtime.universe._
// import reflect.runtime.universe._   

typeOf[String] <:< typeOf[String with Any]
// res1: Boolean = true

typeOf[String with Any] <:< typeOf[String]
// res2: Boolean = true

So that's why mixing String and Any results in String.

That being said, what is going on under the hood? Under which logic are the mismatching types unified?

Update 2

I've reduced the issue to a more general form:

class Foo[-A] {
  def foo[B <: A](f: Foo[B]): Foo[B] = f
}

val a = new Foo[Any]
val b = new Foo[String]

a.foo(b) // Foo[String] Ok, String <:< Any
b.foo(a) // Foo[String] Shouldn't compile! Any <:!< String
b.foo[Any](a) // error: type arguments [Any] do not conform to method foo's type parameter bounds [A <: String]
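
One hedged reading that covers all three snippets: because the type parameter is contravariant (PartialFunction[-A, +B], Foo[-A]), Foo[Any] is a subtype of Foo[String]. So in b.foo(a) the compiler is free to infer B = String, and a: Foo[Any] is then accepted where a Foo[B] is expected, since Foo[Any] <: Foo[String]; the bound B <: String holds. Only when the type argument is pinned to Any by hand does the bound actually fail, which is why the explicit calls are rejected.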

by Gabriele Petronella at August 20, 2014 08:57 AM

StackOverflow

How resolve sbt dependencies in background in Intellij Idea when open new project?

When I open an sbt project with IntelliJ IDEA it starts a long-running dependency-resolution process that blocks the UI.


Is there a way to just open a project and put the dependency-resolution process into the background (like when I open a Maven project)?

by Cherry at August 20, 2014 08:26 AM

"Flattening" a List in Scala & Haskell

Given a List[Option[Int]]:

scala> list
res8: List[Option[Int]] = List(Some(1), Some(2), None)

I can get List(1,2), i.e. extract the list via flatMap and flatten:

scala> list.flatten
res9: List[Int] = List(1, 2)

scala> list.flatMap(x => x)
res10: List[Int] = List(1, 2)

Given the following [Maybe Int] in Haskell, how can I perform the above operation?

I tried the following unsuccessfully:

import Control.Monad

maybeToList :: Maybe a -> [b]
maybeToList Just x  = [x]
maybeToList Nothing = []

flatten' :: [Maybe a] -> [a]
flatten' xs = xs >>= (\y -> y >>= maybeToList)
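
For reference, Data.Maybe.catMaybes already has exactly the type [Maybe a] -> [a]. The attempt above also needs parentheses around the constructor pattern (maybeToList (Just x) = [x]), a result type of [a] rather than [b], and a single bind in the list monad (flatten' xs = xs >>= maybeToList) rather than a nested one.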

by Kevin Meredith at August 20, 2014 08:18 AM

Getting Started with Playframework 2.0 and Selenium

I am using Play framework 2.0. I would like to write some browser-based acceptance tests using Selenium, but I have never used Selenium before, much less integrated it with Play or Scala.

What is a basic setup that I can copy and work from?

by Jacob Groundwater at August 20, 2014 07:37 AM

TheoryOverflow

Highway dimension

I'm interested in understanding some recent theoretical results on pathfinding. Specifically this paper:

http://research.microsoft.com/apps/pubs/default.aspx?id=201061

I understand from the paper that in certain types of graphs (road networks for example) a small set of nodes are sufficient to cover a large number of shortest paths. I also understand that the size of the hitting set that forms such a cover can differ from node to node but is upper bounded by some integer h which is called the highway dimension of the graph.

Two questions:

i) The concept of highway dimension is defined in this paper (Definition 3.4, page 5) using a distance parameter r and a hitting set H which exists for every vertex v in the graph. Specifically, the authors say (slightly paraphrasing) "for all r > 0 and all v there exists an H whose size is bounded by h (the highway dimension of the graph)".

I don't know how to interpret this: is the highway-dimension h constant for all tuples (r, v) or does the value depend on the choice of r? I tend toward the former interpretation but the paper seems ambiguous on this point.

ii) The definition also makes reference to paths P of length > r which can be reached from the vertex v with distance no more than 2r. I don't understand the significance of the (r, 2r) construction. Would it not be simpler to construct a hitting set that covers all paths which begin at v and have length > r? What is gained by this more complicated definition?

by Daniel at August 20, 2014 07:25 AM

Is it decidable to check whether two PDAs accept the same language or not? [on hold]

Is the problem of deciding whether two PDAs accept the same language decidable or undecidable?

by Ujjwal Saini at August 20, 2014 07:24 AM

StackOverflow

set parameter in play framework action within template

I have a single input form in a play scala template:

@helper.form(action=routes.Application.searchResult()) {
     <input type="text" name="userQuery" value="@userQuery">
}

and would like to pass an extra parameter, '@channel' to the searchResult action, which it takes as an optional argument.

@channel is passed as an argument to the current template. What's the simplest way to do this?

I tried replacing

routes.Application.searchResult()

with

routes.Application.searchResult(channel=channel)

with no success
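
A hedged guess at the missing piece: the reverse router only accepts parameters that the route entry declares, so searchResult needs channel in conf/routes before routes.Application.searchResult(channel = channel) can compile — something along these lines (the paths, names and defaults here are assumptions):

GET  /search  controllers.Application.searchResult(userQuery: String ?= "", channel: Option[String] ?= None)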

by f13ts at August 20, 2014 07:18 AM

TheoryOverflow

If a problem is NP-complete, then what are its subproblems, P or NP? [on hold]

Let A be an NP-complete problem, and suppose A is reduced to B. What kind of problem is B? Is it an NP-class problem, a P-class problem, or both?

by Ujjwal Saini at August 20, 2014 07:15 AM

StackOverflow

Scala: how to merge a collection of Maps

I have a List of Map[String, Double], and I'd like to merge their contents into a single Map[String, Double]. How should I do this in an idiomatic way? I imagine that I should be able to do this with a fold. Something like:

val newMap = Map[String, Double]() /: listOfMaps { (accumulator, m) => ... }

Furthermore, I'd like to handle key collisions in a generic way. That is, if I add a key to the map that already exists, I should be able to specify a function that returns a Double (in this case) and takes the existing value for that key, plus the value I'm trying to add. If the key does not yet exist in the map, then just add it and its value unaltered.

In my specific case I'd like to build a single Map[String, Double] such that if the map already contains a key, then the Double will be added to the existing map value.

I'm working with mutable maps in my specific code, but I'm interested in more generic solutions, if possible.
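
A minimal sketch of the fold, first with the summing behaviour described above and then with the collision handler abstracted out (names here are illustrative):

val newMap = listOfMaps.foldLeft(Map.empty[String, Double]) { (acc, m) =>
  m.foldLeft(acc) { case (merged, (k, v)) =>
    merged.updated(k, merged.getOrElse(k, 0.0) + v)
  }
}

// generic version: `combine` decides what happens on a key collision
def merge(maps: List[Map[String, Double]])(combine: (Double, Double) => Double) =
  maps.foldLeft(Map.empty[String, Double]) { (acc, m) =>
    m.foldLeft(acc) { case (merged, (k, v)) =>
      merged.updated(k, merged.get(k).fold(v)(combine(_, v)))
    }
  }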

by Jeff at August 20, 2014 07:12 AM

TheoryOverflow

Invert a number modulo a composite number

Supposing $M$ is a composite number and supposing $a$ is an integer such that $a^{-1}\mod M$ exists, can we compute $a^{-1} \mod M$ in $O(\log^{b}(M))$ arithmetic computations where $b>0$ and is some fixed number. Let $a$ and $M$ be of similar sizes.
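
Yes: the extended Euclidean algorithm computes $\gcd(a, M) = a x + M y$ in $O(\log M)$ arithmetic steps, and when the gcd is 1, $x \bmod M$ is the inverse, so $b = 1$ suffices. A minimal sketch (Scala's BigInt also exposes this directly as a.modInverse(M), via java.math.BigInteger):

def modInverse(a: BigInt, m: BigInt): Option[BigInt] = {
  // egcd returns (g, x, y) with a*x + b*y = g = gcd(a, b)
  def egcd(a: BigInt, b: BigInt): (BigInt, BigInt, BigInt) =
    if (b == 0) (a, BigInt(1), BigInt(0))
    else {
      val (g, x, y) = egcd(b, a % b)
      (g, y, x - (a / b) * y)
    }
  val (g, x, _) = egcd(a, m)
  if (g == 1) Some(((x % m) + m) % m) else None
}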

by J.A at August 20, 2014 07:09 AM

StackOverflow

ReactiveMongo & Play: How to compare two DateTime instances

I use Play-ReactiveMongo to interact with MongoDB... and I'm wondering how to compare two dates considering that I don't use BSON in my application. Let me provide you with an example:

def isTokenExpired(tokenId: String): Future[Boolean] = {

  var query = collection.genericQueryBuilder.query(
    Json.obj(
      "_id" -> Json.obj("$oid" -> tokenId),
      "expirationTime" -> Json.obj("$lte" -> DateTime.now(DateTimeZone.UTC))
    )
  ).options(QueryOpts(skipN = 0))

  query.cursor[JsValue].collect[Vector](1).map(_.nonEmpty)
}

isTokenExpired does not work as expected since expirationTime is considered a String – I have an implicit Writes that serializes a DateTime as "yyyy-MM-ddTHH:mm:ss.SSSZ"... and this is correct, since I want human-readable JSON.

That said, how do I get a document from a collection that has a DateTime less than another DateTime? The following doesn't seem to work:

Json.obj(
  "_id" -> Json.obj("$oid" -> tokenId),
  "expirationTime" -> Json.obj("$lte" -> Json.obj("$date" -> DateTime.now(DateTimeZone.UTC).getMillis))
)
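
A hedged sketch of the likely missing half: if expirationTime was written to MongoDB as a formatted string, $lte compares strings, and no query-side $date wrapper can fix that. Storing the field as a BSON date — via a second, Mongo-only Writes, separate from the human-readable one — makes the query above a real date comparison (assuming Play-ReactiveMongo's JSON/BSON mapping of $date):

val mongoDateTimeWrites: Writes[DateTime] = Writes { dt =>
  Json.obj("$date" -> dt.getMillis)
}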

Thanks.

by j3d at August 20, 2014 06:51 AM

Unexpected Difficulties with "Hello, World!"

I would like to learn Clojure and I've downloaded and set up the following gizmos:

  • Clojure 1.6.0 from official site;
  • Leiningen 2.4.3;
  • Cider 0.6.0 from GitHub.

I've got it working. Now I'm trying to print message "Hello, World!", while running Cider from within Emacs:

; CIDER 0.6.0 (Java 1.7.0_65, Clojure 1.6.0, nREPL 0.2.0-beta5)
user> (println "Hello World!")
Hello World!NoSuchMethodError clojure.tools.nrepl.StdOutBuffer.length()I
clojure.tools.nrepl.middleware.session/session-out/fn--7630
(session.clj:43)NoSuchMethodError clojure.tools.nrepl.StdOutBuffer.length()I
clojure.tools.nrepl.middleware.session/session-out/fn--7630 (session.clj:43)
user> 

What is this noise all about? When I just run:

$ clojure
;Clojure 1.6.0
user=> (println "Hello, World!")
Hello, World!
nil

everything is fine. When I do it with Leiningen:

$ lein repl
; lotsa stuff here...
user=> (println "Hello, World!")

After entering this command I relish the following poetry:

CompilerException java.lang.RuntimeException: Unable to resolve symbol: rprintln in this context, compiling:(NO_SOURCE_PATH:1:1) NoSuchMethodError clojure.tools.nrepl.StdOutBuffer.length()I  clojure.tools.nrepl.middleware.session/session-out/fn--7630 (session.clj:43)
Exception in thread "nREPL-worker-3" java.lang.NoSuchMethodError: clojure.tools.nrepl.StdOutBuffer.length()I
    at clojure.tools.nrepl.middleware.session$session_out$fn__7630.doInvoke(session.clj:43)
    at clojure.lang.RestFn.invoke(RestFn.java:460)
    at clojure.tools.nrepl.middleware.session.proxy$java.io.Writer$ff19274a.write(Unknown Source)
    at java.io.PrintWriter.write(PrintWriter.java:456)
    at java.io.PrintWriter.write(PrintWriter.java:473)
    at clojure.core$fn__5471.invoke(core_print.clj:191)
    at clojure.lang.MultiFn.invoke(MultiFn.java:231)
    at clojure.core$pr_on.invoke(core.clj:3392)
    at clojure.core$pr.invoke(core.clj:3404)
    at clojure.lang.AFn.applyToHelper(AFn.java:154)
    at clojure.lang.RestFn.applyTo(RestFn.java:132)
    at clojure.core$apply.invoke(core.clj:624)
    at clojure.core$prn.doInvoke(core.clj:3437)
    at clojure.lang.RestFn.applyTo(RestFn.java:137)
    at clojure.core$apply.invoke(core.clj:624)
    at clojure.core$println.doInvoke(core.clj:3457)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at clojure.main$repl_caught.invoke(main.clj:158)
    at clojure.tools.nrepl.middleware.interruptible_eval$evaluate$fn__7569$fn__7582.invoke(interruptible_eval.clj:76)
    at clojure.main$repl$fn__6634.invoke(main.clj:259)
    at clojure.main$repl.doInvoke(main.clj:257)
    at clojure.lang.RestFn.invoke(RestFn.java:1096)
    at clojure.tools.nrepl.middleware.interruptible_eval$evaluate$fn__7569.invoke(interruptible_eval.clj:56)
    at clojure.lang.AFn.applyToHelper(AFn.java:152)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.core$apply.invoke(core.clj:624)
    at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1862)
    at clojure.lang.RestFn.invoke(RestFn.java:425)
    at clojure.tools.nrepl.middleware.interruptible_eval$evaluate.invoke(interruptible_eval.clj:41)
    at clojure.tools.nrepl.middleware.interruptible_eval$interruptible_eval$fn__7610$fn__7613.invoke(interruptible_eval.clj:171)
    at clojure.core$comp$fn__4192.invoke(core.clj:2402)
    at clojure.tools.nrepl.middleware.interruptible_eval$run_next$fn__7603.invoke(interruptible_eval.clj:138)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Oh, Nooo. What a pain! Stop it, stop it!


I'm so perplexed. What is it and how do I fix it? Has anyone had a similar experience? How do I print "Hello, World!" in Clojure?

by Mark at August 20, 2014 06:25 AM

Changing sbt project's directory layout

According to the sbt tutorial on changing paths, I'm trying to change the "target" output directory to "someother":

override def outputDirectoryName = "someother"

Everything goes fine except one thing: sbt automatically creates the target directory with a ".history" file inside. Why does sbt do this when it is supposed to create only the "someother" dir? I tried to override all methods that are inherited from BasicProjectPaths (I use sbt.DefaultProject as the superclass of my project descriptor):

override def mainCompilePath = ...
override def testCompilePath = ...
...

But sbt still creates the "target" folder in spite of the path overrides.

by Jeriho at August 20, 2014 06:18 AM

QuantOverflow

Valuation of barrier options in Jump diffusion model

I am trying to evaluate the value of a barrier option using the Monte Carlo method. The stock follows a jump diffusion model. I am using the method described in Metwally and Atiya. The authors describe the steps, so writing the algorithm in, say, MATLAB should be easy. I have implemented the first algorithm described in this paper in MATLAB, but my results are not the same as those of the authors. For example, my code below gives 5.1 but according to the authors' results it should be 9.013.

The other problem I have is that the probability $P_i$ is sometimes negative or more than 1 during simulation. Could the formula in the paper be wrong? How can it be coded to avoid this? I have used it as it is in the paper.

clc 
clear all
t = cputime;

%%%%%%%%%%%%%%%%%%% Parameters %%%%%%%%%%%%%%%%
S0 = 100.0;
X = 110.0;
H = 85.0;
R = 1.0;
r = 0.05;
sigma = 0.25;
T = 1.0;

%%%%%%%%%%%%%%%% Jump Parameters %%%%%%%%%%%%%%
lam = 2.0;
muA = 0.0;
sigmaA = 0.1;

%%%%%%%%%%%%%%% calculated parameters %%%%%%%%%%
k = exp(muA+0.5*sigmaA*sigmaA)-1;
c = r-0.5*sigma^2-lam*k;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
N = 1e5; % Monte carlo runs
DP = zeros(N,1);
for n = 1:N
    I = 1;
    jumpTimes = 0:exprnd(lam):T; % interjump times Exp(lam)
    K = size(jumpTimes,2)-1;
    jumpTimes(end+1) = T;
    x = log(S0);
    for i = 1:K+1
        tau = jumpTimes(i+1)-jumpTimes(i);
        xbefore = x + c*tau + sigma*sqrt(tau)*randn();

        p = 1.0-exp(-2.0*(log(H)-x)*(log(H)-xbefore)/(tau*sigma^2));
        p = p*(xbefore > log(H));
        b = (jumpTimes(i+1)-jumpTimes(i))/(1.0-p);
        s = jumpTimes(i)+b*rand();

        if s <= jumpTimes(i+1) && s >= jumpTimes(i)
            gamma = exp(-(x-xbefore+c*tau)^2/(2*sigma^2*tau))/(sigma*sqrt(2*pi*tau));
            g = (x-log(H))/(2*gamma*pi*sigma^2)*(s-jumpTimes(i))^(-1.5)*(jumpTimes(i+1)-s)^(-0.5)*...
                exp(-((xbefore-log(H)-c*(jumpTimes(i+1)-s))^2/(2*(jumpTimes(i+1)-s)*sigma^2)+...
                (x-log(H)+c*(s-jumpTimes(i)))^2/(2*(s-jumpTimes(i))*sigma^2)));
            DP(n) = R*b*g*exp(-r*s);
            I = 0;
            break
        end
        A = muA + sigmaA*randn();
        xafter = xbefore + A;
        if xafter <= log(H)
            DP(n) = R*exp(-r*jumpTimes(i+1));
            I = 0;
            break
        end
        x = xafter;
    end
    if I==1 % no crossing happened
        xT = log(S0)+(c+muA*lam)*T+sqrt((sigma^2+(muA^2+sigmaA^2)*lam)*T)*randn();
        DP(n) = exp(-r*T)*max(exp(xT) - X, 0.0);
    end
end

DOC = mean(DP)
e = (cputime-t)/60;

by Moneyness at August 20, 2014 06:14 AM

StackOverflow

trying to understand zeromq high water mark behaviour

I have been playing around with pyzmq and simple load balancing using HWM, and I don't quite understand the behaviour I am seeing.

I have set up a simple multithreading test, with a DEALER client connected to two workers via a ROUTER to DEALER pattern. HWM is set to 1. One of the workers is very fast and the other is very slow, and all the client does is spam 100 messages to the server. This generally seems to work, and the fast worker processes many more messages than the slow worker.

However, even if I set the slow worker to be so slow that the fast worker should be able to process 99 messages before the slow worker finishes even one, the slow worker still seems to receive at least 2 or 3 messages.

Is high water mark behaviour inexact, or am I missing something?

The server code is as follows:

import re, sys, time, string, zmq, threading, signal


def worker_routine(worker_url, worker_id, context=None):
    # socket to talk to dispatcher
    context = context or zmq.Context.instance()
    socket = context.socket(zmq.REP)
    socket.set_hwm(1)
    socket.connect(worker_url)

    print "worker ", worker_id, " ready ..."
    while True:
        x = socket.recv()

        if worker_id==1:
            time.sleep(3)

        print worker_id, x
        sys.stdout.flush()

        socket.send(b'world')


context = zmq.Context().instance()
# socket facing clients
frontend = context.socket(zmq.ROUTER)
frontend.bind("tcp://*:5559")
# socket facing services
backend  = context.socket(zmq.DEALER)
url_worker = "inproc://workers"
backend.set_hwm(1)
backend.bind(url_worker)

# launch pool of worker threads
for i in range(2):
    thread = threading.Thread(target=worker_routine, args=(url_worker,i,))
    thread.start()
    time.sleep(0.1)

try:
    zmq.device(zmq.QUEUE, frontend, backend)
except:
    print "terminating!"

# we never get here
frontend.close()
backend.close()
context.term()

The client code is as follows:

import zmq, random, string, time, threading, signal

#  prepare our context and sockets
context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.connect("tcp://localhost:5559")

inputs = [''.join(random.choice(string.ascii_lowercase) for x in range(12)) for y in range(100)]

for x in xrange(100):
    socket.send_multipart([b'', str(x)])

print "finished!"

Example output:

...
0 81
0 82
0 83
0 84
0 85
0 86
0 87
0 88
0 89
0 90
0 91
0 92
0 93
0 94
0 95
0 96
0 97
0 98
0 99
1 1
1 3
1 5

by TheTaintedOne at August 20, 2014 06:04 AM

Can I have multiple selftypes in Scala?

Can I have a class that can have two different self types in Scala? Or emulate it in some way?

object Hi {
    trait One {
        val num = 1
    }
    trait Two {
        val num = 2
    }
    class Test {
        this: One => {
            println(num)
        }
        this: Two => {
            println(num)
        }
    }
}

import Hi._
new Test with One
new Test with Two
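
One way to emulate this (a sketch, assuming the shared member can be factored out): require a common supertrait as the single self type. The object name Hi2 is illustrative:

object Hi2 {
  trait HasNum { val num: Int }
  trait One extends HasNum { val num = 1 }
  trait Two extends HasNum { val num = 2 }
  class Test {
    this: HasNum =>
    // Code here can use num through the shared supertrait.
    def show(): Unit = println(num)
  }
}

import Hi2._
(new Test with One).show() // prints 1
(new Test with Two).show() // prints 2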

by mdenton8 at August 20, 2014 06:04 AM

Setting a default param as the value of another param

In Scala, I can set default params:

case class Foo(a:String, b:String = "hey")

What I would like to do is something like this:

case class Foo(a:String, b:String = a)

But that would result in an error:

not found: value a

This would be very useful in cases like these:

case class User(createdAt:DateTime = DateTime.now, updatedAt:DateTime = createdAt)

case class User(id:Long, profileName:String = "user-" + id.toString)
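
Two possible workarounds, as a sketch (not necessarily the idiomatic answer): a default in a second parameter list may refer to the first list, or a companion-object factory can fill in the dependent value:

// A default in a later parameter list can reference earlier parameters.
class Foo(val a: String)(val b: String = a)

// Or add an extra apply on the companion object.
case class User(id: Long, profileName: String)
object User {
  def apply(id: Long): User = User(id, "user-" + id.toString)
}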

by goo at August 20, 2014 05:55 AM

CompsciOverflow

What are the advantages and disadvantages of a larger word size?

I'm doing a sample exam paper and there is this question:

What are the advantages and disadvantages of a larger word size?

Thinking of advantages, the only one that comes to my mind is that we can store larger values (more data) in one place (variable). What are the others? What are the disadvantages?

Research I have done:

What I came up with: see above.

by pmichna at August 20, 2014 05:52 AM

StackOverflow

Get IntelliJ to respect multiple @throws annotations in Scala dependency from a Java project

I have a Java project which depends on a Scala project. Inside that Scala project, there is a particular method that uses two @throws(classOf[<some exception>]):

  @throws(classOf[ExtensionException])
  @throws(classOf[LogoException])
  def perform(args: Array[Argument], context: Context)

However, IntelliJ doesn't seem to know about both when I override the method:

The error is that the base method does not throw ExtensionException. The code compiles fine. Note that LogoException appears to be okay: when I delete ExtensionException from the throws declaration, the error goes away.

So, is there a way I can get IntelliJ to respect both throws declarations? Or is this a bug?

by Bryan Head at August 20, 2014 05:49 AM

/r/compilers

Name for language?

Hello everyone! I am huge on compiler and language design, and have designed my own little C-like language, and am currently working on the compiler (which will first compile to C (or C++) and eventually to ELF, COM, or Mach-O, depending on your system). I am having trouble coming up with a name though, and was wondering if someone here could help. Right now the only name I have is SLang (suggested by a friend, short for S Language, with the S being from DTSCode). Does anyone have any good ideas they would like to let me use?

submitted by DTSCode
[link] [8 comments]

August 20, 2014 05:11 AM

/r/compsci

Reading on Grammars?

What books can you recommend for a beginner to learn about regular language, context free grammars, formal grammars, etc? Something along the writing style of books like "Learn You a Haskell for Great Good", which doesn't assume much prior knowledge and is very reader friendly.

submitted by HifiBoombox
[link] [4 comments]

August 20, 2014 05:11 AM

CompsciOverflow

Complexity of Linear Diophantine equations

My question is simply: can linear Diophantine equations be solved in polynomial time? Specifically, I am looking at equations of the form $a_1 x_1+a_2 x_2 + \dots + a_n x_n = k$, where $a_i,x_i,k$ are all integers, and solving for the $x_i$. The algorithm I am using is based on the following journal article:

http://www.jstor.org/stable/3620787.

The algorithm I derived from the article is roughly as follows:

  1. Find a minimum coefficient (there could be many; just pick any of them).
  2. Reduce every coefficient other than the one you picked modulo the one you picked.
  3. Check if any coefficient is 1; if so, go to 5.

  4. Go to 1.

  5. Back-substitute to find solution.
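
A minimal Scala sketch of steps 1-4 (an illustration assuming the gcd of the coefficients is 1, so a coefficient of 1 is eventually reached; back-substitution, step 5, is omitted):

// Repeatedly reduce the coefficients modulo a minimum coefficient,
// recording each intermediate vector for later back-substitution.
def reduceUntilUnit(coeffs: Vector[Long]): Vector[Vector[Long]] = {
  def loop(cur: Vector[Long], acc: Vector[Vector[Long]]): Vector[Vector[Long]] =
    if (cur.contains(1L)) acc :+ cur
    else {
      val m = cur.filter(_ > 0).min                      // step 1
      val next = cur.map(a => if (a == m) a else a % m)  // step 2
      loop(next, acc :+ cur)                             // steps 3 and 4
    }
  loop(coeffs, Vector.empty)
}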

Currently, I have an incomplete argument that the algorithm is indeed polynomial time. My argument is as follows:

Suppose we choose the $a_j$ that minimizes the number of bits all of the coefficients are reduced by, thereby maximizing the number of iterations. Then, after we compute $a'_{i \neq j} = a_{i \neq j} \bmod a_j$, we know that $a'_{i \neq j} < a_{i \neq j}/2$, because if this were not so, then $a_j$ could go into $a_{i \neq j}$ another time. On the next iteration, the previous minimum $a_j$ cannot be the new minimum, since every other coefficient is less. Thus, the previous minimum will necessarily be reduced by one bit. Hence, all of the coefficients are reduced by at least 1 bit every 2 iterations. Thus, the maximum number of iterations is twice the maximum number of bits of a coefficient, $2\log_2(\max a_i)$; the back-substitution takes exactly the same number of steps, thus I estimate the algorithm is $O(\log(\max a_i))$, which is polynomial time with respect to the number of bits in the coefficient representation.

Am I missing something or is this correct? Please let me know if there is anything I need to clarify.

by John Jenkins at August 20, 2014 05:07 AM

TheoryOverflow

How can one find the "hard" probability distribution on the input for recursive boolean functions?

Update: Since it seems there is no progress regarding this question, any idea, conjecture, hunch, or advice is welcome. For example, are there any partial or incomplete results? What are the main challenges? Which techniques may be amenable to making progress? Any observation (irrespective of whether it is insightful or not, trivial or not) is also welcome.

Background:

Decision tree complexity or query complexity is a simple model of computation defined as follows. Let $f:\{0,1\}^n\to \{0,1\}$ be a Boolean function. The deterministic query complexity of $f$, denoted $D(f)$, is the minimum number of bits of the input $x\in\{0,1\}^n$ that need to be read (in the worst case) by a deterministic algorithm that computes $f(x)$. Note that the measure of complexity is the number of bits of the input that are read; all other computation is free.

We define the zero error or Las Vegas randomized query complexity of $f$, denoted $R_0(f)$, as the minimum number of input bits that need to be read in expectation by a zero-error randomized algorithm that computes $f(x)$. A zero-error algorithm always outputs the correct answer, but the number of input bits read by it depends on the internal randomness of the algorithm. (This is why we measure the expected number of input bits read.)

We define the bounded error or Monte Carlo randomized query complexity of $f$, denoted $R_2(f)$, to be the minimum number of input bits that need to be read by a bounded-error randomized algorithm that computes $f(x)$. A bounded-error algorithm always outputs an answer at the end, but it only needs to be correct with probability $\geq 1 - \delta$ ($2/3$, say).


Work on Recursive Boolean Functions:

There has been a line of work on the decision tree complexity of recursive boolean functions as mentioned below. The techniques focus on applying Yao's Lemma and using the distributional perspective guaranteed by it. This means we define a probability distribution on the inputs and the cost incurred by the best algorithm for this distribution gives a lower bound on the randomized decision tree complexity of the function. The worst possible distribution will give the actual randomized decision tree complexity.

The techniques in these works focus on giving a lower bound on the cost incurred by reading the "minority" bits (or vertices in the function tree) of the input via some form of induction. Another direction of attack could be to find the most "hard" distribution.


Some Notions

We define: the distribution $D^*$ on an input set $I$ is hard for a given function $f$ if $\forall D$ on $I$, $C(A, D) \leq C(A^*, D^*)$, where $C(A, D)$ is the expected cost (i.e., the number of input bits read in expectation) incurred by the deterministic decision tree $A$ when the input follows the probability distribution $D$, and where $A^* = \operatorname{argmin}_A C(A, D^*)$ and $A = \operatorname{argmin}_A C(A, D)$.

A distribution $D_1 < D_2$ if $C_m(D_1) < C_m(D_2)$, where $C_m(D_i) = C(A_i, D_i)$ and $A_i = \operatorname{argmin}_A C(A, D_i)$. In other words, $D_2$ is harder than $D_1$ means the best possible algorithm for $D_2$ does worse than the best possible algorithm for $D_1$. Note: the algorithm must be correct on the whole domain, and not just on the support of the distributions.

For the base case of a recursive boolean function, of say 2 bits or 4 bits, it is often easy to show a certain distribution to be hard; often it is an easy observation or an obvious fact. In many cases, it may seem natural that the "hard" distribution is the recursive extension of the hard distribution of the base case. However, this may not be true in general, especially if the function is not symmetric over the input bits but rather skewed, i.e., not all input bits are equally important for inferring the value of the function on certain inputs (or a subset thereof).

Question:

Is there any work on how to approach the problem of finding the "hard" distribution of the recursive function, from that of the base case function?

Is there any interesting connection of this problem with any other problems? Any comments are welcome.

References:

[1] M. Saks, A. Wigderson. Probabilistic Boolean Decision Trees and the Complexity of Evaluating Game Trees. Proceedings of the 27th Foundations of Computer Science, pp. 29–38, October 1986.

[2] M. Santha. On the Monte Carlo boolean decision tree complexity of read-once formulae. Random Structures and Algorithms, 6(1):75–87, 1995.

[3] Frédéric Magniez, Ashwin Nayak, Miklos Santha, and David Xiao. Improved bounds for the randomized decision tree complexity of recursive majority. In Luca Aceto, Monika Henzinger, and Jiri Sgall, editors, ICALP (1), volume 6755 of Lecture Notes in Computer Science, pages 317–329. Springer, 2011.

[4] Nikos Leonardos. An improved lower bound for the randomized decision tree complexity of recursive majority. In Fedor V. Fomin, Rusins Freivalds, Marta Z. Kwiatkowska, and David Peleg, editors, Proceedings of the 40th International Colloquium on Automata, Languages and Programming, volume 7965 of Lecture Notes in Computer Science, pages 696–708. Springer, 2013.

by Jardine at August 20, 2014 04:44 AM

StackOverflow

OpenJDK patch update for RHEL 6 server

We need to apply the JDK updates to one of the RHEL 6 servers. How do I apply the patch if I have the RPM package available, which I downloaded from the internet? I searched a lot on the internet but couldn't find steps to apply the JDK patch. Also, what precautions should be taken before applying the new RPM update so that the current functionality is not disturbed?

by 5hr4y at August 20, 2014 04:11 AM

CompsciOverflow

Probability Game Question [migrated]

I am new to probability. I am trying to solve the following problem.

In a game, the probability of winning the game is $w$, of losing the game is $l$, and of the game continuing is $(1 - w - l)$. What is the probability of winning the game within $m$ steps?

I know the answer. It is $p(m) = w + (1 - w - l) \cdot p(m - 1)$. Can someone explain why this is?
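
One way to read the recurrence is as conditioning on the outcome of the first step: you either win immediately, or the game continues and you must then win within the remaining $m-1$ steps (losing immediately contributes nothing, which is why $l$ does not appear):

$$p(m) = \underbrace{w}_{\text{win now}} + \underbrace{(1 - w - l)}_{\text{game continues}} \cdot \underbrace{p(m-1)}_{\text{win within } m-1 \text{ more steps}}$$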

by user2615516 at August 20, 2014 04:03 AM

StackOverflow

Confused about diagrams of Yampa switches

There is some diagrams of Yampa switches at:

http://www.haskell.org/haskellwiki/Yampa/switch

http://www.haskell.org/haskellwiki/Yampa/rSwitch

http://www.haskell.org/haskellwiki/Yampa/kSwitch

(and so on).

I've found that switch, the only diagram with a description, is the easiest one to understand. The others seem hard to follow, even reading the diagrams with the same symbols. For example, trying to read rSwitch with the symbols used for switch might go:

Be a recursive SF which is always fed a signal of type 'in' and returns a signal of type 'out'. Start with an initial SF of the same type, but someone outside the switch function (the [cond] square) may also pass a new SF via an Event (the type Event (SF in out) in the signature) while the condition is satisfied (hence the '?' before the [cond] square). In the case of the Event, Yampa would use the new SF instead of the existing one. This process is recursive since '?' (I can't get this from the diagram, except that the signature of rSwitch seems recursive).

And after I looked into the source of rSwitch, it looks like it uses switch to switch to the same initial SF recursively while t is fired (according to what is described in the diagram, although I don't see in the source code what the special t that would be fired is).

The Yampa Arcade paper explains dpSwitch with code and an example. The paper about the game 'Frag' also uses dpSwitch. However, rSwitch seems absent from these tutorials. So I really don't know how to use the r- or k-series switches, or in what cases we would need them.

by snowmantw at August 20, 2014 03:35 AM

CompsciOverflow

Can/Do multiple processes run simultaneously on a multi-core system?

I understand context switches and threading on a single core system, but I'm trying to understand what happens in a multi-core system. I know multiple threads from the same process can run simultaneously in a multi-core system. However, can multiple processes run simultaneously in such a system as well?

In other words, in a dual-core processor:

  • How many processes can run simultaneously (without context switching) if all processes are single-threaded?
  • How many processes can run simultaneously if there are 2 processes and both are multi-threaded?

by TriArc at August 20, 2014 03:35 AM

QuantOverflow

An optimization problem on Markov Chain

Consider a Markov Chain $\{X_n\}$ whose transition probability depends on some parameter $\theta$ ($p_{ij}(\theta)$). Now I want to optimize the following quantity

$$\lambda(\theta) = \lim_{n\to\infty} \frac{1}{n}E\left[\sum_{m=0}^{n}f_{X_m}(\theta)\right]$$

where $f_i(\theta)$ is a given function for each state $i$ of the Markov chain. We do not know the transition probabilities of the Markov chain; we only have a simulator which can generate states according to those probabilities for a given parameter $\theta$.

I want to know what methods are available in the literature for this problem.

by malkhor at August 20, 2014 03:22 AM

StackOverflow

Setting date range in queries

I would like to add a condition to my query to filter records which have a date falling within the current month. How can I achieve this? I tried to use clj-time to retrieve the current month and match it against the DB field, but that didn't give me any luck.

by Coding active at August 20, 2014 03:20 AM

converting a List of Map to a Map in Scala [duplicate]

This question already has an answer here:

I've a List of Maps and I want to create a Map.

For example -

Input

List(Map(a -> 1), Map(b -> 2), Map(c -> 3))

Output

Map( a -> 1, b -> 2, c -> 3 ) 
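
A minimal sketch of one way to collapse such a list (assuming later maps may simply win on duplicate keys):

val listOfMaps = List(Map("a" -> 1), Map("b" -> 2), Map("c" -> 3))

// Each Map is an Iterable of key/value pairs, so the list can be
// flattened into pairs and rebuilt as a single Map.
val merged = listOfMaps.flatten.toMap
// merged: Map(a -> 1, b -> 2, c -> 3)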

by Soumya Simanta at August 20, 2014 03:11 AM

Inconsistency with scala reflection library

I'm having trouble understanding why using Scala's runtime reflection in 2.11.1 gives me seemingly inconsistent results.

I am trying to inspect the type of a field contained in a java object, like this one:

import java.util.List;
import java.util.ArrayList;

public class Example {
  private List<Integer> listOfInts;

  public Example () {
    listOfInts = new ArrayList<Integer>();
  }  
}

Now suppose I have a Scala program that tries to reason about the type of the field inside "Example":

import java.lang.Class
import java.lang.reflect.Field
import java.util.List
import scala.reflect.runtime.{ universe => ru }

object Inspect extends scala.App {
  val example = new Example 
  val cls = example.getClass
  val listfield = cls.getDeclaredField("listOfInts")

  println(isListType(listfield)) // prints false 
  println(isListType(listfield)) // prints true, as do all subsequent calls

  def isListType (field: Field): Boolean = {
    /*
      A function that returns whether the type of the field is a list.
      Based on examples at http://docs.scala-lang.org/overviews/reflection/environment-universes-mirrors.html
    */
    val fieldcls = field.getType

    val mirror: ru.Mirror = ru.runtimeMirror(getClass.getClassLoader)
    val fieldsym: ru.ClassSymbol = mirror.classSymbol(fieldcls)
    val fieldtype: ru.Type = fieldsym.toType 

    (fieldtype <:< ru.typeOf[List[_]])
  }  
}

In this particular code snippet, the first call to isListType returns false, and the second returns true. If I switch the type operator from <:< to =:=, the first call returns true, and the second false.

I have a similar function in a larger code body, and have found that this behavior occurs even when the function is part of a static object. This does not happen when using unparameterized classes. While I intended the function to be pure, this is obviously not the case. Further experimentation has shown that there is some persistent state held somewhere. If I replace the isListType function with straight-line code, I get this:

...
val example = new Example   
val cls = example.getClass
val listfield = cls.getDeclaredField("listOfInts")
val fieldcls = listfield.getType

val mirror: ru.Mirror = ru.runtimeMirror(getClass.getClassLoader)
val fieldsym: ru.ClassSymbol = mirror.classSymbol(fieldcls)
val fieldtype: ru.Type = fieldsym.toType 

println(fieldtype <:< ru.typeOf[List[_]]) // prints false
println(fieldtype <:< ru.typeOf[List[_]]) // prints false

but if I reassign to fieldtype after the <:< operator, I get this:

// replace as under the fieldsym assignment
var fieldtype: ru.Type = fieldsym.toType 
println(fieldtype <:< ru.typeOf[List[_]]) // prints false

fieldtype = fieldsym.toType 
println(fieldtype <:< ru.typeOf[List[_]]) // prints true

while reassigning to fieldtype before the <:< operator gives this:

// replace as under the fieldsym assignment
var fieldtype: ru.Type = fieldsym.toType 
fieldtype = fieldsym.toType 

println(fieldtype <:< ru.typeOf[List[_]]) // prints false
println(fieldtype <:< ru.typeOf[List[_]]) // prints false

Does anyone understand what I'm doing wrong here, or at least have a way around this?

by Simon at August 20, 2014 02:52 AM

CompsciOverflow

Why do negative array indices make sense?

I have come across a weird experience in C programming. Consider this code:

#include <stdio.h>

int main(){
  int array1[6] = {0, 1, 2, 3, 4, 5};
  int array2[6] = {6, 7, 8, 9, 10, 11};

  printf("%d\n", array1[-1]);
  return 0;
}

When I compile and run this, I don't get any errors or warnings. As my lecturer said, the array index -1 accesses another variable. I'm still confused: why on earth does a programming language have this capability? I mean, why allow negative array indices?

by Fawzan at August 20, 2014 02:41 AM

StackOverflow

Cannot Connect to SQS with Akka-Camel

I'm trying to push messages through my Scala application to an SQS queue. I receive the following error when trying to connect to SQS:

ProducerRegistrar$$anonfun$receive$3.applyOrElse(CamelSupervisor.scala:159)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:425)
    at akka.actor.ActorCell.invoke(ActorCell.scala:386)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:230)
    at akka.dispatch.Mailbox.run(Mailbox.scala:212)
    at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:506)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:262)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1478)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: aws-sqs://analyticsSandboxSQS?accessKey=<access>&secretKey=<secret> due to: No component found with scheme: aws-sqs
    at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:475)
    at akka.camel.internal.ProducerRegistrar$$anonfun$receive$3.applyOrElse(CamelSupervisor.scala:151)
    ... 9 more

I have used the following code to set up the URI:

import akka.actor.{ Actor, ActorSystem, Props }
import akka.camel.{ Oneway, Producer }

class EventSenderSQS extends Actor with Producer with Oneway {
  def endpointUri = "aws-sqs://queueName?accessKey=<access>&secretKey=<secret>"
}

And I use the following to try to send a message:

val sys = ActorSystem("sys")
val eventsActor = sys.actorOf(Props[EventSenderSQS])
eventsActor ! "testMessage"

I am using akka-camel version 2.1.4, which should support the aws-sqs endpoint.
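
The "No component found with scheme: aws-sqs" line suggests the Camel AWS component is simply not on the classpath. As an assumption (the artifact name and version would need checking against the Camel version that akka-camel 2.1.4 pulls in), the sbt dependency would look something like:

// Sketch: the aws-sqs scheme is provided by Camel's camel-aws component,
// which has to be added explicitly; the version here is an assumption.
libraryDependencies += "org.apache.camel" % "camel-aws" % "2.10.3"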

by Andrew Jones at August 20, 2014 02:40 AM

CompsciOverflow

Why does bubble sort do $\Theta(n^2)$ comparisons on an $n$ element list?

I have a quick question on the bubble sort algorithm. Why does it perform $\Theta(n^2)$ comparisons on an $n$ element list?

I looked at the Wikipedia page and it does not seem to tell me. I know that it does a lot of work on large lists, but I don't see why the number of comparisons is quadratic.
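
For reference, the count comes from the nested passes: in the standard analysis, pass $i$ makes $n-i$ adjacent comparisons, so in total

$$\sum_{i=1}^{n-1}(n-i) = \frac{n(n-1)}{2} = \Theta(n^2).$$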

by Fernando Martinez at August 20, 2014 02:13 AM

StackOverflow

What is the proper way to return from an exception in Scala?

In a non-functional language, I might do something like:

try {
  // some stuff
} catch Exception ex {
  return false;
}

// Do more stuff

return true;

In Scala, however, this pattern is clearly not correct. If my Scala code looks like this:

try {
  // do some stuff
}
catch {
  case e: Exception => // I want to get out of here and return false
}

// do more stuff

true

How do I properly do that? I don't want to use the "return" statement, of course, but I also don't want to drop through and "do more stuff" and eventually return true.

by Christopher Ambler at August 20, 2014 02:09 AM

arXiv Logic in Computer Science

The Logic of Approximate Dependence. (arXiv:1408.4437v1 [math.LO])

We extend the treatment of functional dependence, the basic concept of dependence logic, to include the possibility of dependence with a limited number of exceptions. We call this approximate dependence. The main result of the paper is a Completeness Theorem for approximate dependence atoms. We point out some problematic features of this which suggests that we should consider multi-teams, not just teams.

by <a href="http://arxiv.org/find/math/1/au:+Vaananen_J/0/1/0/all/0/1">Jouko V&#xe4;&#xe4;n&#xe4;nen</a> at August 20, 2014 01:30 AM

Proceedings of the First International Workshop on FPGAs for Software Programmers (FSP 2014). (arXiv:1408.4423v1 [cs.AR])

This volume contains the papers accepted at the First International Workshop on FPGAs for Software Programmers (FSP 2014), held in Munich, Germany, September 1st, 2014. FSP 2014 was co-located with the International Conference on Field Programmable Logic and Applications (FPL).

by <a href="http://arxiv.org/find/cs/1/au:+Hannig_F/0/1/0/all/0/1">Frank Hannig</a>, <a href="http://arxiv.org/find/cs/1/au:+Koch_D/0/1/0/all/0/1">Dirk Koch</a>, <a href="http://arxiv.org/find/cs/1/au:+Ziener_D/0/1/0/all/0/1">Daniel Ziener</a> at August 20, 2014 01:30 AM

The Structure of Optimal and Near Optimal Target Sets in Consensus Models. (arXiv:1408.4364v1 [cs.DM])

We consider the problem of identifying a subset of nodes in a network that will enable the fastest spread of information in a decentralized environment. In a model of communication based on a random walk on an undirected graph, the optimal set over all sets of the same or smaller cardinality minimizes the sum of the mean first arrival times to the set by walkers starting at nodes outside the set. The problem originates from the study of the spread of information or consensus in a network and was introduced in this form by V. Borkar et al. in 2010. More generally, the work of A. Clark et al. in 2012 showed that estimating the fastest rate of convergence to consensus of so-called leader-follower systems leads to a consideration of the same optimization problem.

The set function $F$ to be minimized is supermodular and therefore the greedy algorithm is commonly used to construct optimal sets or their approximations. In this paper, the problem is reformulated so that the search for solutions is restricted to optimal and near optimal subsets of the graph. We prove sufficient conditions for the existence of a greedoid structure that contains feasible optimal and near optimal sets. It is therefore possible, we conjecture, to search for optimal or near optimal sets by local moves in a stepwise manner to obtain near optimal sets that are better approximations than the factor $(1-1/e)$ degree of optimality guaranteed by the use of the greedy algorithm. A simple example illustrates aspects of the method.

by <a href="http://arxiv.org/find/cs/1/au:+Hunt_F/0/1/0/all/0/1">Fern Y. Hunt</a> at August 20, 2014 01:30 AM

Mahonian STAT on words. (arXiv:1408.4290v1 [cs.DM])

In 2000, Babson and Steingrímsson introduced the notion of what is now known as a permutation vincular pattern, and based on it they re-defined known Mahonian statistics and introduced new ones, proving or conjecturing their Mahonity. These conjectures were proved by Foata and Zeilberger in 2001, and by Foata and Randrianarivony in 2006.

In 2010, Burstein refined some of these results by giving a bijection between permutations with a fixed value for the major index and those with the same value for STAT, where STAT is one of the statistics defined and proved to be Mahonian in the 2000 Babson and Steingrímsson paper. Several other statistics are preserved as well by Burstein's bijection.

At the Formal Power Series and Algebraic Combinatorics Conference (FPSAC) in 2010, Burstein asked whether his bijection has other interesting properties. In this paper, we not only show that Burstein's bijection preserves the Eulerian statistic ides, but also use this fact, along with the bijection itself, to prove Mahonity of the statistic STAT on words we introduce in this paper. The words statistic STAT introduced by us here addresses a natural question on existence of a Mahonian words analogue of STAT on permutations. While proving Mahonity of our STAT on words, we prove a more general joint equidistribution result involving two six-tuples of statistics on (dense) words, where Burstein's bijection plays an important role.

by <a href="http://arxiv.org/find/cs/1/au:+Kitaev_S/0/1/0/all/0/1">Sergey Kitaev</a>, <a href="http://arxiv.org/find/cs/1/au:+Vajnovszki_V/0/1/0/all/0/1">Vincent Vajnovszki</a> at August 20, 2014 01:30 AM

Enhancing the Accuracy of Device-free Localization Using Spectral Properties of the RSS. (arXiv:1408.4239v1 [cs.NI])

Received signal strength based device-free localization has attracted considerable attention in the research society over the past years to locate and track people who are not carrying any electronic device. Typically, the person is localized using a spatial model that relates the time domain signal strength measurements to the person's position. Alternatively, one could exploit spectral properties of the received signal strength which reflects the rate at which the wireless propagation medium is being altered, an opportunity that has not been exploited in the related literature. In this paper, the power spectral density of the signal strength measurements are related to the person's position and velocity to augment the particle filter based tracking algorithm with an additional measurement. The system performance is evaluated using simulations and validated using experimental data. Compared to a system relying solely on time domain measurements, the results suggest that the robustness to parameter changes is increased while the tracking accuracy is enhanced by 50% or more when 512 particles are used.

by <a href="http://arxiv.org/find/cs/1/au:+Kaltiokallio_O/0/1/0/all/0/1">Ossi Kaltiokallio</a>, <a href="http://arxiv.org/find/cs/1/au:+Yigitler_H/0/1/0/all/0/1">H&#xfc;seyin Yi&#x11f;itler</a>, <a href="http://arxiv.org/find/cs/1/au:+Jantti_R/0/1/0/all/0/1">Riku J&#xe4;ntti</a> at August 20, 2014 01:30 AM

A Price Selective Centralized Algorithm for Resource Allocation with Carrier Aggregation in LTE Cellular Networks. (arXiv:1408.4151v1 [cs.NI])

In this paper, we consider a resource allocation with carrier aggregation optimization problem in long term evolution (LTE) cellular networks. In our proposed model, users are running elastic or inelastic traffic. Each user equipment (UE) is assigned an application utility function based on the type of its application. Our objective is to allocate multiple carriers resources optimally among users in their coverage area while giving the user the ability to select one of the carriers to be its primary carrier and the others to be its secondary carriers. The UE's decision is based on the carrier price per unit bandwidth. We present a price selective centralized resource allocation with carrier aggregation algorithm to allocate multiple carriers resources optimally among users while providing a minimum price for the allocated resources. In addition, we analyze the convergence of the algorithm with different carriers rates. Finally, we present simulation results for the performance of the proposed algorithm.

by <a href="http://arxiv.org/find/cs/1/au:+Shajaiah_H/0/1/0/all/0/1">Haya Shajaiah</a>, <a href="http://arxiv.org/find/cs/1/au:+Abdelhadi_A/0/1/0/all/0/1">Ahmed Abdelhadi</a>, <a href="http://arxiv.org/find/cs/1/au:+Clancy_C/0/1/0/all/0/1">Charles Clancy</a> at August 20, 2014 01:30 AM

Ameliorate Threshold Distributed Energy Efficient Clustering Algorithm for Heterogeneous Wireless Sensor Networks. (arXiv:1408.4112v1 [cs.NI])

Ameliorating the lifetime of a heterogeneous wireless sensor network is an important task because the sensor nodes have limited energy. The best way to improve a WSN's lifetime is clustering-based algorithms, in which each cluster is managed by a leader called the Cluster Head. Every other node must communicate with this CH to send its sensed data. The nodes nearest the base station must also send their data to their leaders, which causes a loss of energy. In this paper, we propose a new approach to ameliorate a threshold distributed energy efficient clustering protocol for heterogeneous wireless sensor networks by excluding the nodes closest to the base station from the clustering process. We show by simulation in MATLAB that the proposed approach clearly increases the number of received packet messages and prolongs the lifetime of the network compared to the TDEEC protocol.

by <a href="http://arxiv.org/find/cs/1/au:+Baghouri_M/0/1/0/all/0/1">Mostafa Baghouri</a>, <a href="http://arxiv.org/find/cs/1/au:+Chakkor_S/0/1/0/all/0/1">Saad Chakkor</a>, <a href="http://arxiv.org/find/cs/1/au:+Hajraoui_A/0/1/0/all/0/1">Abderrahmane Hajraoui</a> at August 20, 2014 01:30 AM

On the expected number of equilibria in a multi-player multi-strategy evolutionary game. (arXiv:1408.3850v1 [math.PR] CROSS LISTED)

In this paper, we analyze the mean number $E(n,d)$ of internal equilibria in a general $d$-player $n$-strategy evolutionary game where the agents' payoffs are normally distributed. First, we give a computationally implementable formula for the general case. Next we characterize the asymptotic behavior of $E(2,d)$, estimating its lower and upper bounds as $d$ increases. Then we provide an exact formula for $E(n,2)$. As a consequence, we show that in both cases the probability to see the maximal possible number of equilibria tends to zero when $d$ or $n$ respectively goes to infinity. Finally, for larger $n$ and $d$, numerical results are provided and discussed.

by <a href="http://arxiv.org/find/math/1/au:+Duong_M/0/1/0/all/0/1">Manh Hong Duong</a>, The <a href="http://arxiv.org/find/math/1/au:+Han_A/0/1/0/all/0/1">Anh Han</a> at August 20, 2014 01:30 AM

Type Expressiveness and Its Application in Separation of Behavior Programming and Data Management Programming. (arXiv:1107.3193v10 [cs.PL] UPDATED)

A new behavior-descriptive entity type called spec is proposed, which combines the traditional interface with test rules and test cases, to completely specify the desired behavior of each method, and to enforce the behavior-wise correctness of all compiled units. Using spec, a new programming paradigm is proposed, which separates the programming space into 1) a behavior domain to aggregate all behavior programming in the format of specs, 2) an object domain to bind each concrete spec to its data representation in a particular address space, and 3) a realization domain to connect the behavior domain and the object domain. Such separation guarantees the strictness of behavior satisfaction at compile time, while allowing the flexibility of dynamically binding the actual implementation at runtime. A new convention called type expressiveness, to allow data exchange between different programming languages and between different software environments, is also proposed.

by <a href="http://arxiv.org/find/cs/1/au:+Wang_C/0/1/0/all/0/1">Chengpu Wang</a> at August 20, 2014 01:30 AM

StackOverflow

Java/Scala library for algebra, mathematics

Can you advise me of some flexible and powerful, yet fast, library which could cover SciPy (both in performance and functionality)? I found SciPy very expressive - but I want to try something in Scala.

I read a little about Scala - but it is not as fully featured as SciPy. Any alternatives? Maybe a Java library?

by Robert Zaremba at August 20, 2014 01:19 AM

CompsciOverflow

Why is comparison the dominant time consumption for comparison-based sorting algorithms? [duplicate]

This question already has an answer here:

Comparison-based sorting algorithms do a number of different operations to accomplish the sorting; why are comparisons the dominant time consumption? While I understand the standard analyses of the asymptotic behavior of the number of comparison operations, I don't quite understand why the costs of other types of operations are negligible.

by user78219 at August 20, 2014 01:18 AM

QuantOverflow

Is it possible to model general wrong way risk via concentration risk?

General wrong way risk (GWWR) is defined as risk due to a positive correlation between the level of exposure and the default probability of the counterparty, driven by general market factors. (Specific wrong way risk is when they are positively correlated for reasons specific to the counterparty.) According to the “Risk Concentration Principles” (bcbs63), “different entities within the conglomerate could be exposed to the same or similar risk factors or to apparently unrelated risk factors that may interact under some unusual stressful circumstances.”

Given that different market factors tend to have a stronger positive correlation when one is talking about the same country/region (mainly the base curves), the same industry (mainly the spreads), etc., should concentration risk (per region, industry, ...) be used to model general wrong way risk?

With 5 regions (Americas, UK, Europe (ex UK), Japan, Asia-Pacific (ex Japan)) and 10 sectors (Energy, Basic Materials, Industrials, Consumer Cyclical, Consumer Non-Cyclical, Health Care, Financials, Information Technology, Telecommunication Services and Utilities), you should be able to get the GWWR from a sort of variance of the concentration from the average_of_sectors (ideally 10%) and average_of_regions (ideally 20%). When you have 40% of your exposure in Energy, 30% in Financials, 20% in Telecommunication Services and 10% in whatever else, you are reasonably well diversified. What I mean is, assuming that the rest of the parameters are all the same (same maturities, types of instruments = bonds to simplify, principals, etc.), the GWWR should be much larger for 40-10-40-10 than for 30-30-30-10.

Ex1: A Swiss company receives CHF, buys materials in EUR and takes a loan in EUR to pay for them. In case the EUR rises with respect to the CHF, both the probability of default of the company (raw materials increase in price) and its exposure in CHF increase. As default is a statistical property, having 40% of your portfolio as loans provided to many such companies will make you notice the defaults (which no longer behave idiosyncratically, as they would with a single company). Assume the lender does not structure its business around the EUR/CHF exchange risk.

Ex2: You are a European lender 10 years ago. People buy houses and earn salaries in the local currency and take mortgages in CHF, as CHF had very low/the lowest interest rates. The CHF rises by a factor of 1.25, and the exposure rises by 25%. The probability of default rises, as the price of the house/collateral does not rise in the local currency and the monthly payment goes well over the allowed indebtedness percentage. If you are providing many such mortgages, you are exposed to GWWR proportional to their concentration with respect to your portfolio.

My question is whether general wrong way risk is not a form of double counting. (Shouldn't wrong way risk include only the specific wrong way risk?) Could someone, please, give an example of GWWR where concentration is not a factor?

I guess that one can regress credit risk/hazard rates on market factors and look for strong correlations, but this should already be accounted for by the stressed VaR.

by user7056 at August 20, 2014 01:17 AM

QuantOverflow

what is General IB2 Restriction in Basel II credit risk model

I was reading the Basel II wiki page; it says:

The first pillar

The first pillar deals with maintenance of regulatory capital calculated for three major components of risk that a bank faces: credit risk, operational risk, and market risk. Other risks are not considered fully quantifiable at this stage.

The credit risk component can be calculated in three different ways of varying degree of sophistication, namely standardized approach, Foundation IRB, Advanced IRB and General IB2 Restriction. IRB stands for "Internal Rating-Based Approach".

Any idea what such a “General IB2 Restriction” is? I checked the Basel II: International Convergence of Capital Measurement and Capital Standards: a Revised Framework, Comprehensive Version (BCBS) (June 2006 Revision) but couldn't find any definition.

by athos at August 20, 2014 12:56 AM

Stress Testing Methods

I'm working on the following task:

Given quarterly data:

  1. a time series representing the 1-year realized (10 years of data) rates of default on a portfolio of mortgages
  2. a slew of realized (10 years of data) macroeconomic time series. Each time series may or may not be relevant
  3. A stressed scenario of those same macroeconomic time series for 2 years

Estimate the probability of default using the stressed data.

I don't actually know anything about underlying distributions. The only data I have for inference are these time series.

My initial approach was something like this: I would first make every time series stationary. Then eliminate macroeconomic variables that were not significantly correlated with my dependent variable. Then use a stepwise method to determine the best variables to use in a linear regression. Then I would include those exogenous variables while fitting an ARIMA model. Along the way I would do several tests (e.g., autocorrelation, multicollinearity, stationarity, etc.). Then use that model for prediction.

Note that I actually have several different "portfolios" which I am fitting. Using my above procedure, some of the stressed scenarios appear unreasonable. So, I began looking for totally different alternatives. Are there any suggestions?

I realize this is an unreasonably broad question. To narrow the scope, I've done some brief research and believe some viable alternatives might include:

  • Calibrating some dynamic transition densities using Bayesian inference and MCMC
  • Calibrating a conditional Vasicek model that allows for autocorrelation

The problem is, I'm not too familiar with these methods and would want to make efficient use of my time.

Would you suggest I attempt implementing these alternatives? Or some other alternative?

Do you have any advice for implementation in R?

Thank you!

by nsw at August 20, 2014 12:49 AM

StackOverflow

implicit definition and tail recursion

Probably the most straightforward definition of the list reversal function in a functional language is (using Haskell-like pseudocode)

rev [] = []
rev (x:xs) = (rev xs) ++ [x]

However, every beginning functional programmer is taught that this implementation is inefficient and that one should instead write

rev' [] acc = acc
rev' (x:xs) acc = rev' xs (x:acc)
rev l = rev' l []

A bad thing about the efficient version is that the programmer is forced to introduce an auxiliary function and parameter whose meaning is not very clear. It occurred to me that it might be possible to avoid this if a language permitted implicit definitions roughly like the following:

rev [] = []
(rev (x:xs)) ++ m = (rev xs) ++ (x:m)

These equations fully determine the behavior of rev, so they might be said to constitute an implicit definition of it. They do not have the defect of introducing the auxiliary function rev'. Yet there is a natural way of evaluating the function that will be efficient. For instance, here is a plausible reduction sequence:

rev [1,2,3]
matches second line with x=1, xs=[2,3], m=[]
reduces to (rev [2,3]) ++ [1]
matches second line with x=2, xs=[3], m=[1]
reduces to (rev [3]) ++ [2,1]
matches second line with x=3, xs=[], m=[2,1]
reduces to (rev []) ++ [3,2,1]
reduces ultimately to [3,2,1]

I don't have much of a sense for how widely this kind of thing could be applied, but it does seem to work nicely in this example at least, and it seems to me that it could at least work for some similar cases where one would otherwise have to introduce auxiliary functions for the sake of efficiency. Can anyone point me to any papers that discuss something like this or languages that support something like this? It sort of feels like a logic programming thing to me, but I have very little experience with logic programming.

by Marian at August 20, 2014 12:46 AM

Planet Theory

Unconstrained Quasi-Submodular Function Optimization

Authors: Jincheng Mei, Kang Zhao, Bao-Liang Lu
Download: PDF
Abstract: With the extensive application of submodularity, its generalizations are constantly being proposed. However, most of them are tailored for special problems. In this paper, we focus on quasi-submodularity, a universal generalization, which satisfies weaker properties than submodularity but still enjoys favorable performance in optimization. Similar to the diminishing return property of submodularity, we first define a corresponding property called the single sub-crossing, then we propose two algorithms for unconstrained quasi-submodular function minimization and maximization, respectively. The proposed algorithms return the reduced lattices in O(n) iterations, and guarantee the objective function values are strictly monotonically increased or decreased after each iteration. Moreover, any local and global optima are definitely contained in the reduced lattices. Experimental results verify the effectiveness and efficiency of the proposed algorithms on lattice reduction.

August 20, 2014 12:41 AM

Fast Approximate Matrix Multiplication by Solving Linear Systems. (arXiv:1408.4230v2 [cs.DS] UPDATED)

In this paper, we present novel deterministic algorithms for multiplying two $n \times n$ matrices approximately. Given two matrices $A,B$ we return a matrix $C'$ which is an \emph{approximation} to $C = AB$. We consider the notion of approximate matrix multiplication in which the objective is to make the Frobenius norm of the error matrix $C-C'$ arbitrarily small. Our main contribution is to first reduce the matrix multiplication problem to solving a set of linear equations and then use standard techniques to find an approximate solution to that system in $\tilde{O}(n^2)$ time. To the best of our knowledge this is the first examination into designing quadratic time deterministic algorithms for approximate matrix multiplication which guarantee arbitrarily low \emph{absolute error} w.r.t. the Frobenius norm.

by <a href="http://arxiv.org/find/cs/1/au:+Manne_S/0/1/0/all/0/1">Shiva Manne</a>, <a href="http://arxiv.org/find/cs/1/au:+Pal_M/0/1/0/all/0/1">Manjish Pal</a> at August 20, 2014 12:41 AM

Optimal Polynomial Solution for the Minimum Sum Two Paths Problem

Authors: Costas K. Constantinou, Georgios Ellinas
Download: PDF
Abstract: The current paper presents the first optimal polynomial solution to the extensively investigated, long-standing problem of finding a pair of disjoint paths with minimum total cost between two sources and two destinations, i.e., to the problem known as the Minimum Sum Two Paths Problem. An algorithm with polynomial time complexity that gives the optimal solution for any arbitrary undirected graph, for both cases of node-disjoint and edge-disjoint paths, is presented in the paper, along with its proof of correctness.

August 20, 2014 12:41 AM

Approximate Revenue Maximization in Interdependent Value Settings

Authors: Shuchi Chawla, Hu Fu, Anna Karlin
Download: PDF
Abstract: We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.

August 20, 2014 12:40 AM

Efficient Online Strategies for Renting Servers in the Cloud

Authors: Shahin Kamali, Alejandro López-Ortiz
Download: PDF
Abstract: In Cloud systems, we often deal with jobs that arrive and depart in an online manner. Upon its arrival, a job should be assigned to a server. Each job has a size which defines the amount of resources that it needs. Servers have uniform capacity and, at all times, the total size of jobs assigned to a server should not exceed the capacity. This setting is closely related to the classic bin packing problem. The difference is that, in bin packing, the objective is to minimize the total number of used servers. In the Cloud, however, the charge for each server is proportional to the length of the time interval it is rented for, and the goal is to minimize the cost involved in renting all used servers. Recently, certain bin packing strategies were considered for renting servers in the Cloud [Li et al. SPAA'14]. There, it is proved that every Any-Fit bin packing strategy has a competitive ratio of at least $\mu$, where $\mu$ is the max/min interval length ratio of jobs. It is also shown that First Fit has a competitive ratio of $2\mu + 13$ while Best Fit is not competitive at all. We observe that the lower bound of $\mu$ extends to all online algorithms. We also prove that, surprisingly, the Next Fit algorithm has a competitive ratio of at most $2 \mu +1$. We also show that a variant of Next Fit achieves a competitive ratio of $K \times max\{1,\mu/(K-1)\}+1$, where $K$ is a parameter of the algorithm. In particular, if the value of $\mu$ is known, the algorithm has a competitive ratio of $\mu+2$; this improves upon the existing upper bound of $\mu+8$. Finally, we introduce a simple algorithm called Move To Front (MTF) which has a competitive ratio of at most $6\mu + 7$ and also promising average-case performance. We experimentally study the average-case performance of different algorithms and observe that the typical behaviour of MTF is distinctively better than that of the other algorithms.

August 20, 2014 12:40 AM

Quantified Conjunctive Queries on Partially Ordered Sets

Authors: Simone Bova, Robert Ganian, Stefan Szeider
Download: PDF
Abstract: We study the computational problem of checking whether a quantified conjunctive query (a first-order sentence built using only conjunction as Boolean connective) is true in a finite poset (a reflexive, antisymmetric, and transitive directed graph). We prove that the problem is already NP-hard on a certain fixed poset, and investigate structural properties of posets yielding fixed-parameter tractability when the problem is parameterized by the query. Our main algorithmic result is that model checking quantified conjunctive queries on posets of bounded width is fixed-parameter tractable (the width of a poset is the maximum size of a subset of pairwise incomparable elements). We complement our algorithmic result by complexity results with respect to classes of finite posets in a hierarchy of natural poset invariants, establishing its tightness in this sense.

August 20, 2014 12:40 AM

Lobsters

StackOverflow

scala -- syntax to indicate any kind of anonymous function, whatsoever

I'd like to be able to pass in callback functions as parameters to a method. Right now, I can pass in a function of signature () => Unit, as in

def doSomething(fn:() => Unit) {
  //... do something
  fn()
}

which is fine, I suppose, but I'd like to be able to pass in any function with any parameters and any return type.

Is there a syntax to do that?
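
As far as I know there is no single type covering functions of every arity, but here is a minimal sketch of two common workarounds: abstract over the result type and take a thunk, or take the computation by name; callers adapt a function of any arity by wrapping the call in a closure.

def doSomething[R](fn: () => R): R = {
  // ... do something
  fn()
}

def doSomethingByName[R](body: => R): R = {
  // ... do something
  body // evaluated here, not at the call site
}

def add(a: Int, b: Int): Int = a + b

doSomething(() => add(1, 2))  // wrap any-arity calls in a thunk
doSomethingByName(add(1, 2))  // by-name: evaluation is deferred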

Thanks

by Walrus the Cat at August 20, 2014 12:23 AM

Planet Clojure

Using Parquet + Protobufs with Spark

I recently had occasion to test out using Parquet with protobufs. I got some simple tests working, and since I had to do a lot of reading to get to this point, I thought I'd do the world a favor and document the process here.

First, some definitions:

Parquet is a column-oriented data storage format for Hadoop from Twitter. Column-oriented storage is really nice for “wide” data, since you can efficiently read just the fields you need.

Protobuf is a data serialization library developed by Google. It lets you efficiently and quickly serialize and deserialize data for transport.

Parquet has low-level support for protobufs, which means that if you happen to have protobuf-serialized data, you can use it with parquet as-is to performantly do partial deserializations and query across that data.

You might do that using Spark, a fast mapreduce engine with some nice ease-of-use features. Spark can even read from Hadoop, which is nice.

I got a lot of information from this post on doing the same with Avro. I happen to be using Clojure, but I hope you'll be able to follow along anyhow (here's a quick syntax primer). If you want to follow along exactly, you can check out the github repo of my sample project.

The first tricky bit was sorting dependencies out. Some highlights from this process:

  • You must exclude the import of javax.servlet:servlet-api from hadoop, and from anything that depends on hadoop. Otherwise, you'll get some issues where this conflicts with spark's version.
  • You need to explicitly include a hadoop-client of your preferred version, otherwise Spark will fall back on some undefined client version (Hadoop 1.something)
  • You need to import a number of separate parquet projects.

Here's what my project.clj (like maven but shorter) ended up looking like:

(defproject sparkquet "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]

                 ; Spark wrapper
                 [yieldbot/flambo "0.3.2"]

                 ; Still need spark & hadoop (must pick specific client)
                 [org.apache.spark/spark-core_2.10 "1.0.1"]
                 [org.apache.hadoop/hadoop-client "2.4.1"
                  :exclusions [javax.servlet/servlet-api]] ; Conflicts with spark's

                 ; Parquet stuff
                 [com.twitter/parquet-common "1.6.0rc1"]
                 [com.twitter/parquet-encoding "1.6.0rc1"]
                 [com.twitter/parquet-column "1.6.0rc1"]
                 [com.twitter/parquet-hadoop "1.6.0rc1"
                  :exclusions [javax.servlet/servlet-api] ]
                 [com.twitter/parquet-protobuf "1.6.0rc1"
                  :exclusions [javax.servlet/servlet-api commons-lang]]

                 ; And, of course, protobufs
                 [com.google.protobuf/protobuf-java "2.5.0"]
                 ]
  :java-source-paths ["src/java"]
  :source-paths ["src/clj"]
  :plugins [[lein-protobuf "0.4.1"]]
  )

For this example, we'll be using this simple protobuf:

package sparkquet;

message MyDocument {
    enum Category {
        THINGS = 1;
        STUFF = 2;
        CRAP = 3;
    }
    required string id = 1;
    required string name = 2;
    required string description = 3;
    required Category category = 4;
    required uint64 created = 5;
}

You'll need to compile that to a class somehow (I used lein-protobuf).

I'll let the code do most of the talking here. I put some helpful comments in for your benefit:

(ns sparkquet.core
  (:require [flambo.conf :as conf]
            [flambo.api :as f])
  (:import
    [parquet.hadoop ParquetOutputFormat ParquetInputFormat]
    [parquet.proto ProtoParquetOutputFormat ProtoParquetInputFormat ProtoWriteSupport ProtoReadSupport]
    org.apache.hadoop.mapreduce.Job
    sparkquet.Document$MyDocument ; Import our protobuf
    sparkquet.Document$MyDocument$Category ; and our enum
    sparkquet.OnlyStuff
    ))



(defn make-protobuf
  "Helper function to make a protobuf from a hashmap. You could
  also use something like clojure-protobuf:
  https://github.com/ninjudd/clojure-protobuf"
  [data]
  (let [builder (Document$MyDocument/newBuilder)]
    (doto builder
      (.setId (:id data))
      (.setName (:name data))
      (.setDescription (:description data))
      (.setCategory (:category data))
      (.setCreated (:created data)))
    (.build builder)))

(defn produce-my-protobufs
  "This function serves as a generic source of protobufs. You can replace
  this with whatever you like. Perhaps you have a .csv file that you can
  open with f/text-file and map to a protobuf? Whatever you like."
  [sc]
  (f/parallelize
    sc
    (map make-protobuf [
      {:id "1" :name "Thing 1" :description "This is a thing"
       :category Document$MyDocument$Category/THINGS :created (System/currentTimeMillis)}
      {:id "2" :name "Thing 2" :description "This is a thing"
       :category Document$MyDocument$Category/THINGS :created (System/currentTimeMillis)}
      {:id "3" :name "Crap 1" :description "This is some crap"
       :category Document$MyDocument$Category/CRAP :created (System/currentTimeMillis)}
      {:id "4" :name "Stuff 1" :description "This is stuff"
       :category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}
      {:id "5" :name "Stuff 2" :description "This is stuff"
       :category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}
      {:id "6" :name "Stuff 3" :description "This is stuff"
       :category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}
      {:id "7" :name "Stuff 4" :description "This is stuff"
       :category Document$MyDocument$Category/STUFF :created (System/currentTimeMillis)}])))


(defn write-protobufs!
  "Use Spark's .saveAsNewAPIHadoopFile to write a your protobufs."
  [rdd job outfilepath]
  (-> rdd
      (f/map-to-pair (f/fn [buf] [nil buf])) ; We need to have a PairRDD
      (.saveAsNewAPIHadoopFile
        outfilepath                 ; Can (probably should) be an hdfs:// url
        Void                        ; We don't have a key class, just some protobufs
        Document$MyDocument         ; Would be a static import + .class in java
        ParquetOutputFormat         ; Use the ParquetOutputFormat
        (.getConfiguration job))))  ; Protobuf things are present on the job config.

(defn read-protobufs
  "Use Spark's .newAPIHadoopFile to load your protobufs"
  [sc job infilepath]
  (->
    (.newAPIHadoopFile sc
      infilepath            ; Or hdfs:// url
      ParquetInputFormat
      Void                  ; Void key (.newAPIHadoopFile always returns (k,v) pair rdds)
      Document$MyDocument   ; Protobuf class for value
      (. job getConfiguration))

    (f/map (f/fn [tup] (._2 tup))))) ; Strip void keys from our pair data.

(defn get-job
  "Important initializers for Parquet Protobuf support. Updates a job's configuration"
  []
  (let [job (Job.)]

    ; You need to set the read support and write support classes
    (ParquetOutputFormat/setWriteSupportClass job ProtoWriteSupport)
    (ParquetInputFormat/setReadSupportClass job ProtoReadSupport)

    ; You also need to tell the writer your protobuf class (reader doesn't need it)
    (ProtoParquetOutputFormat/setProtobufClass job Document$MyDocument)

    job))

(defn -main []
  (let [conf (-> (conf/spark-conf)
                 (conf/master "local[4]") ; Run locally with 4 workers
                 (conf/app-name "protobuftest"))
        sc (f/spark-context conf) ; Create a spark context
        job (get-job) ; Create a Hadoop job to hold configuration 
        path "hdfs://localhost:9000/user/protobuftest2"
        ]

    ; First, we can write our protobufs
    (-> sc
        (produce-my-protobufs) ; Get your Protobuf RDD
        (write-protobufs! job path))

    ; Now, we can read them back
    (-> sc
        (read-protobufs job path)
        (f/collect)
        (first)
        (.getId)
        )

    ; You can also add a Parquet-level filter on your job to massively improve performance
    ; when running queries that can be easily pared down.
    (ParquetInputFormat/setUnboundRecordFilter job OnlyStuff)

    (-> sc
        (read-protobufs job path)
        (f/collect)) ; There should only be the 4 items now.

    ; If you like, you can set a *projection* on your job. This will read a
    ; subset of your fields for efficiency. Here's what you might do if you
    ; just needed names filtered by category:
    (ProtoParquetInputFormat/setRequestedProjection
      job "message MyDocument { required binary name; required binary category; }")

    (-> sc
        (read-protobufs job path)
        (f/map (f/fn [buf] (.getName buf)))
        (f/collect)) ; Remember, the record filter is still applied.

    ))


; Defs for REPL usage
(comment 
  (def conf (-> (conf/spark-conf)
                (conf/master "local[4]") ; Run locally with 4 workers
                (conf/app-name "protobuftest")))
  (def sc (f/spark-context conf))   ; Create a spark context
  (def job (get-job)) ; Create a Hadoop job to hold configuration 
  (def path "hdfs://localhost:9000/user/protobuftest4" )
  )

Lots of stuff going on here, but some of the trickier bits:

  • The saveAsNewAPIHadoopFile and newAPIHadoopFile methods exist only on, and return only, pair RDDs. If you have un-keyed data, as we do, you'll need to pack/unpack your data into tuples before/after saving/loading if you want to pretend you just have a stream of protobufs. Just use Void as the key class when you call the relevant method.

  • You need to use a hadoop Job object to store and pass around configuration.

  • You need to set the support classes for your input and output formats. You'll also need to set the protobuf class using setProtobufClass on your ProtoParquetOutputFormat. You don't need to do this on input.

Filters

You can use setUnboundRecordFilter on ParquetInputFormat to do really efficient filtering on your data as you read it. Since Parquet is aware of the protobuf file's layout, it can check only the fields it needs for the filter, and only deserialize the rest of the protobuf if the filter passes. This is very fast.

To create a filter, you implement the UnboundRecordFilter interface, which has one method, bind. You can use this method to bind the filter you create with the readers passed to the bind method.

I used this single Java helper, which implements the filter. This could also be done in Clojure with a gen-class, but lein works well enough on Java sources that we may as well do it this way.

package sparkquet;

import parquet.column.ColumnReader;
import parquet.filter.RecordFilter;
import parquet.filter.ColumnRecordFilter;
import parquet.filter.UnboundRecordFilter;
import parquet.filter.ColumnPredicates;

import static sparkquet.Document.MyDocument;

public class OnlyStuff implements UnboundRecordFilter {
    public RecordFilter bind(Iterable<ColumnReader> readers){
        return ColumnRecordFilter.column(
            "category",
             ColumnPredicates.equalTo(MyDocument.Category.STUFF)
        ).bind(readers);
    }
}

Projections

Parquet's protobuf support will let you define a projection, which is a way of telling it what fields to read (generally a subset of the fields that exist). Since Parquet is a column store, this means it can efficiently read just this data and leave the rest.

Defining a projection is an unfortunately poorly-documented procedure. To define a projection, you pass a string to ProtoParquetInputFormat/setRequestedProjection. The string should be a set of field definitions in an apparently-undocumented format that resembles protobuf's. Twitter's Parquet announcement blog post has some examples, but unfortunately the examples are for some different version of Parquet, since Parquet no longer supports a string type (use binary instead).

For our example, we use the following to extract name (a string) and category (an enum):

message MyDocument {
  required binary name;
  required binary category;
}

Performance

I didn't (and won't) do formal benchmarks, so I can only give my remembrances from working on about 6GB of wide data.

  • Running a mapreduce job after reading the data from CSV took about 90 seconds
  • Running the same job on protobufs from Parquet took about 130 seconds. The extra 40 seconds was probably deserialization overhead.
  • Adding projection mapping to trim the 45-odd fields down to the 4 I needed dropped the job to about 60 seconds
  • Moving the “primary” filter from a Spark filter task to a Parquet filter reduced the time to just 20 seconds.

So, in this case, Parquet turned out to be a win.

That's it for this post. I hope it helped you figure this thing out.

by Adam Bard at August 20, 2014 12:00 AM

HN Daily

Planet Clojure

Functional-ish Ruby

In a recent Apprentice Blog of the Week, Alex Hill detailed one way that we can apply common Ruby patterns to our Clojure code. I’ve noticed a similar effect while making the opposite transition, too. Having spent a few months writing mostly Clojure and then transitioning back into writing mostly Ruby, it was interesting to see the way my experience with common patterns in Clojure influenced the way I approached writing Ruby.

Specifically, a couple of patterns I really enjoy using in Clojure are with and when macros. Usually, a macro starting with with- means that something is happening around the code you pass to it, and a macro starting with when- means that your code will be executed if a certain condition is met. For instance, we could write a macro called with-timing that times our code by setting a start time, evaluating the code, then logging the difference between the start and end times before returning our return value.

(defmacro with-timing [body]
  `(let [start# (now)
         ret# ~body]
     (logger/log (- (now) start#))
     ret#))

(defn timed-operation [x]
  (with-timing
    (calculate-some-things x)))

We might also write a macro called when-valid, which takes a record that we created from some user input, and then only evaluates our code if the record is valid, otherwise using the generic handler for invalid records.

(defmacro when-valid [record & body]
  `(if (valid? ~record)
     (do ~@body)
     (render-invalid ~record)))

(defn response-for [thing]
  (when-valid thing
    (render-created thing)))

We can implement our timing macro similarly in Ruby, by simply writing a method that takes a block to be called and timed.

def with_timing(&block)
  start_time = Time.now
  return_value = block.call
  log(Time.now - start_time)
  return_value
end

def timed_operation(x)
  with_timing do
    calculate_some_stuff(x)
  end
end

We can also reduce the duplication of a common Rails controller pattern by writing something similar to our when-valid macro in Ruby.

def when_valid(record, &block)
  if record.valid?
    block.call(record)
  else
    flash[:error] = record.errors.messages
    render :new
  end
end

def create
  when_valid(Thing.create(thing_params)) do |thing|
    redirect_to thing
  end
end

Here’s another useful when method for handling HTTP responses in Ruby that Myles Megyesi shared with me.

def when_status(response, responders)
  if responder = responders[response[:status]]
    responder.call(response)
  else
    handle_generically(response)
  end
end

def get_all_the_things
  when_status get("/things"), {
    200 => lambda do |response|
      load_things(response[:body])
    end,
    404 => lambda do
      "Whoops"
    end
  }
end

After transitioning back to writing Ruby after Clojure, I found myself naturally thinking of ways to use blocks and lambdas, among other functional-ish idioms, much more than before writing Clojure—and usually with positive results. It’s interesting to see how learning new languages expands the way you write the languages you already know. Perhaps there are patterns from certain languages you know just waiting to be applied somewhere else.

by kevin buchanan at August 20, 2014 12:00 AM

August 19, 2014

CompsciOverflow

Exponential-size numbers in NP completeness reduction

In the proof of Theorem 4 in [GS'12], the authors reduce an instance of PARTITION to their problem. To this end, they create for each element $a_i$ in the instance of PARTITION a number $2^{c \cdot a_i}$ for a suitable constant $c$, which is later used in the reduction. They argue that the instance remains of polynomial size, since these exponential-size numbers can be encoded implicitly. Nevertheless, can we really work with those numbers in polynomial time? If we add two such numbers $2^{c \cdot a_i}$ and $2^{c \cdot a_j}$ for $a_i \neq a_j$ in the course of the algorithm, the resulting number can no longer be encoded in this way. Is this reduction valid?

[GS'12]: Martin Groß and Martin Skutella, "Generalized maximum flows over time", 2012.

by user1742364 at August 19, 2014 11:16 PM

StackOverflow

ZMQ prevent sending "timedout" messages

I wonder how I can "abort" a message after it has not been sent for some time.

The scenario is simple:

  1. The client connects to the server.
  2. The server goes down.
  3. The client sends a message. There's no issue here, as ZMQ queues the message locally (so the "send" operation succeeds).
  4. Assuming I've set RCVTIMEO, I get the timeout.
  5. After I get the timeout I no longer wish to send the message, but once the server comes up again ZMQ will transmit it. How can I prevent it?

The reason I want to prevent this is that once I got the timeout I responded to my customer with a failure message (e.g. "the request could not be processed due to timeout"), and it would be a real issue if his request were eventually transmitted and processed...

Hope my question is clear... Thx!
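
For what it's worth, a hedged sketch (in Scala) of the "Lazy Pirate" approach from the ZeroMQ guide, assuming the jzmq Java bindings: set ZMQ_LINGER to 0 and close the socket once the timeout fires, so the queued request is discarded rather than delivered when the server comes back, and use a fresh socket for the next request.

import org.zeromq.ZMQ

val ctx = ZMQ.context(1)

def request(payload: Array[Byte]): Option[Array[Byte]] = {
  val socket = ctx.socket(ZMQ.REQ)
  socket.connect("tcp://localhost:5555") // endpoint is an assumption
  socket.setReceiveTimeOut(2000)         // ZMQ_RCVTIMEO, in milliseconds
  socket.send(payload, 0)
  val reply = Option(socket.recv(0))     // recv returns null on timeout
  socket.setLinger(0)                    // discard any pending outbound message
  socket.close()                         // a timed-out request dies here
  reply
}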

by user1797294 at August 19, 2014 10:42 PM

Dave Winer

Readme: About Little Facebook Editor. With the ability to update posts, Facebook becomes a publishing surface for blogging software.

August 19, 2014 10:39 PM

StackOverflow

Clojure/dataset: group-by multiple columns hierarchically?

I would like to implement a function that can group by multiple columns hierarchically. I can illustrate my requirement with the following tentative implementation for two columns:

(defn group-by-two-columns-hierarchically
  [col1 col2 table]
  (let [data-by-col1 ($group-by col1 table)
        data-further-by-col2 (into {} (for [[k v] data-by-col1] [k ($group-by col2 v)]))
        ]
    data-further-by-col2
    ))

I'm seeking help on how to generalize this to an arbitrary number of columns.

(I understand that Incanter supports group-by on multiple columns, but it only produces a flat structure, not a hierarchy: a map from composite keys over the columns to datasets.)

Thanks for your help!

Note: to make Michał's solution work for an Incanter dataset, only a slight modification is needed: replace "group-by" with "incanter.core/$group-by", as illustrated by the following experiment:

(defn group-by*
      "Similar to group-by, but takes a collection of functions and returns
      a hierarchically grouped result."
      [fs coll]
      (if-let [f (first fs)]
        (into {} (map (fn [[k vs]]
                        [k (group-by* (next fs) vs)])
                   (incanter.core/$group-by f coll)))
        coll))

(def table (incanter.core/dataset ["x1" "x2" "x3"]
                                      [[1 2 3]
                                       [1 2 30]
                                       [4 5 6]
                                       [4 5 60]
                                       [7 8 9]
                                       ]))


(group-by* [:x1 :x2] table)
=>
    {{:x1 1} {{:x2 2} 
        | x1 | x2 | x3 |
        |----+----+----|
        |  1 |  2 |  3 |
        |  1 |  2 | 30 |
        }, 
    {:x1 4} {{:x2 5} 
        | x1 | x2 | x3 |
        |----+----+----|
        |  4 |  5 |  6 |
        |  4 |  5 | 60 |
        }, 
    {:x1 7} {{:x2 8} 
        | x1 | x2 | x3 |
        |----+----+----|
        |  7 |  8 |  9 |
        }}

by Yu Shen at August 19, 2014 10:36 PM

File extension in path creates issue in different operating systems

I have a Scala program that can run on both Linux and Windows when they have Scala installed.

The below code is the one my question is based on.

import scala.io.Source
val text = Source.fromFile(pathtofile).getLines

So, assuming the file I want to open is in the same directory the program is executed from, I just have to enter the name of the file as the pathtofile value.

But I have this problem:

When the program runs on Ubuntu the pathtofile must be "SOMETEXTFILE" to be executed (otherwise I get java.io.FileNotFoundException).
But if it runs on Ubuntu or Windows it has to be "SOMETEXTFILE.txt" in order for the program to find the file and read it.

So why is this happening? Is it because of some settings (which means I can change them) or is it because of the operating system itself?

What can be done to make the program functional in all three circumstances?

by DoomProg at August 19, 2014 10:27 PM

Dave Winer

JavaScript function that updates a Facebook post.

August 19, 2014 10:27 PM

StackOverflow

stack overflow on overriding lazy val in scala

I have trimmed my code down to the following. I am confused about why I am getting a stack overflow between the two filter methods (one in my trait and one in my superclass).

object TestingOutTraits {

  val TestHandler = new Object with MySuper with MyTrait {
    override lazy val createdFilter = {
      "second part"
    }
  }

  def main(args: Array[String]) = {
    val result : String = TestHandler.start()
    System.out.println("result="+result)
  }
}

trait MySuper {

  protected def filter: String = {
    "first part to->"
  }

  def start() = {
    filter
  }
}

trait MyTrait { self: MySuper =>

  lazy val createdFilter = {
    "override this"
  }

  protected override def filter: String = {
    self.filter + createdFilter
  }
}

This is Scala 2.9. Any ideas what is going on here?

EDIT: The stack trace (which I should have included in the original post) makes no sense to me either, with the way it jumps back and forth:

at MyTrait$class.filter(TestingOutTraits.scala:34)
at TestingOutTraits$$anon$1.filter(TestingOutTraits.scala:4)
at MyTrait$class.filter(TestingOutTraits.scala:34)
at TestingOutTraits$$anon$1.filter(TestingOutTraits.scala:4)

thanks, Dean
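
For what it's worth, a minimal sketch of the stackable-trait pattern that avoids this mutual recursion: self.filter in the trait resolves to the final overridden filter (itself), so it recurses forever, whereas abstract override with super.filter resolves, by linearization, to the implementation mixed in before it.

trait MySuper {
  protected def filter: String = "first part to->"
  def start(): String = filter
}

trait MyTrait extends MySuper {
  lazy val createdFilter = "override this"

  // abstract override + super.filter = a stackable modification, no self-recursion
  abstract override protected def filter: String = super.filter + createdFilter
}

object TestHandler extends MySuper with MyTrait {
  override lazy val createdFilter = "second part"
}

// TestHandler.start() == "first part to->second part"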

by Dean Hiller at August 19, 2014 10:16 PM

ScalaTest runnning only a single test in a suite

While developing a test suite for a class, I've begun running into situations where ScalaTest would only run a single test, or exclude some of them.

by jco at August 19, 2014 10:10 PM

CompsciOverflow

Proving the (in)tractability of this Nth prime recurrence

As follows from my previous question, I've been playing with the Riemann hypothesis as a matter of recreational mathematics. In the process, I've come to a rather interesting recurrence, and I'm curious as to its name, its reductions, and its tractability towards the solvability of the gap between prime numbers.

Tersely speaking, we can define the gap between each prime number as a recurrence of preceding candidate primes. For example, for our base of $p_0 = 2$, the next prime would be:

$\qquad \displaystyle p_1 = \min \{ x > p_0 \mid -\cos(2\pi(x+1)/p_0) + 1 = 0 \}$

Or, as we see by plotting this out: $p_1 = 3$.

We can repeat the process for $n$ primes by evaluating each candidate prime recurring forward. Suppose we want to get the next prime, $p_2$. Our candidate function becomes:

$\qquad \displaystyle \begin{align} p_2 = \min\{ x > p_1 \mid f_{p_1}(x) + (&(-\cos(2\pi(x+1)/p_1) + 1) \\ \cdot &(-\cos(2\pi(x+2)/p_1) + 1)) = 0\} \end{align}$

Where:

$\qquad \displaystyle f_{p_1}(x) = -\cos(2\pi(x+1)/p_0) + 1$, as above.

It's easy to see that each component function only becomes zero on integer values, and it's equally easy to show how this captures our AND- and XOR-shaped relationships cleverly, by exploiting the properties of addition and multiplication in the context of a system of trigonometric equations.

The recurrence becomes:

$\qquad f_{p_0} = 0\\ \qquad p_0 = 2\\ \qquad \displaystyle f_{p_n}(x) = f_{p_{n-1}}(x) + \prod_{k=2}^{p_{n-1}} (-\cos(2\pi(x+k-1)/p_{n-1}) + 1)\\ \qquad \displaystyle p_n = \min\left\{ x > p_{n-1} \mid f_{p_n}(x) = 0\right\}$

... where the entire problem hinges on whether we can evaluate the $\min$ operator over this function in polynomial time. This is, in effect, a generalization of the Sieve of Eratosthenes.

Working Python code to demonstrate the recurrence:

from math import cos,pi

def cosProduct(x,p):
    """ Handles the cosine product in a handy single function """
    ret = 1.0
    for k in xrange(2,p+1):
        ret *= -cos(2*pi*(x+k-1)/p)+1.0
    return ret

def nthPrime(n):
    """ Generates the nth prime, where n is a zero-based integer """

    # Preconditions: n must be an integer greater than -1
    if not isinstance(n,int) or n < 0:
        raise ValueError("n must be an integer greater than -1")

    # Base case: the 0th prime is 2, 0th function vacuous
    if n == 0:
        return 2,lambda x: 0

    # Get the preceding evaluation
    p_nMinusOne,fn_nMinusOne = nthPrime(n-1)

    # Define the function for the Nth prime
    fn_n = lambda x: fn_nMinusOne(x) + cosProduct(x,p_nMinusOne)

    # Evaluate it (I need a solver here if it's tractable!)
    for k in xrange(p_nMinusOne+1,int(p_nMinusOne**2.718281828)):
        if fn_n(k) == 0:
            p_n = k
            break

    # Return the Nth prime and its function
    return p_n,fn_n

A quick example:

>>> [nthPrime(i)[0] for i in range(20)]
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

The trouble is, I'm now in way over my head, both mathematically and as a computer scientist. Specifically, I am not competent with Fourier analysis, with defining uniform covers, or with the complex plane in general, and I'm worried that this approach is either flat-out wrong or hides a lurking horror of a 3SAT problem that elevates it to NP-completeness.

Thus, I have three questions here:

  1. Given my terse recurrence above, is it possible to deterministically compute or estimate the location of the zeroes in polynomial time and space?
  2. If so or if not, is it hiding any other subproblems that would make a polytime or polyspace solution intractable?
  3. And if by some miracle (1) and (2) hold up, what dynamic programming improvements would you make in satisfying this recurrence, from a high level? Clearly, iteration over the same integers through multiple functions is inelegant and quite wasteful.

by MrGomez at August 19, 2014 10:08 PM

Lobsters

The frozen const string build reason pattern

This pattern generalises to any method where you are currently returning a boolean, but it is "true" for many different reasons and "false" if and only if none of the other reasons apply.

As a concrete example, consider this fairly common task:

Decide whether or not you must rebuild something depending on whether the output exists and / or whether the output is older than any of the inputs it depends on.

Pretty much what “make” does for you.

But suppose you are writing a method "must_build?"; the "obvious" return type is a boolean true/false.

I have settled on a better pattern:

I always return a frozen const string.

Why? Because it enables me to efficiently inform my user why I made the choice I did.

For example (speaking Ruby, but the pattern translates to other languages):

BUILD_REASON_OUTPUT_DOESNT_EXIST = "Building because output doesn't exist".freeze
BUILD_REASON_OLDER_MTIME         = "Rebuilding because output had older mtime than input".freeze
BUILD_REASON_DONT_BUILD          = "Not rebuilding because file is up to date".freeze

def must_build?
  return BUILD_REASON_OUTPUT_DOESNT_EXIST unless FileTest.exists?(output_file)
  return BUILD_REASON_OLDER_MTIME if File.stat(output_file).mtime < File.stat(input_file).mtime
  BUILD_REASON_DONT_BUILD
end

Usage:

build_reason = must_build?

# Use object identity equal?()
if !build_reason.equal?( BUILD_REASON_DONT_BUILD)
    build
end

# Log what you are doing and why....
log build_reason

by JohnCarter at August 19, 2014 10:07 PM

StackOverflow

Java ID3 audio tags lib

I'm looking for a good library to edit .mp3 ID3v(22,23,24) tags (author, title, track and that kind of thing), written in Java or Clojure. Any ideas?

Is there some de facto standard in this field?

I have just looked at this question: I need an ID3 tag reader library for Java - preferably a fast one. But if there is something more, that would be great...

It would be perfect if the library supported not only .mp3 but also .ogg and .wma...

Thanks everybody, and sorry for my English...

by Siscia at August 19, 2014 10:06 PM

Calling scala from R (using jvmr)

I tried to integrate some scala-code into R ... unfortunately, I failed:

myself@mycomputer:~/$ R

library("jvmr")

a <- scalaInterpreter()

Error in .jcall(.jnew("org.ddahl.jvmr.impl.RScalaInterpreter$"), "Lorg/ddahl/jvmr/impl/RScalaInterpreter;", : java.lang.ClassCastException: scala.tools.nsc.settings.MutableSettings$BooleanSetting cannot be cast to scala.reflect.internal.settings.MutableSettings$SettingValue

Any idea? Doing the same with Java (b <- javaInterpreter()) works.

Thanks!

by Daniel Lincke at August 19, 2014 10:05 PM

DragonFly BSD Digest

Moving past ports

Here’s a nice advantage for dports and DragonFly: since it’s an overlay on FreeBSD ports, it’s possible to move to newer or different versions of software without waiting for it to happen in FreeBSD.  For example: there’s a newer version of the xorg intel driver now in dports - newer than what’s in ports.

by Justin Sherrill at August 19, 2014 10:04 PM

DataTau

StackOverflow

Are some data structures more suitable for functional programming than others?

In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that list and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program.

This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array.

So my questions are:

  • Are there really significantly different usage patterns for data structures between functional and imperative programming?
  • If so, is this a problem?
  • What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?

by Rob Lachlan at August 19, 2014 09:59 PM

How can I begin understanding the Milner-Hindley?

(image: the Milner-Hindley type inference rules)

I often see notation like this in Haskell papers, but I have no clue what the hell any of it means. I have no idea what branch of mathematics it's supposed to be.

I recognize the letters of the Greek alphabet of course, and symbols such as "∉" (which usually means that something is not an element of a set).

On the other hand, I've never seen "⊢" before (Wikipedia claims it might mean "partition"). I'm also unfamiliar with the use of the vinculum here. (Usually it denotes a fraction, but that does not appear to be the case here.)

I imagine SO is not a good place to be explaining the entire Milner Hindley algorithm. But if somebody could at least tell me where to start looking to comprehend what this sea of symbols means, that would be helpful. (I'm sure I can't be the only person who's wondering...)
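
For a foothold, one fact worth knowing (this is standard natural-deduction notation, not specific to any one paper): $\Gamma$ is the typing environment, $\vdash$ is the "turnstile" and reads "entails" or "proves", and the vinculum is an inference rule, with the premises above the bar and the conclusion below it. The function application rule, for instance, reads "if $e_0$ has type $\tau \to \tau'$ and $e_1$ has type $\tau$, both under environment $\Gamma$, then the application $e_0\,e_1$ has type $\tau'$":

$$\frac{\Gamma \vdash e_0 : \tau \to \tau' \qquad \Gamma \vdash e_1 : \tau}{\Gamma \vdash e_0\,e_1 : \tau'}$$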

by MathematicalOrchid at August 19, 2014 09:52 PM

Why are so few things @specialized in Scala's standard library?

I've searched for the use of @specialized in the source code of the standard library of Scala 2.8.1. It looks like only a handful of traits and classes use this annotation: Function0, Function1, Function2, Tuple1, Tuple2, Product1, Product2, AbstractFunction0, AbstractFunction1, AbstractFunction2.

None of the collection classes are @specialized. Why not? Would this generate too many classes?

This means that using collection classes with primitive types is very inefficient, because there will be a lot of unnecessary boxing and unboxing going on.

What's the most efficient way to have an immutable list or sequence (with IndexedSeq characteristics) of Ints, avoiding boxing and unboxing?
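
For illustration, a minimal sketch of what the annotation buys you on a type you control (Box here is a made-up toy, not a standard-library class): the compiler emits extra variants of the class with unboxed fields for primitive type arguments.

class Box[@specialized T](val value: T) {
  def get: T = value
}

val b = new Box(42) // resolves to the Int-specialized variant; 42 is not boxed
b.get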

by Jesper at August 19, 2014 09:49 PM

Why when using an overloaded constructor "new" is required?

When attempting to provide an overloaded constructor as below

  case class Neuron(weight: Double, tHold: Double, var isFired: Boolean, inputNeuron: List[Neuron], id: String) {
    def this() = this(0 , 0 , false , List() , "")
  }

  val n1 = Neuron()  

causes a compile-time error: not enough arguments for method apply: (weight: Double, tHold: Double, isFired: Boolean, inputNeuron:

So I need to use:

val n1 = new Neuron()

But if I remove the overloaded "this" reference, I can call the constructor without using "new":

case class Neuron(weight: Double, tHold: Double, var isFired: Boolean, inputNeuron: List[Neuron], id: String)

val n = Neuron(0.0,0.0,false,List(),"")

Why do I need to use the "new" in above scenario and why only when using an overloaded constructor is "new" required?

by blue-sky at August 19, 2014 09:45 PM

Zipping two lists into a single list of objects rather than a list of tuples?

val l1 = List(1, 2, 3)
val l2 = List('a', 'b', 'c')

val tupleList = l1.zip(l2)
// List((1,a), (2,b), (3,c))

val objectList = l1.zip(l2).map(tuple => new MyObject(tuple._1, tuple._2))
// List(MyObject@7e1a1da6, MyObject@5f7f2382, MyObject@407cf41)

After writing this code, I feel like the map(tuple => new MyObject(tuple._1, tuple._2)) part looks a little dirty for two reasons:

  1. I shouldn't be creating the tuples just to discard them in favor of MyObject. Why not just zip l1 and l2 into a list of MyObject in the first place?
  2. tuple._1 and tuple._2 don't have any semantics. It can take some mental gymnastics to make sure I'm giving the Int as the first parameter and the Char as the second.

Is it possible to zip two Lists into my own object?

How can I make the MyObject construction above more semantically clear?
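
Two sketches that address both points, for illustration (MyObject below is a hypothetical stand-in with an Int and a Char field): pattern match inside the map to give the components names, or use Tuple2's zipped view, which maps the lists pairwise without first materializing a list of tuples.

case class MyObject(number: Int, letter: Char) // hypothetical stand-in

val l1 = List(1, 2, 3)
val l2 = List('a', 'b', 'c')

// Name the components via pattern matching:
val viaMatch = l1.zip(l2).map { case (number, letter) => MyObject(number, letter) }

// Skip the intermediate tuple list entirely (Scala 2.10/2.11):
val viaZipped = (l1, l2).zipped.map(MyObject(_, _))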

by Cory Klein at August 19, 2014 09:44 PM

TheoryOverflow

What is the complexity of counting the number of solutions of a P-Space Complete problem? How about higher complexity classes?

I guess it would be called #P-Space, but I have found only one article vaguely mentioning it. How about the counting versions of EXP-TIME-complete, NEXP-complete, and EXP-SPACE-complete problems? Is there any previous work one can cite in regard to this, or any kind of inclusion or exclusion result like Toda's theorem?

by Tayfun Pay at August 19, 2014 09:43 PM

May Boolean circuits be exponentially more concise than Boolean formulae?

Consider a family $(f_n)_{1 \leq n}$ of Boolean functions, where $f_n$ is a function on $n$ variables. Consider for every $n$ the smallest Boolean formula $F_n$ describing $f_n$, and the smallest Boolean circuit $C_n$ describing $f_n$. Say we have $|F_n| = \Omega(g(|C_n|))$ for a certain function $g$.

What is the fastest-growing $g$ for which this is known to be possible, and the slowest-growing $g$ for which it is known to be impossible? (From the comments, it seems like there is still a gap here, but I'm trying to understand which one.)

This is the "simple" version of my question. What I am interested in is a multi-output, probabilistic (=weighted), variant of the problem, defined as follows. It is clear how to extend circuits to be multi-output, and I define a $k$-output formula to be just a $k$-tuple of formulas on the same inputs. I say that the input variables have a certain probability of being true (written in binary and accounted for in the circuit or formula size), each independently from the others, and I look at the probability distribution on the tuple of outputs (forgetting which input is yielding which output, just looking at the distribution on values), given this product distribution on the inputs, in the circuit and formula context. Here again the circuits are certainly more concise than formulae, but how much? Are there some distributions that can be exponentially more concise to represent with circuits, intuitively because of sub-expression reuse?

To give an example for this more elaborate version, consider the following distribution on $n$ outputs:

  • $000 \cdots 00$ with probability $1/2$,
  • $100 \cdots 00$ with probability $1/4$,
  • $110 \cdots 00$ with probability $1/8$,
  • ...
  • $111 \cdots 10$ with probability $1/2^n$,
  • $111 \cdots 11$ with probability $1/2^n$.

There is a multi-output probabilistic circuit of size $O(n)$ which generates this (it reuses the draw of the $i$-th bit to draw the $(i+1)$-th). By contrast, the straightforward Boolean function encoding of this is quadratic; I can't see how you could make it shorter, yet I cannot prove that it is impossible...

by a3nm at August 19, 2014 09:39 PM

/r/netsec

StackOverflow

Wiremock with Scalatra

I followed the example and attempted to use WireMock to mock an authentication service used by a Scalatra app. However, I can't get WireMock and Scalatra to work together. The idea is to provide a mock response for the authentication request sent by Scentry to another auth provider. How can I combine a typical Scalatra test:

def unauthenticated = get("/secured") {
  status must_== 400
}

with WireMock stub for:

stubFor(WireMock.post(urlMatching("/some/auth/service*"))
           .willReturn(
             aResponse()
               .withStatus(200)))
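
One hedged sketch of the glue: run WireMock as a standalone server on a fixed port, register the stub against it, and point the app's auth-provider URL at localhost (that the URL is configurable is an assumption about your app). The Scalatra get(...) assertions then run unchanged.

import com.github.tomakehurst.wiremock.WireMockServer
import com.github.tomakehurst.wiremock.client.WireMock
import com.github.tomakehurst.wiremock.client.WireMock._

val wireMockServer = new WireMockServer(8089) // port chosen arbitrarily
wireMockServer.start()
WireMock.configureFor("localhost", 8089)

stubFor(post(urlMatching("/some/auth/service.*"))
  .willReturn(aResponse().withStatus(200)))

// ... run the Scalatra test here, with the app's auth URL pointed
// at http://localhost:8089/some/auth/service ...

wireMockServer.stop()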

by Petteri Hietavirta at August 19, 2014 09:23 PM

What evidence is there that Clojure Zippers would benefit from being expressed as comonads?

In this presentation [2005] we read at slide 32:

The zipper datatype hides a comonad. This is exactly the comonad one needs to structure attribute evaluation.

So it seems you can express Zippers in terms of Comonads. This even seems possible in Scala.

Looking at the zipper source, we see zippers expressed as Clojure metadata.

My question is: what evidence is there that Clojure Zippers would benefit from being expressed as comonads?

Eric suggests the benefit is

so we need to get all the possible zippers over the original group!

by hawkeye at August 19, 2014 09:23 PM

/r/compsci

Lobsters

DataTau

StackOverflow

How can I get automatic dependency resolution in my scala scripts?

I'm just learning Scala, coming out of the Groovy/Java world. My first script requires a 3rd-party library, TagSoup, for XML/HTML parsing, and I'm loath to have to add it the old-school way: that is, downloading TagSoup from its developer website, and then adding it to the class path.

Is there a way to resolve third party libraries in my scala scripts? I'm thinking Ivy, I'm thinking Grape.

Ideas?


The answer that worked best for me was to install n8han's conscript:

curl https://raw.github.com/n8han/conscript/master/setup.sh | sh
cs harrah/xsbt --branch v0.11.0

Then I could import TagSoup fairly easily in example.scala:

  /***
      libraryDependencies ++= Seq(
          "org.ccil.cowan.tagsoup" % "tagsoup" % "1.2.1"
      )
  */

  def getLocation(address:String) = {
      ...
  }

And run using scalas:

  scalas example.scala

Thanks for the help!

by dsummersl at August 19, 2014 08:56 PM

Planet Clojure

Getting started in Clojure…

Getting started in Clojure with IntelliJ, Cursive, and Gorilla

part 1: setup

part 2: workflow

From Part 1:

This video goes through, step-by-step, how to setup a productive Clojure development environment from scratch. This part looks at getting the software installed and running. The second part to this video (vimeo.com/103812557) then looks at the sort of workflow you could use with this environment.

If you follow through both videos you’ll end up with Leiningen, IntelliJ, Cursive Clojure and Gorilla REPL all configured to work together :-)

Some links:

leiningen.org
jetbrains.com/idea/
cursiveclojure.com
gorilla-repl.org

Nothing surprising, but useful if you are just starting out.

by Patrick Durusau at August 19, 2014 08:50 PM

StackOverflow

Clojure, implement range, why this solution doesn't work

I want to implement Clojure's range function. Why doesn't the following code work?

(fn [low high]
  (loop[low low
        ret []]
    (if(= low high)
    (list ret)
    (recur (inc low) (concat ret [low])))))

by NooB3588 at August 19, 2014 08:49 PM

error: overloaded method value get with alternatives in getting a point on an image

I am using this:

 var res = new Array[Byte](1)
 var u=image.get(p.x,p.y,res)

where:

val image= new Mat
var p=new Point (3,32)

and I am getting an error that says "overloaded method value get with alternatives". I can't figure out the problem. Please help me with that!

Thanks!
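
A hedged guess at the cause, with a sketch: OpenCV's Point stores x and y as Double, while the Mat.get overloads take Int row/column indices, so no overload matches the call. Converting explicitly resolves the ambiguity (note that Mat.get takes row before column, i.e. y before x):

import org.opencv.core.{Mat, Point}

val image = new Mat
val p = new Point(3, 32)
val res = new Array[Byte](1)

// Point.x and Point.y are Doubles; Mat.get wants Int indices (row, col)
val u = image.get(p.y.toInt, p.x.toInt, res)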

by Rubbic at August 19, 2014 08:48 PM

CompsciOverflow

Proving Quicksort has a worst case of O(n²)

I am sorting the following list of numbers, which is in descending order. I am using QuickSort, and it is known that the worst-case running time of QuickSort is $O(n^2)$.

import java.io.File;
import java.io.FileNotFoundException;
import java.util.*;



public class QuickSort 
{
    static int pivotversion;
    static int datacomparison=0;
    static int datamovement=0;

    public static void main(String args[])
    {
        Vector<Integer> container = new Vector<Integer>();

        String userinput = "data2.txt";
        Scanner myScanner = new Scanner("foo"); // variable used to read file
        Scanner scan = new Scanner(System.in);


        System.out.println("Enter 1 to set pivot to be first element");
        System.out.println("Enter 2 to set pivot to be median of first , middle , last element of the list");
        System.out.println("Your choice : ");
        //pivotversion = scan.nextInt();


        try
        {

            File inputfile = new File("C:\\Users\\8382c\\workspace\\AdvanceAlgorithmA3_Quicksort\\src\\" + userinput);
             myScanner = new Scanner(inputfile);

        }
        catch(FileNotFoundException e)
        {
            System.out.println("File cant be found");
        }


         String line = myScanner.nextLine(); //read 1st line which contains the number of numbers to be sorted

         while(myScanner.hasNext())
         {
             container.add(myScanner.nextInt());
         }


        System.out.println(line);



        quickSort(container,0,container.size()-1);

        for (int i =0;i<container.size();i++)
        {
            System.out.println(container.get(i));
        }

        System.out.println("=========================");
        System.out.println(datamovement);
        System.out.println(datacomparison);





    }


    public static int partition(Vector<Integer> container, int left, int right)
    {
          int i = left, j = right;
          int tmp;

          int pivot= 0 ;
          pivot = container.get(left);

          boolean maxarraybound = false;




          i++;

          while (i <= j) 
          {
                while ( container.get(i) < pivot && maxarraybound == false)
                {
                      if ( i == container.size()-1 )
                      {
                          maxarraybound = true;
                      }
                      else
                      {
                          i++;
                          datacomparison++;
                      }
                }
                while ( container.get(j) > pivot)
                {
                      j--;
                      datacomparison++;
                }
                if (i <= j) 
                {
                      tmp =  container.get(i);// considered data movement??

                      container.set(i, container.get(j));
                      datamovement++;

                      container.set(j, tmp);
                      datamovement++;

                      i++;
                      j--;
                }
          };

          tmp = container.get(left);

          container.set(left, container.get(i-1));
          datamovement++;


          container.set(i-1, tmp);
          datamovement++;


          return i-1;





    }

    public static void quickSort(Vector<Integer> container, int left, int right) 
    {
          int index = partition(container, left, right);
          if (left < index - 1)
                quickSort(container, left, index - 1);
          if (index+1 < right)
                quickSort(container, index+1, right);



    }


}

I am trying to prove to myself that the worst-case running time of QuickSort is indeed $O(n^2)$ by summing up the total number of data comparisons and data movements in the algorithm.

In my current situation, I have an input of 10000 numbers.

I would expect the total sum of data comparisons and data movements to be around 100 million.

I am only getting a total sum of data comparisons and data movements of around 26 million.

I am sure I have missed some "data movement" and "data comparison" counts in my algorithm. Can someone point out where, as I have no clue?
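
As a back-of-the-envelope check (a sketch of the count, not an audit of this exact implementation): classic quicksort with a first-element pivot on already-sorted input scans the whole remaining range once per partition, so the comparison count alone is about

$$\sum_{k=1}^{n-1} k = \frac{n(n-1)}{2} \approx 5 \times 10^{7} \quad \text{for } n = 10^4,$$

i.e. roughly 50 million rather than 100 million; $O(n^2)$ bounds the growth rate but hides the constant factor of $1/2$. Note also that the instrumentation above increments datacomparison only when a scan loop's test succeeds, so the final failing comparison that terminates each inner while loop (and the comparisons made on the maxarraybound branch) are never counted.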

by Computernerd at August 19, 2014 08:39 PM

Planet Theory

Reviewing Scales

I'm just about finished reviewing for CoNEXT (Conference on Emerging Networking Experiments and Technologies), and am starting reviewing for ITCS (Innovations in Theoretical Computer Science).  One notable variation in the process is the choice of the score scale.  For CoNEXT, the program chairs chose a 2-value scale: accept or reject.  For ITCS, the program chair chose a 9-point scale.  Scoring from 1-9 or 1-10 is not uncommon for theory conferences.

I dislike both approaches, but, in the end, believe that it makes minimal difference, so who am I to complain?

The accept-or-reject choice is a bit too stark.  It hides whether you generously thought this paper should possibly get in if there's room, or whether you really are a champion for the paper.  A not-too-unusual situation is a paper gets (at least initially) a majority of accept votes -- but nobody really likes the paper, or has confronted its various flaws. (Or, of course, something similar the other way around, although I believe the first case is more common, as it feels better to accept a close call than to reject one.)  Fortunately, I think the chairs have been doing an excellent job (at least on the papers I reviewed) encouraging discussion on such papers as needed to get us to the right place.  (Apparently, the chairs aren't just looking at the scores, but reading the reviews!)  As long as there's actual discussion, I think the problems of the 2-score solution can be mitigated.

The 9 point scale is a bit too diffuse.  This is pretty clear.  On the description of score semantics we were given, I see:

"1-3 : Strong rejects".

I'm not sure why we need 3 different numbers to represent a strong reject (strong reject, really strong reject, really really strong reject), but there you have it.  The boundaries between "weak reject", "a borderline case" and "weak accept" (scores 4-6) also seem vague, and could easily lead to different people using different interpretations.  Still, we'll see how it goes.  As long as there's good discussion, I think it will all work out here as well.

I prefer the Goldilocks scale of 5 values.  I further think "non-linear" scoring is more informative:  something like top 5%, top 10%, top 25%, top 50%, bottom 50%, but even scores corresponding to strong accept/weak accept/neutral/weak reject/strong reject seem more useful when trying to make decisions.

Finally, as I have to say whenever I'm reviewing, HotCRP is still the best conference management software (at least for me as a reviewer).

by Michael Mitzenmacher (noreply@blogger.com) at August 19, 2014 08:35 PM

/r/compsci

AWS

Amazon SNS Update - Large Topics and MPNS Authenticated Mode

Amazon Simple Notification Service (SNS) is a fast and flexible push messaging service. You can easily send messages to Apple, Google, Fire OS and Windows devices, including Android devices in China (via Baidu Cloud Push).

Today we are enhancing SNS with support for large topics (more than 10,000 subscribers) and authenticated delivery to MPNS (Microsoft Push Notification Service).

Large Topics
SNS offers two publish modes. First, you can push messages directly to specific mobile devices. Second, you can create an SNS topic, provide your customers with a mechanism to allow them to subscribe to the topic, and then publish messages to the topic with a single API call. This mode is great for broadcasting breaking news, announcing flash deals, and announcing in-game events or new features. You can combine customers from different platforms in the same topic and you can send a specific payload to each platform (for example, one for iOS and another for Android), again in a single call. Suppose you have created the following topic:

With the ARN for the topic (arn:aws:sns:us-west-2:xxxxxxxxxxxx:amazon-sns) in hand, here's how you publish a message to all of the subscribers:

$result = $client->publish(array(
    'TopicArn' => 'arn:aws:sns:us-west-2:xxxxxxxxxxxx:amazon-sns',
    // Message is required
    'Message' => 'Hello Subscribers',
    'Subject' => 'Hello'
));

Today we are lifting the limit of 10,000 subscriptions per SNS topic; you can now create as many as you need and no longer need to partition large subscription lists across multiple topics. This has been a frequent request from AWS customers that use SNS to build news and media sharing applications.

There is an administrative limit of 10 million subscriptions per topic, but we'll happily raise it if you expect to have more subscribers for a single topic. Fill out the Contact Us form, select SNS, and we'll take good care of you!

Authenticated Delivery to MPNS
Microsoft Push Notification Service (MPNS) is the push notification relay service for Windows Phone devices prior to Windows 8.1. SNS now supports authenticated delivery to MPNS. In this mode, MPNS does not enforce any limitations on the number of notifications that can be sent to a channel in any given day (per the documentation on Windows Phone Push Mode, there's a daily limit of 500 unauthenticated push notifications per channel).

If you require this functionality for devices that run Windows 8.1 and above, please consider using Amazon SNS for Windows Notification Service (WNS).

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at August 19, 2014 08:27 PM

StackOverflow

Scala simple funsuite unit test with akka actors fails

Hey, I want to build some small FunSuite tests for an Akka actor application, but after combining TestKit with FunSuiteLike I can't run the test anymore. Does somebody have an idea why this is happening? Are TestKit and FunSuite not compatible?

import org.scalatest.{FunSuiteLike, BeforeAndAfterAll}
import akka.testkit.{ImplicitSender, TestKit, TestActorRef}
import akka.actor.{ActorSystem}

class ActorSynchroTest(_system: ActorSystem) extends TestKit(_system) with FunSuiteLike with BeforeAndAfterAll with ImplicitSender{


  val actorRef =  TestActorRef(new EbTreeDatabase[Int])
  val actor = actorRef.underlyingActor

  //override def afterAll = TestKit.shutdownActorSystem( system )

  test("EbTreeDatabase InsertNewObject is invoked"){
    val idList = List(1024L,1025L,1026L,1032L,1033L,1045L,1312L,1800L)
      idList.
      foreach(x => actorRef ! EbTreeDataObject[Int](x,x,1,None,null))
    var cursor:Long = actor.uIdTree.firstKey()
    var actorItems:List[Long] = List(cursor)

    while(cursor!=actor.uIdTree.lastKey()){
      cursor = actor.uIdTree.next(cursor)
      cursor :: actorItems
    }
    assert(idList.diff(actorItems)  == List())
  }
}

The IntelliJ IDEA test environment says:

One or more requested classes are not Suites: model.ActorSynchroTest
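
For reference, a sketch of the standard akka-testkit remedy: test runners instantiate suites reflectively through a no-argument constructor, and this class only has one taking an ActorSystem, which is why it is reported as not a Suite. Adding a no-arg auxiliary constructor (and shutting the system down afterwards) makes the suite runnable:

import akka.actor.ActorSystem
import akka.testkit.{ImplicitSender, TestKit}
import org.scalatest.{BeforeAndAfterAll, FunSuiteLike}

class ActorSynchroTest(_system: ActorSystem) extends TestKit(_system)
    with FunSuiteLike with BeforeAndAfterAll with ImplicitSender {

  // The no-arg constructor the test runner needs:
  def this() = this(ActorSystem("ActorSynchroTest"))

  override def afterAll(): Unit = TestKit.shutdownActorSystem(system)

  test("the suite can now be instantiated by the runner") {
    assert(system != null)
  }
}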

by prototyp at August 19, 2014 08:09 PM

how can I make a ComboBox with JavaFX using Scala?

and populate the combo box as well.

doing this (the way it is done in Java):

ObservableList<String> options = 
FXCollections.observableArrayList(
    "Option 1",
    "Option 2",
    "Option 3"
 );
final ComboBox comboBox = new ComboBox(options);

produces an error.
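
A minimal sketch of the direct transliteration, assuming plain JavaFX rather than ScalaFX; the usual stumbling block is that Scala has no raw types, so ComboBox needs its type parameter written out:

import javafx.collections.FXCollections
import javafx.scene.control.ComboBox

val options = FXCollections.observableArrayList("Option 1", "Option 2", "Option 3")
val comboBox = new ComboBox[String](options) // explicit type parameter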

by user2723261 at August 19, 2014 08:09 PM

Implicit class applicable to all Traversable subclasses including Array

I've run into a problem trying to create an implicit class applicable to all Traversable subclasses including Array. I tried the following simple example in both Scala 2.11.1 and 2.10.4:

implicit class PrintMe[T](a: Traversable[T]) {
  def printme = for (b <- a) print(b)
}

As far as I understand, this should allow for an implicit conversion to PrintMe so that printme can be called on any Traversable, including List and Array. E.g.:

scala> List(1,2,3).printme
123
// Great, works as I expected!

scala> Array(1,2,3).printme
<console>:23: error: value printme is not a member of Array[Int]
              Array(1,2,3).printme
// Seems like for an Array it doesn't!

scala> new PrintMe(Array(1,2,3)).printme
123
// Yet explicitly building a PrintMe from an Array works 

What's going on here? Why does the implicit conversion work for a List and not an Array?

I understand there has been some trickery adapting Java arrays, but looking at the picture below from http://docs.scala-lang.org/overviews/collections/overview.html it certainly seems like Array is meant to behave like a subclass of Traversable.

(image: the scala.collection.immutable branch of the Scala collections hierarchy)
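
A plausible explanation, hedged: Array is a bare Java array, not a Traversable; it only behaves like one through the built-in Array => WrappedArray implicit conversion, and the compiler will not chain two implicit conversions (Array => WrappedArray => PrintMe) to find printme. That is also why the explicit new PrintMe(Array(1,2,3)) works: only a single conversion is needed there. One workaround sketch is a second implicit class for Array that delegates:

implicit class PrintMe[T](a: Traversable[T]) {
  def printme: Unit = for (b <- a) print(b)
}

// Arrays get their own entry point; inside it, the single
// Array => WrappedArray conversion is allowed to fire.
implicit class PrintMeArray[T](a: Array[T]) {
  def printme: Unit = new PrintMe(a).printme
}

Array(1, 2, 3).printme // prints 123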

by agrinh at August 19, 2014 07:58 PM

What am I doing wrong around adding an additional case class constructor which first transforms it parameters?

So, I had a very simple case class:

case class StreetSecondary1(designator: String, value: Option[String])

This was working just fine. However, I kept having places where I was parsing a single string into a tuple which was then used to build an instance of this case class:

def parse1(values: String): StreetSecondary1 = {
  val index = values.indexOf(" ")
  StreetSecondary1.tupled(
    if (index > -1)
      //clip off string prior to space as designator and optionally use string after space as value
      (values.take(index), if (values.size > index + 1) Some(values.drop(index + 1)) else None)
    else
      //no space, so only designator could have been provided
      (values, None)
  )
}

So, I wanted to refactor all the different places with this same parsing code into the case class like this (but this won't compile):

case class StreetSecondary2(designator: String, value: Option[String]) {
  def this(values: String) = this.tupled(parse(values))
  private def parse(values: String): (String, Option[String]) = {
    val index = values.indexOf(" ")
    if (index > -1)
      //clip off string prior to space as designator and optionally use string after space as value
      (values.take(index), if (values.size > index + 1) Some(values.drop(index + 1)) else None)
    else
      //no space, so only designator could have been provided
      (values, None)
  }
}

It appears there is some chicken/egg problem around adding a case class constructor AND having a function that takes the parameter(s) and transforms them prior to calling the actual constructor. I have fiddled with this (going on many tangents). I then resorted to trying the companion object pathway:

object StreetSecondary3 {
  private def parse(values: String): (String, Option[String]) = {
    val index = values.indexOf(" ")
    if (index > -1)
      //clip off string prior to space as designator and optionally use string after space as value
      (values.take(index), if (values.size > index + 1) Some(values.drop(index + 1)) else None)
    else
      //no space, so only designator could have been provided
      (values, None)
  }
  def apply(values: String): StreetSecondary3 = {
    val tuple: (String, Option[String]) = parse(values)
    StreetSecondary3(tuple._1, tuple._2)  //Why doesn't .tupled method work here?
  }
}
case class StreetSecondary3(designator: String, value: Option[String])

What am I doing wrong in StreetSecondary2? Is there some way to get it to work? Surely there has to be a better, simpler way that does not require all the companion object boilerplate present in StreetSecondary3. Is there?

Thank you for any feedback and guidance you can give me around this.


UPDATE

Whew! Lots of lessons learned already.

A) the StreetSecondary2 parse method does not use the "this" implicit context in the case class instance being constructed (i.e. it is a static method in Java terms), so it works better moved to the companion object.

B) Unfortunately, when composing an explicit companion object for a case class, the compiler-provided "implicit companion object" is lost. The tupled method (and others, I am guessing; I sure wish there were a way to keep it and augment it as opposed to blowing it away) was contained in the compiler-provided "implicit companion object" and is not provided in the new explicit companion object. This was fixed by adding "extends ((String, Option[String]) => StreetSecondary)" to the explicit companion object.

C) Here's an updated solution (which also incorporates a more terse version of the parse function with a nod of thanks to Gabriele Petronella):

object StreetSecondary4 extends ((String, Option[String]) => StreetSecondary4) {
  private def parseToTuple(values: String): (String, Option[String]) = {
    val (designator, value) = values.span(_ != ' ')
    (designator, Option(value.trim).filter(_.nonEmpty))
  }
  def apply(values: String): StreetSecondary4 =
    StreetSecondary4.tupled(parseToTuple(values))
}
case class StreetSecondary4(designator: String, value: Option[String])

This is barely better in terms of boilerplate than the StreetSecondary3 version. However, it now makes quite a bit more sense due to so much implicit context having now been made explicit.
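
An alternative sketch that sidesteps tupled (and the FunctionN parent) entirely: overload apply in the explicit companion and call the generated two-argument apply directly (StreetSecondary5 is just an illustrative name):

object StreetSecondary5 {
  def apply(values: String): StreetSecondary5 = {
    val (designator, value) = values.span(_ != ' ')
    StreetSecondary5(designator, Option(value.trim).filter(_.nonEmpty))
  }
}
case class StreetSecondary5(designator: String, value: Option[String])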

by chaotic3quilibrium at August 19, 2014 07:55 PM

CompsciOverflow

Solve Recurrence Equation Problem [duplicate]

I asked this question before and someone marked it as a duplicate. I don't know why people mark a question as a duplicate instead of answering it; please be kind and let others learn.

The link to my question is here: Solve Recurrence Equation Problem

It even got 3 upvotes!

How do we calculate the answer to the following recurrence?

$$T(n)=4T\left(\frac{\sqrt{n}}{3}\right)+ \log^2n\,.$$

Any nice solution would be highly appreciated.

My solution is to substitute $n=3^m$ and set $F(m)=T(3^m)$, giving $$T(3^m)=4T\left(\frac{3^{m/2}}{3}\right)+\log^2 3^m \implies F(m)=4F\left(\frac{m}{2}-1\right)+\Theta(m^2)=O(m^2\log m)=O(\log^2 n\,\log\log n)\,,$$ since $m=\Theta(\log n)$.

by Mina Simin at August 19, 2014 07:45 PM

StackOverflow

Utility methods for operating on custom scala class

I'd like to define an operator that works on a custom class in Scala, similar to Scala's Array utility methods, such as Array concatenation:

val (a, b) = (new Array[Int](4), new Array[Int](3))
val c = Array.concat(a, b)

I'd like to define an operator vaguely as follows:

class MyClass {
  def op():MyClass = {
     // for instance,
     return new MyClass();
  }
}

to be invoked like val x = MyClass.op()

To provide a more concrete example, suppose that MyClass is an extension of MyAbstractClass

// Provided as a utility for the more relevant code below. 
def randomBoolean(): Boolean = Math.round(Math.random()).toInt == 1

abstract class MyAbstractClass[T: scala.reflect.ClassTag](size: Int) {
  val stuff = new Array[T](size) // a ClassTag is needed to instantiate Array[T]
  def randomClassStuff(): Array[T]
}

class MyClass(size: Int) extends MyAbstractClass[Boolean](size) {
  def randomClassStuff(): Array[Boolean] =
    Array.fill(size)(randomBoolean())
}

I realize that I could define an object called MyClass with a function called randomClassStuff defined in there, but I'd rather utilize abstract classes to require that extensions of the abstract class provide a method that creates random stuff specific to that class.
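
For reference, a rough sketch of combining both: a companion object supplies the MyClass.op() entry point while the abstract class still forces subclasses to implement randomClassStuff (the size 4 below is an arbitrary example value):

object MyClass {
  def op(): MyClass = new MyClass(4) // 4 is an arbitrary example size
}

val x = MyClass.op() // invoked exactly as intended above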

by sinwav at August 19, 2014 07:43 PM

What are the uses for the bindable and callable pattern?

I've seen this little snippet of code floating around before and never really taken the time to wrap my head around what it does.

var bind = Function.bind;
var call = Function.call;

var bindable = bind.bind(bind);
var callable = bindable(call);

I understand in concept and in practice what .bind and .call do, but what is the benefit, advantage or practical use of creating the bindable and callable functions above?

Below is a contextual example of a use case for bindable.

var bound = bindable(db.find, db).apply(null, arguments);
var findable = bindable(db.find, db);
var bound = findable.apply(null, arguments);
var bound = findable(1, 2, 3);

What can this pattern be used for?

by Dan Prince at August 19, 2014 07:41 PM

Treating an SQL ResultSet like a Scala Stream

When I query a database and receive a (forward-only, read-only) ResultSet back, the ResultSet acts like a list of database rows.

I am trying to find some way to treat this ResultSet like a Scala Stream. This will allow such operations as filter, map, etc., while not consuming large amounts of RAM.

I implemented a tail-recursive method to extract the individual items, but this requires that all items be in memory at the same time, a problem if the ResultSet is very large:

// Iterate through the result set and gather all of the String values into a list
// then return that list
@tailrec
def loop(resultSet: ResultSet,
         accumulator: List[String] = List()): List[String] = {
  if (!resultSet.next) accumulator.reverse
  else {
    val value = resultSet.getString(1)
    loop(resultSet, value +: accumulator)
  }
}
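
A sketch of one constant-memory alternative: wrap the ResultSet in an Iterator, which already supports lazy filter and map (calling .toStream on it would memoize every row and reintroduce the memory problem):

import java.sql.ResultSet

// Each hasNext/next cycle advances the cursor exactly once; rows are
// never accumulated unless the caller chooses to.
def rows(resultSet: ResultSet): Iterator[String] =
  Iterator.continually(resultSet).takeWhile(_.next()).map(_.getString(1))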

by Ralph at August 19, 2014 07:40 PM

Understanding Akka, is it more than a long running process management service?

In web applications you often need to run certain tasks offline or async, i.e. not on the same thread being used to service web requests.

Scenario

An e-commerce site connecting to 3rd-party APIs to validate and charge a credit card and return a response.

Is this something you would do using Akka?

Why would one choose Akka over just creating a regular long-running Java daemon that polls some sort of queue?

by public static at August 19, 2014 07:35 PM

Gatling - Looping through JSON array

I have a block of code which needs to loop through a JSON array obtained from the response of a REST service. (Full gist available here.)

.exec(http("Request_1")
  .post("/endPoint")
  .headers(headers_1)
  .body(StringBody("""REQUEST_BODY""")).asJSON
  .check(jsonPath("$.result").is("SUCCESS"))
  .check(jsonPath("$.data[*]").findAll.saveAs("pList")))
.exec(session => {
  println(session)
  session
})
.foreach("${pList}", "player"){
 exec(session => {
    val playerId = JsonPath.query("$.playerId", "${player}")
    session.set("playerId", playerId)
  })
 .exec(http("Request_1")
    .post("/endPoint")
    .headers(headers_1)
    .body(StringBody("""{"playerId":"${playerId}"}""")).asJSON
    .check(jsonPath("$.result").is("SUCCESS")))

}

The response format of the first request was

{
  "result": "SUCCESS",
  "data": [
    {
      "playerId": 2
    },
    {
      "playerId": 3
    },
    {
      "playerId": 4
    }
  ]
}

And playerId shows up in the session as

pList -> Vector({playerId=2, score=200}, {playerId=3, score=200}

In the second request, I am seeing that the body is

{"playerId":"Right(empty iterator)}

Expected: 3 requests with bodies as

 {"playerId":1}
 {"playerId":2}
 {"playerId":3}

I can loop over the resulting array successfully if I save just the playerIds:

.check(jsonPath("$.data[*].playerId").findAll.saveAs("pList")))
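
Building on that, a hedged sketch of the whole loop once only the ids are saved, so each element of pList is a plain value that foreach can inject directly (Request_2 is an illustrative name):

.exec(http("Request_1")
  .post("/endPoint")
  .headers(headers_1)
  .body(StringBody("""REQUEST_BODY""")).asJSON
  .check(jsonPath("$.result").is("SUCCESS"))
  .check(jsonPath("$.data[*].playerId").findAll.saveAs("pList")))
.foreach("${pList}", "playerId") {
  exec(http("Request_2")
    .post("/endPoint")
    .headers(headers_1)
    .body(StringBody("""{"playerId":"${playerId}"}""")).asJSON
    .check(jsonPath("$.result").is("SUCCESS")))
}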

by NeilGhosh at August 19, 2014 07:34 PM

CompsciOverflow

Time complexity of naive look-and-say sequence algorithm

I've been looking at the look-and-say sequence for the past few days, and I've been wondering what the time complexity of a naive algorithm to print the nth element is. Here is an example in Python:

def look_and_say(n):
  prev = '1'
  for i in range(n - 1):
    count = 1
    nxt = ''  # renamed from `next` to avoid shadowing the builtin
    prchar = prev[0]
    for char in prev[1:]:
      if char == prchar:
        count += 1
      else:
        nxt += str(count) + prchar
        prchar = char
        count = 1
    nxt += str(count) + prchar
    prev = nxt
  print prev

The problem is that I am not sure how to handle the varying length of each element. Any help is appreciated.
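
For what it's worth, a hedged back-of-the-envelope bound: by Conway's analysis of the look-and-say sequence, the length $L_i$ of the $i$-th term grows geometrically, $L_{i+1} \approx \lambda L_i$ with $\lambda \approx 1.303577$ (Conway's constant), and iteration $i$ does $\Theta(L_i)$ work, so the naive algorithm costs $$\sum_{i=1}^{n}\Theta(L_i)=\Theta(\lambda^{n})\,,$$ i.e. time exponential in $n$ (ignoring any extra cost of repeated string concatenation).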

by Veritas at August 19, 2014 07:31 PM

Dave Winer

StackOverflow

Throttling messages from RabbitMQ using RxJava

I'm using RxJava to pull out values from RabbitMQ. Here's the code:

val amqp = new RabbitQueue("queueName")
val obs = Observable[String](subscr => while (true) subscr onNext amqp.next)
obs subscribe (
  s => println(s"String from rabbitmq: $s"), 
  error => amqp.connection.close
)

It works fine, but now I have a requirement that a value should be pulled at most once per second while all values are preserved (so debounce won't do, since it drops intermediate values).

It should work like this: amqp.next blocks the thread so we're waiting... (RabbitMQ has two messages in the queue) pull the 1st message... wait 1 second... pull the 2nd message... wait indefinitely for the next message...

How can I achieve this using rx methods?
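
A sketch using zip against a timer, assuming the rx.lang.scala bindings: zip pairs each value with a tick from an interval Observable, so nothing is dropped and at most one element is emitted per second (note that zip buffers the faster side without bound):

import rx.lang.scala.Observable
import scala.concurrent.duration._

// Pair each value with a tick, then discard the tick; the interval
// acts purely as a pacing source.
val throttled = obs.zip(Observable.interval(1.second)).map(_._1)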

by Anton at August 19, 2014 07:20 PM

Lobsters

Fefe

You've surely heard of the IT-Sicherheitsgesetz, ...

You've surely heard of the IT-Sicherheitsgesetz (IT Security Act) that our interior minister is currently driving through the village with great fanfare. And you'll surely have thought: hmm, de Maizière on the subject of hacking is a story full of misunderstandings!1!! This can only end in tears, right? So where's the catch?

Well, dear readers, I am happy to clear that up: the money from the new IT Security Act flows to the "Verfassungsschutz". Yes, the Verfassungsschutz! The agency that, like hardly any other, stands for the opposite of security! The one that either founded the Ku Klux Klan Germany, the NSU and other anti-constitutional organizations itself and/or kept them alive for years with informant money; of all agencies, that one is not being shut down but is getting new posts on top! And then they go and call it a Security Act!

If it weren't our tax money, it would almost make a nice stunt for the Monty Python farewell tour.

Oh, one more thing. In case anyone thought: hey, the Verfassungsschutz is terrible, but at least they haven't been caught with trojans in hand yet, like the BKA! I have bad news for you:

With its now-published draft bill for an IT Security Act, the Federal Ministry of the Interior wants to expand not only the Federal Office for Information Security (BSI) and the Federal Criminal Police Office (BKA) but also the Federal Office for the Protection of the Constitution (BfV).
Business as usual!

If you're now wondering why customs and the BND aren't getting a shower of money as well: the BND reports to the Chancellery, customs to the Finance Ministry. The draft bill comes from the Interior Ministry, and in its bills it naturally hands money only to itself, not to other ministries. Because that is what this is about: money, no matter how silly the pretext.

Update: Perhaps I should also address the substance, just for completeness. Companies now have to report hacker attacks, but a) only "to the authorities", not to the public, and b)

they can do so anonymously, as long as there are no disruptions or outages
And who is supposed to check that, or even be able to say after the fact whether one of the many outages was due to a hacker attack or not?

At this point, allow me to point out briefly why the idea that "hacker attacks must be reported" was on the table in the first place: so that companies and agencies would want to avoid the embarrassment. That is supposed to create an incentive to make infrastructure robust and secure. Right now nobody really spends money on that, because in the short term it is more profitable, in business terms, to put the money into customer acquisition instead. So this draft bill reduces that central idea to complete absurdity. Originally the idea went even further: companies would also have had to notify every potentially affected customer individually, so that a company has to fear its customers walking away.

Oh, and why does this apply only to companies and not to agencies as well? And why only to critical infrastructure, not to ordinary infrastructure too? Who gets to define which infrastructure is critical and which is not, anyway?

Update: Incidentally, I think that if you really want to create an incentive here, you also have to establish a statutory right of extraordinary termination for when a company is caught being sloppy. And not just for the affected customers: for all customers. Can you imagine how keen the lock-in-contract telcos would suddenly be never to get hacked again!

August 19, 2014 07:02 PM

Portland Pattern Repository

StackOverflow

Publishing to Sonatype via SBT

I am attempting to publish a Scala library to the OSS Sonatype repository via SBT. I have followed the SBT guides for Publishing & Using Sonatype and reviewed the Sonatype requirements documentation, but cannot seem to publish my artifacts. All attempts end with java.io.IOException: Access to URL [...] was refused by the server: Forbidden. I have had the necessary repository setup done in the Sonatype JIRA system. I have created a PGP key and published it to hkp://pool.sks-keyservers.net & hkp://keyserver.ubuntu.com.

build.sbt

import play.twirl.sbt.SbtTwirl

name := "spring-mvc-twirl"

organization := "us.hexcoder"

version := "1.0.0-SNAPSHOT"

scalaVersion := "2.11.2"

sbtVersion := "0.13.5"

lazy val root = (project in file(".")).enablePlugins(SbtTwirl)

// Removed for brevity    
libraryDependencies ++= Seq()

// Test dependencies
// Removed for brevity
libraryDependencies ++= Seq()

// Publish configurations
publishMavenStyle := true

publishArtifact in Test := false

publishTo := {
    val nexus = "https://oss.sonatype.org/"
    if (isSnapshot.value)
        Some("snapshots" at nexus + "content/repositories/snapshots")
    else
        Some("releases"  at nexus + "service/local/staging/deploy/maven2")
}

licenses := Seq("MIT" -> url("http://opensource.org/licenses/MIT"))

homepage := Some(url("https://github.com/67726e/Spring-MVC-Twirl"))

credentials += Credentials(Path.userHome / ".sbt" / ".credentials")

pomIncludeRepository := { _ => false }

// Additional POM information for releases
pomExtra :=
<developers>
    <developer>
        <name>Glenn Nelson</name>
        <email>glenn@hexcoder.us</email>
    </developer>
</developers>
<scm>
    <connection>scm:git:git@github.com:67726e/Spring-MVC-Twirl.git</connection>
    <developerConnection>scm:git:git@github.com:67726e/Spring-MVC-Twirl.git</developerConnection>
    <url>git@github.com:67726e/Spring-MVC-Twirl.git</url>
</scm>

SBT Output:

> publishSigned
[info] Wrote /Users/67726e/Documents/Spring-MVC-Twirl/target/scala-2.11/spring-mvc-twirl_2.11-1.0.0-SNAPSHOT.pom
[info] :: delivering :: us.hexcoder#spring-mvc-twirl_2.11;1.0.0-SNAPSHOT :: 1.0.0-SNAPSHOT :: integration :: Tue Aug 19 09:57:13 EDT 2014
[info]  delivering ivy file to /Users/67726e/Documents/Spring-MVC-Twirl/target/scala-2.11/ivy-1.0.0-SNAPSHOT.xml
[trace] Stack trace suppressed: run last *:publishSigned for the full output.
[error] (*:publishSigned) java.io.IOException: Access to URL https://oss.sonatype.org/content/repositories/snapshots/us/hexcoder/spring-mvc-twirl_2.11/1.0.0-SNAPSHOT/spring-mvc-twirl_2.11-1.0.0-SNAPSHOT-sources.jar was refused by the server: Forbidden
[error] Total time: 5 s, completed Aug 19, 2014 9:57:18 AM
> last *:publishSigned
java.io.IOException: Access to URL https://oss.sonatype.org/content/repositories/snapshots/us/hexcoder/spring-mvc-twirl_2.11/1.0.0-SNAPSHOT/spring-mvc-twirl_2.11-1.0.0-SNAPSHOT-sources.jar was refused by the server: Forbidden
    at org.apache.ivy.util.url.AbstractURLHandler.validatePutStatusCode(AbstractURLHandler.java:79)
    at org.apache.ivy.util.url.BasicURLHandler.upload(BasicURLHandler.java:264)
    at org.apache.ivy.util.FileUtil.copy(FileUtil.java:150)
    at org.apache.ivy.plugins.repository.url.URLRepository.put(URLRepository.java:84)
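
For what it's worth, the usual culprit behind a 403 from Nexus is a credentials entry whose realm or host does not match the server. A sanity check worth trying (the user and password below are placeholders) is to spell the credentials out inline in build.sbt:

credentials += Credentials("Sonatype Nexus Repository Manager",
  "oss.sonatype.org", "<sonatype-user>", "<sonatype-password>")

If that works, the ~/.sbt/.credentials file needs the same four fields (realm, host, user, password), with exactly that realm string.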

by Glenn Nelson at August 19, 2014 06:56 PM

Scalatest 2.10 with akka.TestKit, weird compilier error

I'm using the Scala IDE for development. I have a few actors which I'm testing out. I wrote one Scala test suite with the following definition and didn't have any problems:

import org.scalatest._
import akka.testkit._
import akka.actor.ActorSystem
import org.scalatest.BeforeAndAfterAll
import org.scalatest._
import scala.concurrent.duration._
import akka.actor.Props 
import filters._

class ReaderSourceTest( _system: ActorSystem ) extends TestKit( _system ) with FunSuiteLike with BeforeAndAfterAll with ImplicitSender {
  import ReaderSource._

  //Must have a zero argument constructor
  def this() = this( ActorSystem( "ReaderSourceSuite" ) )

  override def afterAll = TestKit.shutdownActorSystem( system )

  test( "Reader should be alive as an actor" ) {
    val reader = system.actorOf( Props( classOf[ ReaderSource ], "dummy/file/name" ), "tstReaderA" )

    reader ! Ping( "Hello" )
    expectMsg( Pong( "Hello" ) )
  }
}

I then created another test file to test another actor which goes like this:

import socketclient._
import org.scalatest._
import akka.testkit._
import akka.actor.ActorSystem
import org.scalatest.BeforeAndAfterAll
import scala.concurrent.duration._
import akka.actor.Props
import org.scalatest.fixture.FunSuiteLike
import java.net.InetAddress
import org.kdawg.CommProtocol.CommMessages._
import org.kdawg.CommProtocol.CommMessages

class NetworkTest( _system: ActorSystem ) extends TestKit( _system ) with FunSuiteLike with BeforeAndAfterAll with ImplicitSender
{
  import NetworkTalker._
  def this() = this( ActorSystem( "NetworkTalkerTest") )

  override def afterAll = TestKit.shutdownActorSystem( system )
  test( "Can Send a Packet" )
  {
     val net = system.actorOf( NetworkTalker.props("10.1.0.5", 31000), "TestA" )  
     val pktBuilder = CommMessage.newBuilder
     pktBuilder.setType( MessageType.STATUS_REQUEST )
     pktBuilder.setStatusRequest( CommProtocol.CommandsProtos.StatusRequest.newBuilder() )
     val pkt = pktBuilder.build
     net ! PktSend(1, pkt)
     expectMsg( PktSent(1) ) 
  }
}

I keep getting the following error on the last line of the above class

Multiple markers at this line
    - type mismatch; found : org.kdawg.socketclient.NetworkTalker.PktSent required: NetworkTalkerTest.this.FixtureParam => 
     Any
    - type mismatch; found : org.kdawg.socketclient.NetworkTalker.PktSent required: NetworkTalkerTest.this.FixtureParam => 

Can anyone help me figure this out?
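
A guess at the cause, based on the error mentioning FixtureParam: the second file imports org.scalatest.fixture.FunSuiteLike, whose test method expects a FixtureParam => Any body, while the working suite mixes in the plain trait. A sketch of the change:

// In NetworkTest, drop the fixture variant ...
// import org.scalatest.fixture.FunSuiteLike
// ... in favor of the plain trait used by ReaderSourceTest:
import org.scalatest.FunSuiteLike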

by Kartik Aiyer at August 19, 2014 06:46 PM

How do I undo a transaction in Datomic?

I committed a transaction to Datomic accidentally and I want to "undo" the whole transaction. I know exactly which transaction it is and I can see its datoms, but I don't know how to get from there to a rolled-back transaction.

by Francis Avila at August 19, 2014 06:41 PM

/r/compsci

Chromebook for Computer Science?

Hey guys, I was wondering whether a Chromebook would be fine for the sort of work in Computer Science. I'll most likely be installing Ubuntu on it, and was just curious as I am starting in about 5 weeks.

Feedback is appreciated.

submitted by Melliano

August 19, 2014 06:39 PM

StackOverflow

Can't get javascriptRoutes to work with Play Framework 2

I'm trying to use javascriptRoutes in Play 2 (Scala) and I am getting an error (see below). Here is what I did:

Add javascriptRoutes method to Application controller

def javascriptRoutes = Action { implicit request =>
    import routes.javascript._
    Ok(Routes.javascriptRouter("jsRoutes")(Orders.searchProducts))
        .as("text/javascript")
}

Add route to routes file

GET    /assets/javascripts/routes    controllers.Application.javascriptRoutes

Add <script> import to main.scala.html

<head>
...
<script type="text/javascript" src="@routes.Application.javascriptRoutes"></script>
...
</head>

With these changes in place I am getting the following error in the JavaScript console:

GET http://localhost:9000/assets/javascripts/routes 404 (Not Found)
Uncaught ReferenceError: jsRoutes is not defined

What am I missing?

by arussinov at August 19, 2014 06:36 PM

/r/compsci

CompsciOverflow

Vehicle Routing Frameworks in Python [on hold]

Are there any frameworks like OptaPlanner or VRP in Python out there (preferably under an Apache license) which give solutions for Vehicle Routing Problems and/or the Traveling Salesman Problem with Profits?

Any pointers would be appreciated.

by RukTech at August 19, 2014 06:25 PM

StackOverflow

Why does Scala not infer the type parameters when pattern matching with @

I'm using Scala 2.10.4 with Akka 2.3.4. I ran into a problem where type inference is not behaving the way I expected.

The code below illustrates what I am experiencing. I have a case class named MyMessage which wraps messages with an id. It is parameterized with the type of the message. Then I have a payload named MyPayload which contains a String.

Within an actor (here I'm just using a regular object named MyObject since the problem isn't particular to akka) I am pattern matching and calling a function that operates on my payload type MyPayload.

package so

case class MyMessage[T](id:Long, payload:T)
case class MyPayload(s:String)

object MyObject {
  def receive:PartialFunction[Any, Unit] = {
    case m @ MyMessage(id, MyPayload(s)) =>

      // Doesn't compile
      processPayload(m)

      // Compiles
      processPayload(MyMessage(id, MyPayload(s)))
  }

  def processPayload(m:MyMessage[MyPayload]) = {
    println(m)
  }
}

For reasons I don't understand, pattern matching with @ and an unapplied case class doesn't infer the type parameter of MyMessage[T]. In the code above, I would have expected that m would have type MyMessage[MyPayload]. However, when I compile, it believes that the type is MyMessage[Any].

[error] PatternMatch.scala:9: type mismatch;
[error]  found   : so.MyMessage[Any]
[error]  required: so.MyMessage[so.MyPayload]
[error] Note: Any >: so.MyPayload, but class MyMessage is invariant in type T.
[error] You may wish to define T as -T instead. (SLS 4.5)
[error]       processPayload(m)
[error]                      ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 1 s, completed Aug 19, 2014 12:08:04 PM

Is this expected behavior? If so, what have I misunderstood about type inference in Scala?
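
A sketch of one workaround, assuming it is acceptable to rebuild the message: bind the payload with a typed pattern and reconstruct the envelope, so the fresh MyMessage(id, p) infers T = MyPayload:

def receive: PartialFunction[Any, Unit] = {
  case MyMessage(id, p: MyPayload) =>
    // The matched value itself is still typed MyMessage[Any]; the
    // rebuilt copy is MyMessage[MyPayload], which processPayload accepts.
    processPayload(MyMessage(id, p))
}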

by joescii at August 19, 2014 06:20 PM

Scheme: given a list of lists and a permutation, permute

I am practicing for my programming paradigms exam and, working through problem sets, I came to this problem. This is the first problem after reversing and joining lists recursively, so I suppose there is an elegant recursive solution.

I am given a list of lists and a permutation. I should permute every list, including the list of lists itself, with the specified permutation.

I am given an example:

->(permute '((1 2 3) (a b c) (5 6 7)) '(1 3 2))
->((1 3 2) (5 7 6) (a c b))

I don't even know how to start. I need to formulate the problem recursively to be able to solve it, but I cannot figure out how.

by user3135661 at August 19, 2014 06:07 PM

/r/emacs

Portland Pattern Repository

StackOverflow

Ansible - Define Inventory at run time

I am a little new to Ansible, so bear with me if my questions are a bit basic.

Scenario:

I have a few groups of remote hosts, such as [EPCs], [Clients] and [Testers]. I am able to configure them just the way I want them to be.

Problem:

I need to write a playbook which, when run, asks the user for the inventory at run time. As an example, when the playbook is run, the user should be prompted as follows: "Enter the number of EPCs you want to configure", "Enter the number of clients you want to configure", "Enter the number of testers you want to configure".

What should happen:

Suppose the user enters 2, 5 and 8 respectively. The playbook should then only address the first 2 nodes in the group [EPCs], the first 5 nodes in the group [Clients] and the first 8 nodes in the group [Testers]. I don't want to create a large number of sub-groups; for instance, if I have 20 EPCs, I don't want to define 20 groups for them. I want a somewhat dynamic inventory, which should automatically configure the machines according to the user input at run time, using the vars_prompt option or something similar.

Let me post part of my playbook for a better understanding of what should happen:

---
- hosts: epcs # Now this is the part where I need a lot of flexibility

  vars_prompt:
    name: "what is your name?"
    quest: "what is your quest?"

  gather_facts: no

  tasks:

  - name: Check if path exists
    stat: path=/home/khan/Desktop/tobefetched/file1.txt
    register: st

  - name: It exists
    debug: msg='Path existence verified!'
    when: st.stat.exists

  - name: It doesn't exist
    debug: msg="Path does not exist"
    when: st.stat.exists == false

  - name: Copy file2 if it exists
    fetch: src=/home/khan/Desktop/tobefetched/file2.txt dest=/home/khan/Desktop/fetched/ flat=yes
    when: st.stat.exists

  - name: Run remotescript.sh and save the output of script to output.txt on the Desktop
    shell: cd /home/imran/Desktop; ./remotescript.sh > output.txt

  - name: Find and replace a word in a file placed on the remote node using variables
    shell: cd /home/imran/Desktop/tobefetched; sed -i 's/{{name}}/{{quest}}/g' file1.txt
    tags:
      - replace

@gli I tried your solution. I have a group in my inventory named test with two nodes in it. When I enter 0..1 I get:

TASK: [echo sequence] ********************************************************* 
changed: [vm2] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix1)

Similarly when I enter 1..2 I get:

TASK: [echo sequence] ********************************************************* 
changed: [vm2] => (item=some_prefix1)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix2)
changed: [vm1] => (item=some_prefix2)

Likewise, when I enter 4..5 (nodes not even present in the inventory), I get:

TASK: [echo sequence] ********************************************************* 
changed: [vm1] => (item=some_prefix4)
changed: [vm2] => (item=some_prefix4)
changed: [vm1] => (item=some_prefix5)
changed: [vm2] => (item=some_prefix5)

Any help would be really appreciated. Thanks!

by Khan at August 19, 2014 05:56 PM

/r/netsec

StackOverflow

How to determine type of Seq[A] without Reflection API [duplicate]

Is it possible to determine the element type of a Seq[A] in Scala 2.11.2?

For example:

val fruit = List("apples", "oranges", "pears")
val nums = List(1, 2, 3, 4) 

I want to print the type of the Seq, something like this:

scala> def printType[A](xs: Seq[A]): Unit = xs match {
     | case x: List[String] => println("String")
     | case y: List[Int] => println("Int")
     | }

<console>:8: warning: non-variable type argument String in type pattern List[Str
ing] (the underlying of List[String]) is unchecked since it is eliminated by era
sure
       case x: List[String] => println("String")
               ^
<console>:9: warning: non-variable type argument Int in type pattern List[Int] (
the underlying of List[Int]) is unchecked since it is eliminated by erasure
       case y: List[Int] => println("Int")
               ^
<console>:9: warning: unreachable code
       case y: List[Int] => println("Int")
                                   ^
printType: [A](xs: Seq[A])Unit

p.s. I'm new in Scala.

UPDATE:

Is there a solution without using the Reflection API?
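
Without scala.reflect, erasure leaves only the runtime classes of the elements to inspect; a sketch of that approach (reliable only for non-empty, homogeneous sequences):

def printType[A](xs: Seq[A]): Unit = xs.headOption match {
  case Some(_: Int)    => println("Int")
  case Some(_: String) => println("String")
  case Some(other)     => println(other.getClass.getSimpleName)
  case None            => println("empty: element type is unrecoverable at runtime")
}

printType(List("apples", "oranges", "pears")) // String
printType(List(1, 2, 3, 4))                   // Int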

by Ir3000 at August 19, 2014 05:39 PM

Apply a list of parameters to a list of functions

I have a list of parameters like List(1,2,3,"abc","c") and a set of functions which validate the data present in the list, like isNumberEven, isAValidString, etc.

Currently, I take each value of the list and apply the proper function to validate the data, like isNumberEven(params(0)). This has led to big and messy code which is completely imperative in style.

I am expecting that it should be possible to do something like this in Scala -

List(1,2,3,"abc","c").zip(List(fuctions)).foreach{ x => x._2(x._1)}

However, this fails with a type mismatch error:

error: type mismatch; found : x._1.type (with underlying type Any) required: Int with String

I tried pattern matching on Function traits but it fails due to type erasure.

Any pointers will be appreciated as how can this be solved.
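
One sketch that makes the zip idea typecheck: widen each validator to Any => Boolean with a pattern-matching literal (the bodies of isNumberEven and isAValidString below are illustrative):

val isNumberEven: Any => Boolean = {
  case n: Int => n % 2 == 0
  case _      => false
}
val isAValidString: Any => Boolean = {
  case s: String => s.nonEmpty
  case _         => false
}

val params = List[Any](1, 2, 3, "abc", "c")
val checks = List(isNumberEven, isNumberEven, isNumberEven, isAValidString, isAValidString)
val results = params.zip(checks).map { case (value, check) => check(value) }
// results: List(false, true, false, true, true)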

by user1908093 at August 19, 2014 05:36 PM

Undeadly

Heads up: rcctl(8) the rc.conf.local management tool landing in base soon

Antoine Jacoutot (ajacoutot@) has just committed a tool for managing rc.conf.local(8), in order to make it simpler for automated management systems such as Puppet or Ansible to interface with the operating system configuration:

CVSROOT:	/cvs
Module name:	src
Changes by:	ajacoutot@cvs.openbsd.org	2014/08/19 08:08:20

Added files:
	usr.sbin/rcctl : Makefile rcctl.8 rcctl.sh 

Log message:
Introduce rcctl(8), a simple utility for maintaining rc.conf.local(8).

# rcctl
usage: rcctl enable|disable|status|action [service [flags [...]]]

Lots of man page improvement from the usual suspects (jmc@ and schwarze@)
not hooked up yet but committing now so work can continue in-tree
agreed by several

August 19, 2014 05:34 PM

StackOverflow

seq to vec conversion - Key must be integer

I want to get the indices of nil elements in a vector, e.g. [1 nil 3 nil nil 4 3 nil] => [1 3 4 7]

(defn nil-indices [vec]
  (vec (remove nil? (map
    #(if (= (second %) nil) (first %))
      (partition-all 2 (interleave (range (count vec)) vec)))))
  )

Running this code results in

java.lang.IllegalArgumentException: Key must be integer (NO_SOURCE_FILE:0)

If I leave out the (vec) call surrounding everything, it seems to work, but returns a sequence instead of a vector.

Thank you!

by Simbi at August 19, 2014 05:34 PM

TheoryOverflow

Example of a $U^\omega$ that is not Deterministic Büchi recognizable

Is there a regular language $U$, for which $U^\omega$ is not a Deterministic Büchi recognizable language. I have been thinking over it for some time, but have been unable to come up with an example.

by Miheer at August 19, 2014 05:21 PM

StackOverflow

How does orElse work on PartialFunctions

I am getting very bizarre behavior (at least it seems to me) with the orElse method defined on PartialFunction

It would seem to me that:

val a = PartialFunction[String, Unit] {
    case "hello" => println("Bye")
}
val b: PartialFunction[Any, Unit] = a.orElse(PartialFunction.empty[Any, Unit])
a("hello") // "Bye"
a("bogus") // MatchError
b("bogus") // Nothing
b(true)    // Nothing

makes sense, but this is not how it behaves, and I am having a lot of trouble understanding why, as the type signatures seem to indicate the behavior I described above.

Here is a transcript of what I am observing with Scala 2.11.2:

Welcome to Scala version 2.11.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_11).
Type in expressions to have them evaluated.
Type :help for more information.

scala> val a = PartialFunction[String, Unit] {
     | case "hello" => println("Bye")
     | }
a: PartialFunction[String,Unit] = <function1>

scala> a("hello")
Bye

scala> a("bye")
scala.MatchError: bye (of class java.lang.String)
  at $anonfun$1.apply(<console>:7)
  at $anonfun$1.apply(<console>:7)
  at scala.PartialFunction$$anonfun$apply$1.applyOrElse(PartialFunction.scala:242)
  at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
  ... 33 elided

scala> val b = a.orElse(PartialFunction.empty[Any, Unit])
b: PartialFunction[String,Unit] = <function1>

scala> b("sdf")
scala.MatchError: sdf (of class java.lang.String)
  at $anonfun$1.apply(<console>:7)
  at $anonfun$1.apply(<console>:7)
  at scala.PartialFunction$$anonfun$apply$1.applyOrElse(PartialFunction.scala:242)
  at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
  ... 33 elided

Note the type of val b, which has not widened the input type of the PartialFunction.

But this also does not work as expected:

scala> val c = a.orElse(PartialFunction.empty[String, Unit])
c: PartialFunction[String,Unit] = <function1>

scala> c("sdfsdf")
scala.MatchError: sdfsdf (of class java.lang.String)
  at $anonfun$1.apply(<console>:7)
  at $anonfun$1.apply(<console>:7)
  at scala.PartialFunction$$anonfun$apply$1.applyOrElse(PartialFunction.scala:242)
  at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
  ... 33 elided
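
For what it's worth, a sketch of what appears to be going on: PartialFunction[String, Unit] { ... } is a call to the PartialFunction.apply factory, which wraps the (non-exhaustive) function literal as if it were total, so isDefinedAt is always true and orElse never falls through. Ascribing the type instead lets the compiler synthesize a real isDefinedAt (and note that orElse can only narrow the input type, which is why b never widened to Any):

val a: PartialFunction[String, Unit] = {
  case "hello" => println("Bye")
}
val b = a.orElse(PartialFunction.empty[String, Unit])

b("hello") // Bye
b("bogus") // does nothing: falls through to the empty PartialFunction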

by UndercoverAgent at August 19, 2014 05:20 PM

/r/netsec

StackOverflow

Optional params with Play 2 and Swagger

I'm trying to use Swagger to document a Play 2 REST API but swagger-play2 doesn't seem to understand optional parameters defined with Scala's Option type - the normal way to make a param optional in Play 2:

GET /documents controllers.DocumentController.getDocuments(q: Option[String])

I want the q param to be optional. There is a matching annotated controller method with this Option[String] param. On startup I'm getting UNKNOWN TYPE in the log, and the JSON produced by api-docs breaks swagger-ui:

UNKNOWN TYPE: scala.Option
[info] play - Application started (Dev)

Is there another way to specify an optional parameter in Play 2 and have Swagger understand it?
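
A workaround sketch, at the cost of dropping Option from the signature: give the parameter a ?= default in the routes file, which Play treats as optional (the empty-string default is an assumption; swagger-play2 copes with plain types better than with scala.Option):

GET /documents controllers.DocumentController.getDocuments(q: String ?= "")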

by Tom Wadley at August 19, 2014 05:17 PM

/r/scala

Beginner question: how do you use Scala documentation?

So I am trying to teach myself Scala, and one of the first things I wanted to do is read a text file, preferably line by line. There is nothing about files in the Scala Getting Started pages or in Guides and Overviews or in the Tutorials. So I try googling for it, and the top post is a forum question (http://www.scala-lang.org/old/node/5415) where they recommend scala.io.Source.fromPath("filename"). This fails: value fromPath is not a member of object scala.io.Source.

At this point I'm a little annoyed, but I try once again to go to the docs, this time the API docs for scala.io.Source: http://www.scala-lang.org/api/current/index.html#scala.io.Source . There is a long list of methods, but looking at it I can't seem to find any constructors, and there's certainly no fromPath function.

Internet failing me, I pick up Programming in Scala by Martin Odersky, and sure enough, early on he recommends "Source.fromFile", which works just fine. But Source.fromFile isn't on the API page either! More searching, and I realize googling for "scala io source" brings up the Source class, not the Source object, which is what I needed. There is no link to the Source object from the Source class page, at least that I could see.

So I've come away feeling like I shouldn't waste my time trying to slog through Scala's official documentation for anything. Is there a point of Scala proficiency where navigating the API becomes easier? Is googling and searching through Stack Overflow the best way to find a function that does what I want?

submitted by emeraldemon

August 19, 2014 05:14 PM